Adobe Podcast · AI Audio
I lead product for Adobe Podcast, developing AI tools that enhance spoken audio. I work closely with research and engineering to turn audio models into tools like Enhance Speech, which improves clarity by reducing noise, and Studio, which enables multi-language speech-to-text and makes it easier to edit audio like a document.
Adobe Firefly · AI Imaging & Design
Previously, I worked on Adobe Firefly, Adobe's family of generative AI models for imaging. I partnered with research, engineering, and ethics teams on evals to refine model quality, and built features like composition reference, which helps users guide creation by unlocking the model's capabilities in intuitive ways.
AI Projects · LLMs, Models & Generative AI
Below is a mix of my work in the space, from building AI products at Adobe to personal tools and prototypes that explore LLMs, models, AI coding IDEs, and agents. It reflects a journey that blends equal parts code, craft, and product, shaped by my engineering background, design taste, and product expertise.
I led product for image generation in Firefly Image Model 3, partnering with research and engineering on evals and model refinement focused on improving human portrayal, diverse styles, and prompt coherence. I also launched features like generation history, stylekits, and Structure Reference, which lets users guide AI generations with a visual reference.
I lead product for Adobe Podcast, developing AI tools that enhance speech. I work closely with research and engineering to turn audio models into tools like Enhance Speech, which improves clarity by reducing noise, and Studio, which enables multi-language speech-to-text and makes it easier to record and edit speech like a document.
A personal toolbox I built to speed up audio model evaluation while working on Adobe Podcast. Built with Claude and the Web Audio API, it automates time-consuming tasks like comparing original and enhanced audio generations, iterating on audio stem mixing approaches, and evaluating speech-to-text output.
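The heart of the comparison workflow is a synchronized A/B toggle: decode both takes, start them at the same moment so they stay time-aligned, and flip gain nodes to switch which one is audible. Here is a minimal sketch built on the standard Web Audio API; the function names and URLs are illustrative, not the toolbox's actual code.

```ts
// Hypothetical sketch of an A/B comparison player for original vs. enhanced audio.
async function loadBuffer(ctx: AudioContext, url: string): Promise<AudioBuffer> {
  const res = await fetch(url);
  return ctx.decodeAudioData(await res.arrayBuffer());
}

async function abCompare(originalUrl: string, enhancedUrl: string) {
  const ctx = new AudioContext();
  const buffers = await Promise.all([
    loadBuffer(ctx, originalUrl),
    loadBuffer(ctx, enhancedUrl),
  ]);

  // One gain node per take; mute the enhanced take to start.
  const gains = [ctx.createGain(), ctx.createGain()];
  gains[1].gain.value = 0;

  // Start both sources at the same scheduled time so they stay aligned,
  // then flip the gains to switch which take is audible.
  buffers.forEach((buffer, i) => {
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(gains[i]).connect(ctx.destination);
    src.start(ctx.currentTime + 0.1);
  });

  let active = 0; // 0 = original, 1 = enhanced
  return () => {
    active = 1 - active;
    gains[active].gain.value = 1;
    gains[1 - active].gain.value = 0;
  };
}
```

Keeping both sources playing and only switching gains avoids the seek-and-resync dance you get when pausing one player and starting another.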
I built the Podcast Digest prototype using the Anthropic API and Cursor to transform episode transcripts into structured, digestible episode summaries, chapters, speaker introductions, key takeaways, and content for social sharing. I learned a lot about prompt engineering and working with LLMs, all while developing faster in an AI-powered IDE.
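The shape of the core call is straightforward with the Anthropic TypeScript SDK: send the transcript along with a prompt asking for a structured digest. A minimal sketch, assuming a hypothetical digestEpisode helper; the prompt wording and model choice here are illustrative, not the prototype's exact prompts.

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical helper: turn a raw transcript into a structured digest.
async function digestEpisode(transcript: string): Promise<string> {
  const message = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 2048,
    messages: [
      {
        role: "user",
        content:
          "Summarize this podcast transcript as JSON with keys: " +
          "summary, chapters (title plus opening quote), speakers, " +
          "takeaways, and socialPosts.\n\n" + transcript,
      },
    ],
  });
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}
```

Asking for one JSON object with named keys made the output easy to render as separate sections rather than parsing free-form prose.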
A playful site built with Perplexity's Sonar API using V0 and Cursor, designed to help you find the perfect cocktail bar based on location and spirit of choice. I initially prototyped the experience in V0, then designed the interface and refined the implementation in Cursor. Another learning experience in building with a search model and seeing how it contrasts with prompting an LLM.
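Sonar exposes an OpenAI-compatible chat completions endpoint, so a plain fetch is enough. A minimal server-side sketch of the search call behind the bar finder; the function name and prompt wording are illustrative.

```ts
// Hypothetical sketch: ask Sonar (a search-grounded model) for bar picks.
async function findBars(city: string, spirit: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [
        {
          role: "user",
          content: `Recommend three cocktail bars in ${city} known for ${spirit}, and say why each is worth visiting.`,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the answer is grounded in live web results, the prompt reads more like retrieval constraints and less like the context-setting you'd write for a plain LLM.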
A prototype web app exploring how we might edit prompts using design paradigms like resizing a prompt, adding semantic layers, and applying boolean operations. Built with Cursor and Anthropic's API and inspired by design tools, it's a sandbox for mixing text with visual editing metaphors. A fun experiment to see how quickly I could turn an idea into reality.
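One way to make those paradigms concrete is to model a prompt as a stack of semantic layers with design-style operations on top. A purely illustrative sketch of that idea; every name and the weight-to-text mapping here are hypothetical, not the prototype's actual implementation.

```ts
// Hypothetical model: a prompt as a stack of layers, like a design document.
interface PromptLayer {
  id: string;
  text: string;     // e.g. "golden hour lighting"
  weight: number;   // "resizing" a layer scales its emphasis, 0..1
  visible: boolean; // layers can be toggled like in a design tool
}

// "Resize" a layer: nudge its emphasis up or down, clamped to 0..1.
const resize = (layer: PromptLayer, delta: number): PromptLayer => ({
  ...layer,
  weight: Math.min(1, Math.max(0, layer.weight + delta)),
});

// Boolean subtract: invert a layer into a negative instruction
// instead of deleting it outright.
const subtract = (layer: PromptLayer): PromptLayer => ({
  ...layer,
  text: `avoid: ${layer.text}`,
});

// Flatten the visible layers into the final prompt string for the model.
const flatten = (layers: PromptLayer[]): string =>
  layers
    .filter((l) => l.visible)
    .map((l) => (l.weight >= 0.7 ? `${l.text}, strongly emphasized` : l.text))
    .join(", ");
```

Treating the prompt as data rather than a string is what lets the UI offer resize handles, layer toggles, and boolean operations instead of a bare text box.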