Aakash Gupta - Complete Course: AI Product Design
Aakash Gupta and Elizabeth Laraki on practical frameworks for designing AI products beyond chat interfaces.
In this episode, Aakash Gupta and Elizabeth Laraki explore practical frameworks for designing AI products beyond chat interfaces. A central takeaway: AI features should be integrated into existing user workflows (like Search) while clearly managing nondeterminism and hallucination risk.
At a glance
WHAT IT’S REALLY ABOUT
Practical frameworks for designing AI products beyond chat interfaces
- AI features should be integrated into existing user workflows (like Search) while clearly managing nondeterminism and hallucination risk.
- Chat-based UX is inherently linear and often mismatched to tasks that require stable artifacts, visual context, or iterative co-creation on a canvas.
- AI product design requires safeguards across model training/evals and interface design, with explicit human-in-the-loop review for sensitive outputs.
- Great AI products “bake AI into the cake” by embedding it across core jobs-to-be-done (e.g., transcript editing, filler word removal, clip creation).
- Strong product outcomes still start with classic design fundamentals: define the product, design the experience/architecture, then build and iterate using user research.
IDEAS WORTH REMEMBERING
7 ideas
Treat AI outputs as probabilistic and design for uncertainty.
Laraki and Gupta emphasize that hallucinations are normal across LLMs, so products should gate when to show answers, communicate confidence, and provide verification paths rather than presenting every output as fact.
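The gating pattern described here can be sketched as a minimal rule. This is a hypothetical illustration, not anything from the episode: the `ModelAnswer` fields, the 0.7 threshold, and the returned dictionary shape are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str
    confidence: float     # assumed 0.0-1.0 score attached to the output
    citations: list[str]  # verification paths to surface alongside the answer

def present(answer: ModelAnswer, show_threshold: float = 0.7) -> dict:
    """Gate display on confidence instead of asserting every output as fact."""
    if answer.confidence < show_threshold:
        # Below threshold: withhold the AI answer rather than risk a
        # confidently worded hallucination; fall back to ordinary results.
        return {"show": False,
                "fallback": "No confident AI answer; showing regular results."}
    return {
        "show": True,
        "text": answer.text,
        "label": f"AI-generated (confidence {answer.confidence:.0%})",
        "verify": answer.citations,  # let users check the sources themselves
    }
```

The design choice is that uncertainty changes *whether* the answer appears at all, not just how it is labeled, which matches the "only show when confident" gating raised later in the questions section.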
Embed AI into real user jobs, not as a superficial add-on.
Tools like Descript feel powerful because AI supports each step of the workflow (edit via transcript, remove filler words, generate clips/titles) instead of being a single flashy feature bolted on top.
Move beyond linear chat when the task needs stable structure or visual grounding.
For tasks like travel itineraries or physical troubleshooting, a canvas or artifact-centered UI (image/video stays central; chat becomes a tool around it) better supports iteration, reference, and user control.
Pair model-side safeguards with UI-side review mechanisms.
The image expander incident shows that “reasonable workflows” can yield harmful results; mitigation includes training/evals plus UI that clearly marks AI-generated regions and prompts human review before publishing.
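The UI-side half of that mitigation can be sketched as data the interface would need: which regions of a hybrid image are AI-generated, and a publish gate that requires human review whenever any exist. The types and function names below are illustrative assumptions, not Laraki's or any product's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    bbox: tuple[int, int, int, int]  # x, y, width, height within the image
    ai_generated: bool               # True for expander-filled pixels

@dataclass
class ExpandedImage:
    regions: list[Region] = field(default_factory=list)
    reviewed: bool = False           # set once a human approves the AI fill

def ai_regions(img: ExpandedImage) -> list[Region]:
    """Regions the UI should visibly mark as AI-generated."""
    return [r for r in img.regions if r.ai_generated]

def can_publish(img: ExpandedImage) -> bool:
    """Block publishing until a human has reviewed any AI-generated fill."""
    return img.reviewed or not ai_regions(img)
```

Keeping the AI-generated flag per region (rather than per image) is what lets the interface draw an overlay on exactly the expanded area, so reviewers scrutinize the part the model invented.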
Design voice experiences around context, turn-taking, and how people actually consume information.
ChatGPT voice in a car works because it feels like a natural participant, while reading an entire menu aloud (Meta glasses) fails because it ignores human scanning behavior and conversational preferences.
Start every ‘AI redesign’ with product definition, not pixels.
In the LinkedIn-for-AI exercise, Laraki begins by clarifying objectives (matchmaking vs certification vs content/networking), identifying user groups, and mapping the marketplace “magic in the middle” before UI details.
Simplicity is a product strategy you must continuously defend.
Laraki praises Search/early Maps for clarity and criticizes modern Maps clutter, arguing products accumulate features until they require deliberate “purge” cycles back to core use cases.
WORDS WORTH SAVING
5 quotes
There are safeguards with which, as users, we need to think about when assuming any answer.
— Elizabeth Laraki
It felt like I walked into a bike shop and got the least helpful bike mechanic I could possibly find.
— Elizabeth Laraki
AI can have very unintended consequences, and as people using these tools, we need to have a heightened level of scrutiny.
— Elizabeth Laraki
The goal is really to emerge from ambiguity with a clear sense of what you're building and for whom.
— Elizabeth Laraki
We went from three tabs with a total of five different search boxes to one single search box.
— Elizabeth Laraki
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
On Google Search AI summaries, what specific UX patterns best communicate uncertainty without reducing usefulness (confidence labels, citations, ‘only show when confident’ gating, or something else)?
AI features should be integrated into existing user workflows (like Search) while clearly managing nondeterminism and hallucination risk.
For image/video-centered AI help (like the bike-seat example), what would an ideal interface look like—annotation overlays, step-by-step cards, live camera mode, or multimodal timeline?
Chat-based UX is inherently linear and often mismatched to tasks that require stable artifacts, visual context, or iterative co-creation on a canvas.
In the image expander incident, what concrete product requirements would you write to prevent sexualized or sensitive hallucinations, and how would you test them with evals?
AI product design requires safeguards across model training/evals and interface design, with explicit human-in-the-loop review for sensitive outputs.
What is the most effective UI technique you’ve seen for clearly distinguishing “original” versus “AI-generated” regions in hybrid media outputs?
Great AI products “bake AI into the cake” by embedding it across core jobs-to-be-done (e.g., transcript editing, filler word removal, clip creation).
For voice interfaces, how do you design interruption, clarification, and summarization so it feels conversational but still efficient (especially in driving contexts)?
Strong product outcomes still start with classic design fundamentals: define the product, design the experience/architecture, then build and iterate using user research.