Aakash Gupta

Complete Course: AI Product Design

Elizabeth Laraki reveals how to design AI products users actually love. She breaks down the 3-phase design framework from Google, shares the shocking AI image expander story, and shows why everyone's adding chat interfaces wrong.

---

Full Writeup: https://www.news.aakashg.com/p/elizabeth-laraki-podcast
Transcript: https://www.aakashg.com/how-to-design-ai-products-complete-masterclass-with-elizabeth-laraki/

---

Timestamps:
00:00:00 - Intro
00:01:52 - Elizabeth's background at Google
00:04:19 - Google's AI search integration
00:06:19 - Designing image & video for AI
00:09:44 - AI image expander disaster
00:16:05 - Ads
00:17:50 - AI safeguards & human-in-the-loop
00:18:28 - 3-step AI design process
00:31:29 - Ads
00:33:25 - Designing AI voice interfaces
00:38:25 - Designing beyond chat
00:41:52 - AI design tools for designers
00:44:49 - Live design: LinkedIn for AI
00:57:04 - Google Maps redesign story
01:04:14 - Google Maps India landmarks
01:10:09 - Where to find Elizabeth
01:12:00 - Outro

---

Thanks to our sponsors:
1. Vanta: Automate compliance, manage risk, and prove trust - http://vanta.com/aakash
2. Kameleoon: Leading AI experimentation platform - http://www.kameleoon.com/
3. The AI PM Certificate: Get $550 off with ‘AAKASH550C7’ - https://maven.com/product-faculty/ai-product-management-certification?promoCode=AAKASH550C7
4. The AI Evals Course for PMs: Get $1155 off with code ‘ag-evals’ - https://maven.com/parlance-labs/evals?promoCode=ag-evlas

---

Key takeaways:

1. The Core Design Process Hasn't Changed: Define the product (who, what tasks, what needs), design it (features, architecture, flows), then build it (UIs, brand). Don't skip to "let's add a chatbot" just because you have API access.

2. AI Adds Non-Deterministic Risk: Traditional software is deterministic - click A, get B every time. AI is non-deterministic, with unpredictable outputs. Elizabeth's image expander added a bra strap that wasn't in the original photo. Completely unintentional, completely unacceptable.

3. Work With Research on Safeguards: Audit training data for bias. Build evals that flag sensitive content (human bodies, faces, private information). Show A/B options for ambiguous cases. Make AI's work visible in the UI so users can scrutinize changes.

4. Start With Jobs To Be Done: Don't ask "We have GPT-4, what should we build?" Ask "What painful workflow takes users hours?" Descript mapped the video editing lifecycle and baked AI into each job: remove filler words, edit from the transcript, create clips, write titles.

5. Map User Context, Not Just Needs: ChatGPT voice in the car with three kids? Perfect - nobody's looking at a screen. Meta Ray-Bans reading a Spanish menu item by item? Terrible - it should ask "What are you in the mood for?" Same AI; a different context requires a different design.

6. Emerge From Ambiguity First: For "LinkedIn for AI," Elizabeth mapped 4 possible directions, picked matchmaking, identified AI's unlock (personality patterns vs keyword matching), and mapped separate UIs for job seekers and employers. Only then touch pixels.

7. Chat Fails for Complex Tasks: Elizabeth tried creating a Madrid itinerary in ChatGPT. Every change regenerated everything, with new hallucinations. Chat works for Q&A but fails for document creation, visual tasks, and multi-step workflows that need persistent, editable outputs.

8. Make Chat Supporting, Not Primary: Photoshop embeds AI in existing canvas tools. Google Search shows AI summaries inline in normal results. Cove gives you a canvas with multiple AI conversations in parallel. Chat is a tool, not THE interface.

9. Stop Adding AI Sprinkles: Elizabeth: "I can't help but think of this massive container of AI sprinkles everybody's shoving on top." Twitter/X + Grok, Amazon + Rufus, and Apple Photos all feel forced. Ask three questions: Is this solving a real problem? Does chat make sense? Can you show your work?

10. Google Maps India Innovation: Researched how Indians actually navigate (by landmarks, not street names). Identified which landmarks work (visible from street level, like temples and petrol stations). Redesigned the entire directions system around that insight. That's design, whether AI or not.

---

Where to find Elizabeth:
Twitter: https://twitter.com/elizlaraki
LinkedIn: https://linkedin.com/in/elizlaraki
Substack: https://elizlaraki.substack.com

---

Where to find Aakash:
Twitter: twitter.com/aakashg0
LinkedIn: linkedin.com/in/aagupta/
Newsletter: news.aakashg.com

#aidesign #productdesign #aiproducts

---

About Product Growth: The world's largest podcast focused solely on product + growth, with over 187K listeners. Hosted by Aakash Gupta, who spent 16 years in PM, rising to VP of product, this 2x/week show covers product and growth topics in depth. Subscribe and turn on notifications to get more videos like this.

Host: Aakash Gupta · Guest: Elizabeth Laraki
Oct 7, 2025 · 1h 12m

CHAPTERS

  1. Why AI product design is hard: moving beyond chat UX

    Aakash introduces Elizabeth Laraki’s background on Google Search and Maps and frames the core challenge: AI features are non-deterministic and shouldn’t default to a linear chat interface. The episode’s goal is to show practical design thinking, pitfalls, and real workflows for building AI-powered products.

  2. How Google Search evolved without constant redesign

    Elizabeth explains how early Google Search work focused on integrating new content types (images, video, maps) while preserving a familiar structure. The conversation highlights how strong products often improve through nuanced, continuous iteration rather than visual overhauls.

  3. Designing AI inside Google Search: benefits, backlash, and hallucinations

    They discuss Google’s AI search integration as a strategic response to ChatGPT’s rise. Elizabeth is positive on embedding AI where users already are, while Aakash emphasizes confidence thresholds, evals, and when not to show AI answers.

  4. Rethinking image/video answers: from linear chat to “show me” interfaces

    Elizabeth critiques the limitations of linear chat when solving visual, hands-on problems (e.g., adjusting a bike seat). They propose interfaces where the image/video stays central and the conversation becomes a supporting layer, enabling more effective guidance.

  5. AI image expander disaster: unintended consequences in real workflows

    Elizabeth shares a cautionary story where an image expansion tool generated inappropriate fabricated details during a conference promo workflow. The incident illustrates how seemingly reasonable human+AI pipelines can produce reputational and safety risks.

  6. Safeguards and human-in-the-loop: model, evals, and UI transparency

    They discuss practical mitigation: improving training data, adding evals for sensitive cases, and designing UI that clearly distinguishes original vs AI-generated regions. The key is combining backend guardrails with frontend clarity so humans can effectively review.

  7. A 3-step process for designing AI features: define → design → build

    Elizabeth outlines a simple but rigorous product approach: clarify what you’re building, design the experience, then build with the right constraints and validations. This frames the rest of the discussion about what “good” AI products get right.

  8. What well-designed AI products do right: ChatGPT, Descript/Riverside, Midjourney

    Elizabeth reviews AI products she finds highly useful and why: ChatGPT’s simplicity and layered power, Descript/Riverside’s workflow-level assistance, and Midjourney’s strong output despite UX tradeoffs. Aakash emphasizes “baked in” AI—supporting end-to-end jobs rather than bolted-on features.

  9. Designing AI voice experiences: context-first, conversational, not a screen reader

    They explore voice UX through examples: ChatGPT voice in the car feels like “another person,” Limitless enables conversation summaries and coaching, and Meta Ray-Bans reveal pitfalls when voice behaves like a literal screen reader. The chapter emphasizes designing for context and dialogue control.

  10. Designing beyond chat: canvases, co-creation, and deterministic outputs

    Elizabeth explains why chat struggles for tasks requiring stable artifacts (e.g., travel itineraries). The future is interfaces where AI is a tool inside a more structured workspace—like a canvas—so users can co-create, edit, and maintain control.

  11. AI design tools for designers: useful at the edges, taste still matters

    Elizabeth is skeptical that AI design tools can meet high-quality expectations for experienced designers, though they can accelerate drafts and support non-designers. They discuss using AI for specs, first-pass layouts, and prototyping—while relying on human taste for final quality.

  12. Live design exercise: decomposing “LinkedIn for AI” into a real product strategy

    In a live session, Elizabeth demonstrates how to tackle an ambiguous prompt by clarifying the objective and exploring product directions (matchmaking, certification/training, networking, content). They zoom into matchmaking and map the marketplace, attributes, onboarding, and the “AI magic” in the middle.

  13. Google Maps redesign: from tab clutter to a single search box architecture

    Elizabeth recounts how early Maps became cluttered with multiple tabs and search boxes for different tasks. The redesign consolidated everything into a single search box, informed by a clear information architecture, and was controversial at the time because directions were Maps' core value.

  14. Launching Google Maps directions in India: landmark-based navigation through research

    They close with a landmark case study: turn-by-turn directions didn't work well in India because of local navigation norms, missing street names, and pre-smartphone constraints. In-country research led to landmark-based directions that help users verify they're on track and navigate by culturally familiar cues.
