How I AI

The secret to better AI prototypes: Why Tinder's CPO starts with JSON, not design | Ravi Mehta

Ravi Mehta, now a product advisor, has built and scaled products used by millions. His past roles include Chief Product Officer at Tinder, Entrepreneur in Residence at Reforge, and senior product leadership positions at Facebook, TripAdvisor, and Xbox. In this episode, Ravi demonstrates his data-driven approach to AI prototyping that produces dramatically better results than traditional "vibe prototyping." He also shares his structured framework for generating professional-quality images in Midjourney that look like they were shot by a professional photographer.

*What you'll learn:*

1. Why most product managers and designers are "vibe prototyping" with AI and getting mediocre results
2. How to use JSON data models instead of design systems as the foundation for better AI prototypes
3. A simple three-part framework for structuring Midjourney prompts to get professional-quality photos
4. How to use Claude and Unsplash's MCP server to generate realistic data and images for your prototypes
5. Why real data (not Lorem Ipsum) is critical for getting meaningful feedback from stakeholders
6. The film stock "cheat code" that instantly elevates your AI-generated photos

*Brought to you by:*

• Google Gemini, your everyday AI assistant: https://ai.dev/
• Persona, trusted identity verification for any use case: https://withpersona.com/lp/howiai

*Where to find Ravi Mehta:*

• Website: https://www.ravi-mehta.com/
• Reforge: https://www.reforge.com/profiles/ravi-mehta
• LinkedIn: https://www.linkedin.com/in/ravimehta/
• X: https://x.com/ravi_mehta

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Ravi and data-driven prototyping
(02:31) The problem with "vibe prototyping" in product development
(04:18) Spec-driven prototyping vs. data-driven prototyping
(05:27) Demo: Spec-driven approach to prototyping
(08:26) Limitations of the basic AI prototype approach
(11:24) The data-driven prototyping approach explained
(12:08) Demo: Data-driven prototyping
(17:45) Creating a prototype with the generated JSON data
(23:33) Comparing the quality difference between approaches
(26:44) Modifying the prototype
(28:53) Benefits of this approach
(34:40) Structured Midjourney prompting
(36:20) The subject-setting-style framework for better image prompts
(44:27) Using camera metadata to refine your results
(48:54) Lightning round and final thoughts

*Tools referenced:*

• Claude: https://claude.ai/
• Reforge Build: https://www.reforge.com/build
• Midjourney: https://www.midjourney.com/
• Unsplash MCP: https://github.com/okooo5km/unsplash-mcp-server-go

*Other references:*

• Reforge AI Strategy Course: https://www.reforge.com/courses/ai-strategy

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo (host) · Ravi Mehta (guest)
Sep 29, 2025 · 54m · Watch on YouTube ↗

CHAPTERS

  1. Why most AI prototypes disappoint: shifting from “vibes” to structure

    Claire frames the core problem: AI prototyping tools generate impressive demos, but often miss the specific product experience teams need. Ravi introduces “data-driven prototyping” as a way to improve fidelity by giving models better inputs—starting with data, not pixels.

  2. Two common approaches and what’s missing: spec-driven vs design-driven prototyping

    Ravi contrasts today’s defaults: long, detailed prompts (spec-driven) or uploading Figma/wireframes (design-driven). He explains that in real product development, engineering quickly anchors ambiguity by defining a data schema—so prototypes should start there too.

  3. Demo setup: a “vibe prototyping” prompt for a shared Paris trip planner

    Using Reforge Build, Ravi enters a minimal prompt to create a multiplayer Paris itinerary site with profiles and comments. The tool’s follow-up questions highlight how much ambiguity is packed into a single prompt.

  4. What the spec-driven prototype gets wrong: hallucinations and low-fidelity content

    They review the generated prototype: decent structure and components, but broken images, wrong visuals, and “average” quality. The biggest issues come from weak or fabricated data/media that undermines trust and realism.

  5. Data-driven prototyping: generate the dataset first (JSON as the spec)

Ravi shows the pivot: prompt an LLM to generate a structured JSON dataset that mirrors a realistic schema—travelers, items, timestamps, ratings, tags, and threaded notes. This acts as an explicit, testable "spec" that the prototyping tool can reliably build around.
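To make the idea concrete, here is a minimal sketch of what such a JSON dataset might look like. The field names, values, and URLs below are illustrative assumptions, not Ravi's exact schema from the episode:

```python
import json

# Sketch of a data-driven prototyping dataset: the JSON itself acts as the
# spec. Field names and placeholder URLs are assumptions for illustration.
itinerary = {
    "trip": {"title": "Paris Long Weekend", "city": "Paris", "days": 3},
    "travelers": [
        {"id": "t1", "name": "Maya Chen", "avatar_url": "https://example.com/maya.jpg"},
        {"id": "t2", "name": "Jonas Weber", "avatar_url": "https://example.com/jonas.jpg"},
    ],
    "items": [
        {
            "id": "i1",
            "title": "Musée d'Orsay",
            "day": 1,
            "start": "2025-06-14T10:00:00+02:00",
            "rating": 4.7,
            "tags": ["art", "museum"],
            # Threaded notes: a reply points at its parent note's id.
            "notes": [
                {"id": "n1", "author": "t1", "parent": None, "text": "Book tickets ahead?"},
                {"id": "n2", "author": "t2", "parent": "n1", "text": "Done, 10am slot."},
            ],
        }
    ],
}

# Serialize for pasting into a prototyping tool.
print(json.dumps(itinerary, ensure_ascii=False, indent=2))
```

Because every entity has explicit ids, timestamps, and relationships, the prototyping tool has far less ambiguity to fill in than with a prose prompt.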

  6. Solving media quality with MCP: pulling real images from Unsplash

    To avoid hallucinated URLs and poor visuals, Ravi connects Claude to an Unsplash MCP server so image links are real and relevant. Claire highlights how this replaces time-consuming manual stock photo searching with programmatic retrieval.
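The episode uses an MCP server for this, whose tool names aren't shown here; the same capability can be sketched against Unsplash's public REST search endpoint. The access key is a placeholder you would get from an Unsplash developer account:

```python
from urllib.parse import urlencode

# Unsplash's public photo-search endpoint (REST, not the MCP wrapper).
UNSPLASH_SEARCH = "https://api.unsplash.com/search/photos"

def build_search_request(query: str, access_key: str, per_page: int = 5) -> str:
    """Build the GET URL for an Unsplash photo search.

    Fetching this URL returns JSON whose results carry real, working
    image URLs — the fix for hallucinated links in prototypes.
    """
    params = {"query": query, "per_page": per_page, "client_id": access_key}
    return f"{UNSPLASH_SEARCH}?{urlencode(params)}"

# "YOUR_ACCESS_KEY" is a placeholder, not a real credential.
url = build_search_request("paris cafe exterior", "YOUR_ACCESS_KEY")
print(url)
```

An agent (or MCP tool) calling this per itinerary item replaces the manual stock-photo hunt Claire describes.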

  7. Building the prototype from JSON: minimal prompt, maximal fidelity

    They paste the generated JSON into Reforge Build with a simple instruction to generate the experience based on the data. The resulting prototype looks richer and more modern because it’s grounded in specific, consistent content and metadata.

  8. Stress-testing UX with real-world data (and production-like datasets)

    Claire connects the method to real product practice: prototypes break in the real world due to messy UGC, long text, odd photo aspect ratios, localization, etc. Data-driven prototyping makes it easy to test those edge cases early by swapping in representative datasets.
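One way to operationalize this: keep a small library of edge-case values and swap them into the dataset before a review. The specific values and field names below are illustrative assumptions:

```python
import copy
import json

# Representative "messy production data" values for stress-testing a UI:
# overlong text, diacritics and emoji in names, right-to-left script.
EDGE_CASES = {
    "long_text": "A" * 300,
    "messy_name": "José 🎉 Müller-O'Brien",
    "rtl_text": "مقهى جميل في باريس",
}

def apply_edge_cases(record: dict) -> dict:
    """Return a copy of a record with edge-case values swapped in."""
    messy = copy.deepcopy(record)
    messy["title"] = EDGE_CASES["long_text"]
    messy["author"] = EDGE_CASES["messy_name"]
    messy["note"] = EDGE_CASES["rtl_text"]
    return messy

base = {"title": "Eiffel Tower", "author": "Maya", "note": "Great view"}
print(json.dumps(apply_edge_cases(base), ensure_ascii=False))
```

Feeding the stressed dataset back into the same prototype shows immediately where layouts truncate, wrap badly, or break.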

  9. Iterating fast: editing the data file and swapping entire scenarios

    Ravi demonstrates how modifications become straightforward: change a name, replace a cover photo, or regenerate a whole new itinerary (Paris → Thailand) while keeping the same UI behaviors. The key is that functionality remains dynamic while data is easily replaced.

  10. Why it works: flexibility, realism, and better stakeholder feedback

    They summarize the core benefits: more realistic content, faster iteration, and prototypes that behave like real products. This improves research quality and helps teams explore different personas, segments, and contexts with minimal rebuild work.

  11. From prototypes to visuals: structured Midjourney prompting for usable images

    The conversation shifts to generating high-quality images directly in Midjourney. Ravi explains that vague prompts (e.g., “office chair”) produce generic results, while structured prompts yield catalog-ready, art-directed outputs.

  12. The Subject–Setting–Style framework (and why lighting is part of “setting”)

    Ravi teaches a prompt structure: define the subject precisely, specify setting (including lighting/mood), and choose a style using photographic language. They show how changing the setting (e.g., morning light → rainy autumn) shifts the image more naturally than describing lighting directly.
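The framework can be sketched as a simple three-part template; the comma-joined format below is an assumption about prompt composition, not official Midjourney syntax:

```python
def build_prompt(subject: str, setting: str, style: str) -> str:
    """Compose an image prompt from the three parts Ravi describes:
    a precise subject, a setting that includes lighting/mood, and a
    style expressed in photographic language."""
    return ", ".join([subject, setting, style])

# Changing only the setting shifts lighting and mood naturally.
print(build_prompt(
    "mid-century office chair with walnut frame",
    "sunlit studio, soft morning light",
    "editorial product photography",
))
print(build_prompt(
    "mid-century office chair with walnut frame",
    "window-side loft on a rainy autumn afternoon",
    "editorial product photography",
))
```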

  13. Cheat codes for style: film stock and camera metadata to steer outputs

Ravi demonstrates how film stocks (e.g., Fujicolor C200, Kodak Tri‑X) and camera details (Leica, 50mm, f/1.2) reliably move Midjourney toward high-end photography distributions. They compare results with and without metadata, noting reduced "uncanny valley" effects for portraits.
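These "cheat codes" can be appended as a suffix to any base prompt. The joining format and the particular combination below are illustrative, not Midjourney-specified syntax:

```python
def with_camera_metadata(prompt: str, film: str, camera: str,
                         lens: str, aperture: str) -> str:
    """Append film-stock and camera-metadata cues that steer the model
    toward high-end photographic output distributions."""
    return f"{prompt}, shot on {film}, {camera}, {lens}, {aperture}"

print(with_camera_metadata(
    "portrait of a chef in a Paris bistro, warm evening light, candid documentary style",
    film="Fujicolor C200",
    camera="Leica M6",
    lens="50mm",
    aperture="f/1.2",
))
```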

  14. Lightning round: consumer AI opportunities, prompting tactics, and closing

Ravi argues consumer AI success depends on real user psychology, not tech-first novelty, and emphasizes delight/personalization as differentiators. He also shares a practical prompting trick: use "elite" role framing to push models into higher-quality output modes, then closes with where to find his work.
