How I AI | The secret to better AI prototypes: Why Tinder's CPO starts with JSON, not design | Ravi Mehta
CHAPTERS
Why most AI prototypes disappoint: shifting from “vibes” to structure
Claire frames the core problem: AI prototyping tools generate impressive demos, but often miss the specific product experience teams need. Ravi introduces “data-driven prototyping” as a way to improve fidelity by giving models better inputs—starting with data, not pixels.
Two common approaches and what’s missing: spec-driven vs design-driven prototyping
Ravi contrasts today’s defaults: long, detailed prompts (spec-driven) or uploading Figma/wireframes (design-driven). He explains that in real product development, engineering quickly anchors ambiguity by defining a data schema—so prototypes should start there too.
Demo setup: a “vibe prototyping” prompt for a shared Paris trip planner
Using Reforge Build, Ravi enters a minimal prompt to create a multiplayer Paris itinerary site with profiles and comments. The tool’s follow-up questions highlight how much ambiguity is packed into a single prompt.
What the spec-driven prototype gets wrong: hallucinations and low-fidelity content
They review the generated prototype: decent structure and components, but broken images, wrong visuals, and only "average" quality overall. The biggest issues stem from weak or fabricated data and media, which undermine trust and realism.
Data-driven prototyping: generate the dataset first (JSON as the spec)
Ravi shows the pivot: prompt an LLM to generate a structured JSON dataset that mirrors a realistic schema—travelers, items, timestamps, ratings, tags, and threaded notes. This acts as an explicit, testable ‘spec’ that the prototyping tool can reliably build around.
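The exact dataset from the episode isn't reproduced here, but a minimal sketch of the idea, with field names and values as illustrative assumptions rather than Ravi's actual output, might look like this:

    import json

    # A hypothetical slice of the itinerary dataset that serves as the "spec".
    # Field names and values are illustrative assumptions, not the episode's actual output.
    trip = {
        "trip": {"title": "Paris with Friends", "start_date": "2025-05-10", "end_date": "2025-05-14"},
        "travelers": [
            {"id": "t1", "name": "Maya", "avatar_url": "https://example.com/maya.jpg"},
            {"id": "t2", "name": "Jonas", "avatar_url": "https://example.com/jonas.jpg"},
        ],
        "items": [
            {
                "id": "i1",
                "title": "Musée d'Orsay",
                "scheduled_at": "2025-05-11T10:00:00+02:00",
                "rating": 4.7,
                "tags": ["museum", "impressionism"],
                "notes": [
                    {
                        "author_id": "t1",
                        "text": "Book tickets in advance?",
                        "replies": [{"author_id": "t2", "text": "Done, 10am slot."}],
                    }
                ],
            }
        ],
    }

    # Write the dataset to disk so it can be pasted into (or referenced by) the prototyping tool.
    with open("paris_trip.json", "w", encoding="utf-8") as f:
        json.dump(trip, f, ensure_ascii=False, indent=2)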
Solving media quality with MCP: pulling real images from Unsplash
To avoid hallucinated URLs and poor visuals, Ravi connects Claude to an Unsplash MCP server so image links are real and relevant. Claire highlights how this replaces time-consuming manual stock photo searching with programmatic retrieval.
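The episode wires this up as an MCP server for Claude. As a rough illustration of the underlying idea, fetching real image URLs programmatically instead of letting the model invent them, here is a sketch against Unsplash's public search API; the access key and query are placeholders:

    import requests

    UNSPLASH_ACCESS_KEY = "YOUR_ACCESS_KEY"  # placeholder; requires a free Unsplash developer key

    def search_unsplash(query: str, per_page: int = 3) -> list[str]:
        """Return real image URLs for a search term instead of hallucinated links."""
        resp = requests.get(
            "https://api.unsplash.com/search/photos",
            params={"query": query, "per_page": per_page},
            headers={"Authorization": f"Client-ID {UNSPLASH_ACCESS_KEY}"},
            timeout=10,
        )
        resp.raise_for_status()
        return [photo["urls"]["regular"] for photo in resp.json()["results"]]

    # Example: fill an itinerary item's cover image with a real photo URL.
    print(search_unsplash("Musee d'Orsay Paris"))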
Building the prototype from JSON: minimal prompt, maximal fidelity
They paste the generated JSON into Reforge Build with a simple instruction to generate the experience based on the data. The resulting prototype looks richer and more modern because it’s grounded in specific, consistent content and metadata.
Stress-testing UX with real-world data (and production-like datasets)
Claire connects the method to real product practice: prototypes break in the real world due to messy UGC, long text, odd photo aspect ratios, localization, etc. Data-driven prototyping makes it easy to test those edge cases early by swapping in representative datasets.
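One way to produce those production-like variants is to mutate the clean dataset programmatically. A sketch, assuming the hypothetical schema above; the specific edge cases are illustrative:

    import copy
    import json

    def roughen(dataset: dict) -> dict:
        """Inject production-like messiness into a clean prototype dataset."""
        messy = copy.deepcopy(dataset)
        items = messy.get("items", [])
        if items:
            # Very long user-generated title to test truncation and wrapping.
            items[0]["title"] = "An extremely long, rambling itinerary entry title " * 5
            # Non-Latin script and emoji to test localization and font fallback.
            items[0]["notes"].append({"author_id": "t2", "text": "すごく楽しみ! 🎉", "replies": []})
            # Drop an optional field to test empty states.
            items[0].pop("rating", None)
        return messy

    with open("paris_trip.json", encoding="utf-8") as f:
        clean = json.load(f)
    with open("paris_trip_messy.json", "w", encoding="utf-8") as f:
        json.dump(roughen(clean), f, ensure_ascii=False, indent=2)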
Iterating fast: editing the data file and swapping entire scenarios
Ravi demonstrates how modifications become straightforward: change a name, replace a cover photo, or regenerate a whole new itinerary (Paris → Thailand) while keeping the same UI behaviors. The key is that functionality remains dynamic while data is easily replaced.
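Because the UI depends only on the shape of the data, a replacement dataset just has to match the expected fields. A small sanity check like this one, with keys taken from the hypothetical schema above, catches drift before swapping scenarios:

    import json

    # Keys from the hypothetical schema above; "rating" is treated as optional.
    REQUIRED_ITEM_KEYS = {"id", "title", "scheduled_at", "tags", "notes"}

    def validate(path: str) -> None:
        """Fail fast if a replacement dataset doesn't match the shape the prototype expects."""
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        for item in data["items"]:
            missing = REQUIRED_ITEM_KEYS - item.keys()
            if missing:
                raise ValueError(f"item {item.get('id', '?')} is missing fields: {missing}")

    validate("thailand_trip.json")  # placeholder filename: same schema, entirely different content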
Why it works: flexibility, realism, and better stakeholder feedback
They summarize the core benefits: more realistic content, faster iteration, and prototypes that behave like real products. This improves research quality and helps teams explore different personas, segments, and contexts with minimal rebuild work.
From prototypes to visuals: structured Midjourney prompting for usable images
The conversation shifts to generating high-quality images directly in Midjourney. Ravi explains that vague prompts (e.g., “office chair”) produce generic results, while structured prompts yield catalog-ready, art-directed outputs.
The Subject–Setting–Style framework (and why lighting is part of “setting”)
Ravi teaches a prompt structure: define the subject precisely, specify setting (including lighting/mood), and choose a style using photographic language. They show how changing the setting (e.g., morning light → rainy autumn) shifts the image more naturally than describing lighting directly.
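A tiny helper makes the structure concrete; the example strings are invented for illustration, not prompts from the episode:

    def midjourney_prompt(subject: str, setting: str, style: str) -> str:
        """Assemble a prompt from the Subject-Setting-Style framework."""
        return f"{subject}, {setting}, {style}"

    # Changing only the setting shifts light and mood without describing lighting directly.
    print(midjourney_prompt(
        subject="ergonomic mesh office chair",
        setting="in a sunlit loft studio on a quiet autumn morning",
        style="editorial product photography, shallow depth of field",
    ))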
Cheat codes for style: film stock and camera metadata to steer outputs
Ravi demonstrates how film stocks (e.g., Fujicolor C200, Kodak Tri‑X) and camera details (Leica, 50mm, f/1.2) reliably move Midjourney toward high-end photography distributions. They compare results with and without metadata, noting reduced ‘uncanny valley’ effects for portraits.
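The cheat codes slot into the style component. A sketch of presets built from the film stocks and camera details mentioned; the pairings are illustrative:

    # Style strings built from the film stocks and camera metadata discussed in the episode.
    STYLE_PRESETS = {
        "warm_everyday": "shot on Fujicolor C200, Leica 50mm lens at f/1.2, natural light",
        "gritty_mono": "shot on Kodak Tri-X, high-contrast black and white, 35mm film grain",
    }

    subject = "portrait of a ceramicist at her wheel"
    setting = "in a cluttered workshop at dusk"
    print(f"{subject}, {setting}, {STYLE_PRESETS['warm_everyday']}")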
Lightning round: consumer AI opportunities, prompting tactics, and closing
Ravi argues consumer AI success depends on real user psychology, not tech-first novelty, and emphasizes delight/personalization as differentiators. He also shares a practical prompting trick: use ‘elite’ role framing to push models into higher-quality output modes, then closes with where to find his work.