Aakash Gupta

If you can’t AI prototype after this, nothing will help you

Aakash Gupta and Sachin Rekhi on going from AI slop to high-craft prototypes with validated solutions, fast.

Host: Aakash Gupta · Guest: Sachin Rekhi
Jan 26, 2026 · 1h 12m · Watch on YouTube ↗
Product shaping vs traditional roadmapping
AI slop: causes and avoidance
AI Prototyping Mastery Ladder (15 skills)
Baselining via screenshot recreation
Batching edits and versioning tradeoffs
Diverging across tools for more variants
Functional prototyping with real APIs (OpenAI)
Secure handling of API keys/secrets
Customer validation at scale (surveys, analytics, session replay)
Workflows vs agents (conceptual decisioning)
Prototypes vs PRDs (discovery vs strategy)
AI prototyping tools market map and picks

In this episode, Aakash Gupta and Sachin Rekhi explore how to go from AI slop to high-craft prototypes with validated solutions, fast. Anthropic's approach flips traditional roadmapping by prototyping many problem-solution pairs first, dogfooding them, and only then productionizing the best-performing prototypes.

At a glance

WHAT IT’S REALLY ABOUT

From AI slop to high-craft prototypes with validated solutions fast

  1. Anthropic’s approach flips traditional roadmapping by prototyping many problem-solution pairs first, dogfooding them, and only then productionizing the best-performing prototypes.
  2. “AI slop” happens when prototypes are generic, undifferentiated, and shallow in real workflows, but high-craft outcomes are achievable with the right techniques.
  3. Rekhi’s AI Prototyping Mastery Ladder outlines 15 skills from apprentice (prompting/editing/design consistency) to journeyman (debugging/versioning/diverging/validation) to master (functional prototypes and product shaping).
  4. Core methods include baselining your existing product via screenshot recreation, iterating with targeted edits (including batching), and forking templates to ensure consistent design across prototypes.
  5. Master-level validation uses deployed prototypes with embedded surveys plus analytics (e.g., PostHog events, heatmaps, session replays) to scale learning and simplify interfaces based on real behavior.

IDEAS WORTH REMEMBERING

7 ideas

Prototype many problem-solution pairs before committing roadmap capacity.

Anthropic-style “product shaping” prioritizes solutions that are already internally or customer-vetted, reducing the risk of building the wrong thing even when the problem is real.

Treat “one-shot apps” as a starting point, not a shippable output.

The first AI-generated UI is often generic in styling, undifferentiated in concept, and weak on real user workflows—use it to accelerate iteration, not to declare victory.

Baseline your product’s look-and-feel, then build everything on top of it.

Recreating a screenshot, refining it through edits, and duplicating/forking that baseline lets future prototypes inherit components and styling automatically, eliminating “wireframe AI” aesthetics.

Batch related edits to reduce round-trip time—without losing control.

Grouping similar changes (e.g., multiple color tweaks) speeds iteration, but batching unrelated changes makes it harder to isolate failures and manage versions when something breaks.
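One way to keep batched edits recoverable is to snapshot the prototype's state before each batch, so a partial failure rolls back to the last good version instead of requiring manual untangling. A minimal sketch of that idea (class and function names are illustrative, not any specific tool's API):

```python
import copy

class PrototypeVersions:
    """Keeps a linear history of prototype states so a failed
    batch of edits can be rolled back to the last good version."""

    def __init__(self, initial_state):
        self.history = [copy.deepcopy(initial_state)]

    def apply_batch(self, edits, validate):
        """Apply a batch of edit functions to a copy of the current
        state; keep the result only if validation passes."""
        candidate = copy.deepcopy(self.history[-1])
        for edit in edits:
            edit(candidate)
        if validate(candidate):
            self.history.append(candidate)
            return True
        return False  # history untouched: rollback is implicit

    @property
    def current(self):
        return self.history[-1]
```

Batching similar edits (say, three color tweaks) into one `apply_batch` call cuts round trips, while the per-batch snapshot keeps failures isolated: if validation rejects a batch, the previous version is still the current one.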

Use diverging intentionally—and use multiple tools to expand the idea space.

Ask for multiple variants (and even run the same explore prompt in different tools) because differing system prompts yield meaningfully different designs, producing more inspiration than a single tool run.

Aim for functional prototypes when interaction quality affects decisions.

Integrating real APIs (e.g., OpenAI) and even model selectors enables PMs to evaluate UX plus model-output differences before engineering investment, but still treat the code as disposable discovery code.
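As a rough sketch of what "functional prototype with a model selector" can mean in practice: a thin wrapper that builds a chat request for whichever model the prototype's dropdown selects, with the API key read from the environment rather than hard-coded (the secure-secrets point from the episode outline). The endpoint and payload shape follow OpenAI's chat completions API; the helper names and model list are illustrative assumptions:

```python
import json
import os
import urllib.request

# Models a prototype's dropdown might offer (illustrative list).
SUPPORTED_MODELS = ["gpt-4o", "gpt-4o-mini"]

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI chat-completions payload for the selected model."""
    if model not in SUPPORTED_MODELS:
        raise ValueError(f"unsupported model: {model}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def send_chat_request(payload: dict) -> dict:
    """POST the payload to the OpenAI API. The key comes from the
    environment, never from code shared with the prototype."""
    key = os.environ["OPENAI_API_KEY"]  # fails loudly if unset
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Swapping the `model` field per request is all a model selector needs, which is what makes it cheap to compare model-output quality inside the same prototype UX. And since this is discovery code, it stays disposable: no retries, no streaming, no production hardening.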

Scale validation by instrumenting prototypes like real products.

Embedded surveys, PostHog/Mixpanel-style event tracking, heatmaps, and session replay turn prototypes into learning engines—revealing feature discoverability, entry-point performance, and friction without relying solely on one-on-one interviews.
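To make "instrument prototypes like real products" concrete, here is a toy in-memory version of what an analytics tool such as PostHog captures: events per session, from which a discoverability metric (the share of sessions that ever touched a feature) falls out directly. The class and event names are illustrative, not PostHog's actual API:

```python
from collections import defaultdict

class EventLog:
    """Toy stand-in for product-analytics event capture."""

    def __init__(self):
        # session_id -> list of event names fired in that session
        self.events = defaultdict(list)

    def capture(self, session_id: str, event: str) -> None:
        """Record one event for one session."""
        self.events[session_id].append(event)

    def discoverability(self, event: str) -> float:
        """Share of sessions in which the event fired at least once."""
        if not self.events:
            return 0.0
        hits = sum(1 for evs in self.events.values() if event in evs)
        return hits / len(self.events)
```

A low discoverability number for a feature's first-use event is exactly the kind of signal that justifies simplifying an interface or moving an entry point—without needing to watch every session replay by hand.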

WORDS WORTH SAVING

5 quotes

They’re prioritizing not only what is actually… a problem worth solving, but a problem-solution pair that’s already vetted.

Sachin Rekhi

It still is AI slop because we could never ship this. This would never be considered high-craft work.

Sachin Rekhi

There’s actually 15 unique skills you kind of have to master to be able to do AI prototyping well.

Sachin Rekhi

We should be using [AI] to create multiple outputs… a designer would come up with three variants.

Sachin Rekhi

If a PM is trying to get a full version of their product out through these vibe coding tools, they’re doing it wrong.

Sachin Rekhi

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

In Anthropic’s “prototype-first” approach, what criteria determine which prototypes get productionized—usage, qualitative feedback, retention, or something else?

Anthropic’s approach flips traditional roadmapping by prototyping many problem-solution pairs first, dogfooding them, and only then productionizing the best-performing prototypes.

What are the 15 skills on the AI Prototyping Mastery Ladder, and which 3 typically unblock teams the fastest?

Rekhi’s AI Prototyping Mastery Ladder outlines 15 skills from apprentice (prompting/editing/design consistency) to journeyman (debugging/versioning/diverging/validation) to master (functional prototypes and product shaping).

When baselining from a screenshot, what are the most common failure modes (fonts, spacing, component structure), and how do you correct them efficiently?

Recreating a screenshot, refining it through targeted edits, and duplicating/forking that baseline lets future prototypes inherit components and styling automatically, eliminating “wireframe AI” aesthetics.

How do you decide the “right” amount of batching in a prompt, and what’s your rollback/versioning workflow when a batch partially fails?

Grouping similar changes (e.g., multiple color tweaks) speeds iteration, but batching unrelated changes makes it harder to isolate failures and manage versions when something breaks.

For diverging, how do you avoid getting overwhelmed by options—do you use a rubric, quick customer tests, or designer critique first?

Ask for multiple variants—and even run the same explore prompt in different tools—because differing system prompts yield meaningfully different designs, producing more inspiration than a single tool run.
