Aakash Gupta

How AI PMs Ship Features Users Love (Descript CEO Explains)

Aakash Gupta talks with Laura Burkhauser about building AI editing tools and PM leadership at Descript.

Laura Burkhauser (guest) · Aakash Gupta (host)
Dec 15, 2025 · 54m · Watch on YouTube ↗
Topics covered:

- Product transformation and identity (“how it makes you feel”)
- LLM-enabled editing actions (retakes, filler words, clarity, chapters)
- Context window constraints and chunking strategies
- AI rollout strategy: public beta, A/B tweaks, production data
- Measuring AI success: adoption, retention, export behavior
- PM ownership in AI: eval criteria and pass/fail definitions
- Underlord agent: objectives, emergence, tool coverage, activation

In this episode, Laura Burkhauser joins host Aakash Gupta to discuss how Descript builds AI editing tools and what PM leadership looks like in an AI-first product. Descript’s early AI feature strategy focused on packaging reliable, job-based “buttons” (e.g., remove retakes, edit for clarity) rooted in well-understood user workflows rather than novelty prompts.

At a glance

WHAT IT’S REALLY ABOUT

Descript CEO on building AI editing tools and PM leadership

  1. Descript’s early AI feature strategy focused on packaging reliable, job-based “buttons” (e.g., remove retakes, edit for clarity) rooted in well-understood user workflows rather than novelty prompts.
  2. The team shipped AI tools using pragmatic, human-driven evaluation against real production data, iterating via public beta, adoption/retention metrics, and whether users exported the AI-modified output.
  3. As user needs became too parameter-heavy for fixed tools (e.g., Create Clips requesting endless knobs), Descript shifted toward Underlord, an objective-driven, open-ended co-editor agent.
  4. Underlord’s rollout emphasized tool coverage, representative regression tests, real-customer private alpha feedback, and improved activation—especially helping novices “get over the hump” of video editing.
  5. Burkhauser frames the PM’s unique value in AI as defining success/failure criteria for evals, while her career advice stresses deep product/customer command, shipping excellence, and humility in founder-led environments.

IDEAS WORTH REMEMBERING

7 ideas

Great AI features start with a concrete workflow pain, not the model.

Descript mapped creator workflows (scripted vs. improvised) and attached AI to specific pains like retakes, eye contact, and clarity—then hid prompts behind dependable, job-based buttons.

Ship “reliable buttons” first; use agents when customization explodes.

Fixed tools work well when inputs are bounded, but Create Clips requests kept adding parameters; Underlord emerged as the right abstraction once users needed highly customized, conversational control.

Human evals against real data are a valid starting point—if you’re disciplined.

Before formal eval stacks were common, the team tested on production-like content and shipped when results were genuinely usable; later they layered regression tests, A/B tweaks, and more automation.
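The human-driven eval loop described above can be sketched as a simple pass-rate gate. Everything below (the sample texts, the stand-in tool and judge, the 90% ship threshold) is a hypothetical illustration of the pattern, not Descript's actual tooling — in practice the judge starts as a human reviewer's verdict and is only later automated.

```python
# Hypothetical sketch: gate shipping an AI editing tool on its pass
# rate over production-like samples, judged case by case.

def pass_rate(samples, tool, judge):
    """Fraction of samples where the judge accepts the tool's output."""
    passed = sum(1 for s in samples if judge(s, tool(s)))
    return passed / len(samples)

# Stand-in "filler word" tool and a crude judge (a placeholder for a
# human reviewer deciding whether the result is genuinely usable).
def remove_fillers(text):
    return text.replace("um ", "").replace("like ", "")

def judge(original, edited):
    return len(edited) < len(original)  # did the edit actually tighten it?

SAMPLES = [
    "um so like today we talk about editing",
    "let's start over, let's start over",
]

SHIP_THRESHOLD = 0.9  # illustrative bar, not a real Descript number
ship = pass_rate(SAMPLES, remove_fillers, judge) >= SHIP_THRESHOLD
```

The same harness later absorbs regression tests: each shipped fix adds its triggering sample to `SAMPLES`, so the pass rate is recomputed against an ever more representative set.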

PMs uniquely own the definition of “quality” for AI outputs.

Burkhauser argues only the PM can codify what “good,” “acceptable,” and “harmful” look like because it requires deep job/context understanding (e.g., judging jump-cut density, not just grammar).
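A PM-defined quality bar like jump-cut density can be made executable once it is codified. The sketch below (the function names and the 4-cuts-per-10-seconds limit) is a hypothetical illustration of turning such a judgment into a check, not Descript's actual implementation.

```python
# Hypothetical check: fail an AI edit if any 10-second window of the
# output contains too many jump cuts.

def max_cuts_in_window(cut_times, window=10.0):
    """Maximum number of cuts inside any sliding window of `window` seconds."""
    cuts = sorted(cut_times)
    best, lo = 0, 0
    for hi in range(len(cuts)):
        while cuts[hi] - cuts[lo] > window:
            lo += 1  # slide the window's left edge forward
        best = max(best, hi - lo + 1)
    return best

def passes_jump_cut_bar(cut_times, max_per_10s=4):
    """PM-codified criterion: at most `max_per_10s` cuts per 10 seconds."""
    return max_cuts_in_window(cut_times, 10.0) <= max_per_10s
```

The point is the ownership, not the arithmetic: only someone who understands the editing job knows that cut density, not grammar, is what makes an output feel wrong.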

Representative eval data matters more than sophisticated scoring.

Studio Sound quality regressed when evaluators used unrealistically terrible audio; the best model for “disaster audio” differed from the best for the common “laptop mic” use case, so the dataset must match the target workflow.

For agents, optimize rollout around emergence risk and activation gains.

Underlord’s open-ended input increases weird edge cases, so Descript used private alpha with diverse users (editing/AI novices and experts), built regression suites from real prompts, then expanded to new-user activation and opt-in broader use.

Founder-led trust is earned through command and shipping, not theatrics.

To avoid founder micromanagement spirals, Burkhauser recommends humility, intense product usage, customer closeness, business fluency, and continued excellence in core IC execution while gradually taking on strategy.

WORDS WORTH SAVING

5 quotes

The best products out there, they don't just do a job for you. They transform how you feel about yourself.

Laura Burkhauser

Build them in these prepackaged, parameterized, job-based buttons that can give you a reliable result over and over again.

Laura Burkhauser

You and only you are qualified to write the eval criteria for what… a good job looks like.

Laura Burkhauser

What it didn't take into account is… how many jump cuts per 10 seconds are you putting into my video?

Laura Burkhauser

If you're allowing for emergence, you're also allowing for a lot of, like, whack stuff to happen in your product.

Laura Burkhauser

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

For each early AI “button” (retakes, filler words, clarity), what were the exact launch criteria that flipped it from ‘don’t ship’ to ‘ship’?

How did you decide which editing tasks should be chunked for context-window limits, and which tasks were blocked until models improved?

When you say you measured retention for an AI tool, what was the retention definition (e.g., weekly reuse, per-project reuse, cohort reuse)?

What were the most common reasons users *didn’t* export after applying an AI edit, and how did that feed back into quality improvements?

What did the ‘jump cuts per 10 seconds’ quality bar translate to operationally—did you add constraints, post-processing, or different prompting?
