Skip to content
Aakash GuptaAakash Gupta

How OpenAI Builds Products: The Framework You Need

Jake Brill leads Integrity Product at OpenAI. And in today's episode he gives a masterclass in product management at the world's hottest AI company. ⏰ Timestamps: Intro - 0:00 GPT-5 Launch Behind the Scenes - 2:28 How OpenAI Runs on Slack - 3:34 What is Integrity Product? - 4:47 Red Teaming & Safety Evals Explained - 7:22 Why Everyone's Obsessed with AI Agents - 13:11 Jira Product Discovery & AI PM Certification Ads - 15:57 Building Agent-First Products - 17:40 Real Agent Use Cases at OpenAI - 18:45 Future of PRDs vs AI Prototypes - 21:57 When Agents Learn to Cheat - 28:33 AI Evals Course & Maven Ads - 31:01 Inside OpenAI's Product Culture - 33:03 Working with Research Teams - 40:59 Platform PM vs Regular PM - 47:23 Breaking Into OpenAI (Career Advice) - 50:48 Jake's Facebook Origin Story - 56:57 Brain Injury & Instacart Journey - 1:08:13 Building Your Reputation at Work - 1:14:56 PM Role in 5 Years - 1:16:28 Managing Humans vs Agents - 1:19:07 Transcript: https://www.news.aakashg.com/p/jake-brill-podcast 🏆 Thanks to our sponsors: 1. Jira Product Discovery: Build the right thing, reliably - https://www.atlassian.com/software/jira/product-discovery 2. AI PM Certification: $550 with code AAKASH550C7 - https://maven.com/product-faculty/ai-product-management-certification?promoCode=AAKASH550C7 3. The AI Evals Course for PMs & Engineers: Get $1050 off with code ag-product-growth - https://maven.com/parlance-labs/evals?promoCode=ag-product-growth 4. Maven: Get $100 off my curation of their top courses - http://maven.com/x/aakash 👀 Where to find Jake: LinkedIn: https://www.linkedin.com/in/jake-brill/ 👨‍💻 Where to find Aakash: Twitter: https://www.twitter.com/aakashg0 LinkedIn: https://www.linkedin.com/in/aagupta/ Instagram: https://www.instagram.com/aakashg0/ 🔔 Subscribe and like the video to support our content! 🔑 Key Takeaways 1. The PM Role Has Changed. PMs are shifting from specification writers to evaluation architects. You now define what AI products should achieve and measure capabilities, not just write feature requirements. 2. Prototypes Beat Documentation. Stop writing lengthy PRDs. Build functional AI demonstrations that show exactly how products work, accelerating feedback cycles and eliminating interpretation gaps between teams. 3. Every PM Needs Agents. If you're not building agent-first products, you're building something obsolete. Design for task completion and asynchronous workflows, not just question-answer interactions. 4. Managing Agents Is Coming. PMs will soon manage both humans and AI agents. Learn prompt engineering, set delegation boundaries, and decide when humans need to stay in the loop. 5. Empathy Becomes Your Advantage. As technical tasks automate, human skills matter more. Understanding diverse users, team dynamics, and communication styles becomes your competitive edge over AI. 6. Research Drives Product Strategy. OpenAI starts with AI breakthroughs, then finds applications. This research-first approach requires unprecedented collaboration between product and research teams from day one. 7. Planning Embraces Failure. Assume only 60-70% plan completion. Lightweight documents and async reviews work better than rigid roadmaps when building in uncertain AI environments. 8. Infrastructure Enables Everything. Identity systems, payment rails, and safety measures determine whether AI products reach users or crash. Build platform reliability before you need it. 9. AI Fluency Is Baseline. Technical familiarity with AI products and APIs is now required for all PMs. Combine this baseline with domain expertise to tackle unprecedented problems. 10. Breaking Into OpenAI. Jake's path: 2 months of intense interview prep, leveraging Facebook connections, and starting informal trust & safety work before formal recognition. Referrals still matter most. #ProductManagement #AIAgents #OpenAI 🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 182K listeners. Hosted by Aakash Gupta, who spent 16 years in PM, rising to VP of product, this 2x/ week show covers product and growth topics in depth.

Aakash GuptahostJake Brillguest
Aug 23, 20251h 21mWatch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

OpenAI’s product framework: integrity, evals, agents, and PM evolution

  1. OpenAI’s Integrity Product function spans safety enforcement plus critical platform systems like identity, payments, and fraud prevention that must stay reliable during major launches like GPT-5.
  2. OpenAI treats safety as a core product philosophy, using iterative deployment to learn from real-world misuse while still defining non-negotiable risks that must be mitigated before launch.
  3. Evals and red teaming (manual and automated, pre- and post-launch) are positioned as the objective backbone for deciding whether models/products are “safe enough,” including measuring deception and refusal behavior.
  4. “Assistance” is evolving into “agents,” shifting product design toward delegating complex synchronous or asynchronous tasks, and increasing the need for standards (e.g., MCP-like protocols) for tool and agent interoperability.
  5. PM work is moving toward AI prototyping and eval creation, while the most enduring PM advantage remains empathy—paired with a new skill: collaborating with and managing agents.

IDEAS WORTH REMEMBERING

5 ideas

Integrity is more than safety—it’s launch-critical reliability infrastructure.

Brill frames Integrity as both harm prevention and the platform behind identity/login, payments, and fraud controls; if these fail at launch, even a great model delivers a broken first experience.

Red teaming must be continuous, not a pre-launch checkbox.

OpenAI red-teams during training, at checkpoints, right before release, and after launch to catch new jailbreaks and real-world attack patterns that only emerge in the wild.

Automated enforcement lives and dies by precision and recall.

Blocking generations, warning, or banning accounts are “serious interventions,” so OpenAI emphasizes high precision (avoid false positives) while maintaining high recall to avoid being blind to harms.

Evals are the closest thing to a safety ‘source of truth.’

For decisions like delaying a release, Brill argues against “vibes-based” calls and favors evals that quantify risks like deception and compliance with refusals for high-risk prompts.

Most companies should not reinvent evals or moderation layers.

Start with industry-standard/public evals and existing safety layers (e.g., moderation APIs or open safety models), then extend them for domain-specific risks rather than building from scratch.

WORDS WORTH SAVING

5 quotes

It's like you're- if you're not building a product that has AI fundamentally in its DNA, you're not really keeping up with the future of digital technology.

Jake Brill

You can make vibes-based decisions on this sort of stuff, but ultimately, evals are, are really what's gonna guide the day in helping you objectively and with data determine if your, if your model is safe enough to, to release.

Jake Brill

Plans are useless, but planning is everything.

Jake Brill

It's like if you're doing a really great job, people don't notice, and it's only if like things go sideways, uh, that, that the, the light shines on you.

Jake Brill

I think the most important skill that a PM can have, and that, like, is going to be the case in five years, is empathy.

Jake Brill

Integrity Product scope (safety, identity, payments, fraud)Red teaming across the full lifecyclePrecision/recall tradeoffs in automated enforcementEvals as objective safety gates (deception, bio refusal, etc.)Iterative deployment and “non-negotiable” risksAgent-first product design (async workflows, delegation)PM evolution: prototyping, eval ownership, managing agents

High quality AI-generated summary created from speaker-labeled transcript.

Get more out of YouTube videos.

High quality summaries for YouTube videos. Accurate transcripts to search & find moments. Powered by ChatGPT & Claude AI.

Add to Chrome