Aakash Gupta

How this PM Used Claude Code to Support 20 People

Hannah Stulberg is a PM at DoorDash and a former Google APM with 1,500+ hours in Claude Code. She walks through her Team OS live: a shared repo where every function self-serves context without asking the PM.

Full Writeup: https://www.news.aakashg.com/p/hannah-stulberg-podcast
Transcript: https://www.aakashg.com/hannah-stulberg-podcast/
Team OS Repo: https://github.com/in-the-weeds-hannah-stulberg/team-os-example-repo

---

Timestamps:
0:00 - Intro
1:45 - What is a Team OS
3:50 - Live folder walkthrough
6:34 - Context management theory
8:27 - Nested CLAUDE.md files
11:36 - Ads
13:37 - Shared skills and commands
17:24 - Scaling analytics
25:24 - Shared automations
31:10 - Ads
33:32 - Plan mode for docs
49:47 - Parallel agents
59:50 - The learning flywheel
1:04:22 - Which AI tool when
1:09:11 - Outro

---

🏆 Thanks to our sponsors:
1. Bolt - Ship AI products 10x faster - https://bolt.new/solutions/product-manager?utm_source=Promoted&utm_medium=email&utm_campaign=aakash-product-growth
2. Jira Product Discovery - https://www.atlassian.com/software/jira/product-discovery
3. Kameleoon - Prompt-based experimentation - http://www.kameleoon.com/
4. Amplitude - Product analytics leader - https://amplitude.com/session-replay?utm_campaign=session-replay-launch-2025&utm_source=linkedin&utm_medium=organic-social&utm_content=productgrowthpodcast
5. Product Faculty - $550 off the AI PM Cert with code AAKASH550C7 - https://maven.com/product-faculty/ai-product-management-certification?promoCode=AAKASH550C7

---

Key Takeaways:
1. Build a Team OS, not a personal OS - A shared repo where every function checks in work. Engineers, designers, and analysts self-serve without asking the PM.
2. The root CLAUDE.md is everything - Doc index, team roster with Slack IDs, channel map. Keep it under one page or you burn context every session.
3. Nested indexes save 97% of context - Every folder gets a navigation CLAUDE.md. A customer query used only 3% of the context window.
4. Three token tiers - Always-loaded root (~500 tokens), folder indexes on navigation (200-500), content files on demand (1,000-10,000+).
5. Split analytics by product area - Metrics, queries, and schemas kept separate. Progressive loading prevents waste.
6. Gate launches on repo updates - A feature isn't shipped until metrics, queries, schemas, and playbooks are checked in.
7. Verified playbooks kill hallucinations - Analyst-audited methodology. Claude follows verified steps instead of inventing its own.
8. Plan mode makes 10x docs - Shift+Tab twice. Five phases: load context, ask questions, build plan, push thinking, review agents.
9. Split long docs across parallel agents - Each writes to a temp file; an orchestrating agent compiles them. Prevents context overflow.
10. The flywheel compounds daily - Automate one task, free up time, improve the repo. After 1,500 hours she is still iterating every day.

---

👨‍💻 Where to find Hannah Stulberg:
LinkedIn: https://www.linkedin.com/in/hannah-stulberg/
Substack: https://hannahstulberg.substack.com

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aakashgupta/
PM Newsletter: https://www.news.aakashg.com
AI Newsletter: https://www.aibyaakash.com/

#claudecode #teamos

---

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with 200K+ listeners.

🔔 Subscribe and turn on notifications to get more videos like this.
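The three-tier loading model in the takeaways can be sanity-checked with quick arithmetic. A minimal sketch, assuming a 200K-token context window (the tier sizes come from the episode; the window size and the specific per-file numbers are illustrative assumptions):

```python
# Back-of-the-envelope check of the three-tier loading model.
# Tier sizes are from the takeaways; the 200K window is an assumption.
WINDOW = 200_000        # assumed context window size, in tokens

root_index = 500        # tier 1: always-loaded root CLAUDE.md
folder_index = 400      # tier 2: one folder index, loaded on navigation
content_file = 5_100    # tier 3: one content file, loaded on demand

used = root_index + folder_index + content_file
print(f"{used} tokens used ({used / WINDOW:.1%} of the window)")
# → 6000 tokens used (3.0% of the window)
```

With illustrative numbers in these ranges, a targeted lookup lands near the "only 3% of the context window" figure quoted in takeaway 3.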

Hannah Stulberg (guest) · Aakash Gupta (host)
Apr 7, 2026 · 1h 10m · Watch on YouTube ↗

CHAPTERS

  1. Why PMs must scale: supporting bigger, cross-functional teams with AI

    Aakash frames the trend: PMs increasingly support 10–20+ partners across engineering, design, data, and go-to-market. Hannah explains how roles are blending (PMs doing analysis/prototyping; engineers/designers making product decisions), making shared context the bottleneck—and the leverage point.

  2. Team OS: the shared knowledge repo that turns context into leverage

    Hannah introduces “Team OS” (Team Operating System): a single repository that holds a team’s shared context and workflows. She outlines the core structure: a root CLAUDE.md, a .claude folder for shared AI assets, and function-oriented folders for product and team artifacts.
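The structure described above can be sketched as a directory tree. This is an illustrative layout only: the root CLAUDE.md and the .claude folder are from the episode, while the function-folder names (customers, analytics, docs) are assumptions, not the actual repo contents:

```python
# Illustrative Team OS layout (folder names beyond CLAUDE.md and .claude/
# are assumptions, not the real repo's contents).
layout = {
    "CLAUDE.md": None,                           # root doc index, under one page
    ".claude": {"skills": {}, "commands": {}},   # shared AI assets
    "customers": {"CLAUDE.md": None},            # per-folder navigation index
    "analytics": {"CLAUDE.md": None},
    "docs": {"CLAUDE.md": None},
}

def render(tree, prefix=""):
    """Yield one line per entry, indenting children two spaces per level."""
    for name, children in tree.items():
        yield prefix + name
        if isinstance(children, dict):
            yield from render(children, prefix + "  ")

print("\n".join(render(layout)))
```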

  3. Root CLAUDE.md best practices: keep it lean and action-ready

    They unpack what belongs in the root CLAUDE.md and what doesn’t. Hannah emphasizes that root instructions should be minimal, high-frequency context, enabling natural-language actions like messaging the right people via Slack integrations.

  4. Context management 101: windows, compaction, and ‘thinking room’

    Hannah explains the theory behind the repo design: managing what the model sees, when. She defines context, context window, compaction (lossy compression when full), and thinking room (space left to reason), tying these concepts to performance and reliability.
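The vocabulary above can be made concrete with a toy calculation. A minimal sketch, assuming a 200K-token window and compaction triggering at 90% full (both numbers are assumptions for illustration, not figures from the episode):

```python
# Toy model of "thinking room": space left to reason before lossy compaction.
# The window size and compaction threshold are illustrative assumptions.
WINDOW = 200_000     # assumed total context window, in tokens
COMPACT_AT = 0.9     # assumed fill ratio that triggers compaction

def thinking_room(tokens_in_context: int) -> int:
    """Tokens left for reasoning before compaction kicks in (never negative)."""
    return max(0, int(WINDOW * COMPACT_AT) - tokens_in_context)

print(thinking_room(6_000))    # ample room after a targeted, indexed lookup
print(thinking_room(170_000))  # little room after dumping the whole repo
```

The design goal the repo structure serves: keep the first number large by loading only what the current task needs.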

  5. Nested CLAUDE.md doc indexes: how Claude navigates without wasting tokens

    They show how CLAUDE.md files exist at multiple folder levels as “doc indexes,” letting Claude traverse directly to relevant files instead of exploring the whole repository. Live examples demonstrate low context usage while answering customer questions accurately.
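The traversal pattern can be sketched as a three-hop chain: root index, folder index, then one content file. The file names and contents below are stand-ins, not the real Team OS repo:

```python
# Index-guided navigation: the agent reads the root index, one folder index,
# then one content file — never the whole repo. Contents are stand-ins.
repo = {
    "CLAUDE.md": "customers/ -> customer notes; analytics/ -> metrics & queries",
    "customers/CLAUDE.md": "acme.md -> Acme Corp meeting notes",
    "customers/acme.md": "Acme wants bulk ordering by Q3.",
    "analytics/CLAUDE.md": "orders.sql -> order-rate query",
    "analytics/orders.sql": "SELECT ...",
}

def lookup(folder: str, file: str) -> tuple[str, list[str]]:
    """Follow the index chain; report which files actually entered context."""
    loaded = ["CLAUDE.md", f"{folder}/CLAUDE.md", f"{folder}/{file}"]
    return repo[loaded[-1]], loaded

content, loaded = lookup("customers", "acme.md")
print(content)
print(f"loaded {len(loaded)} of {len(repo)} files")
```

Only the three files on the path enter context; everything else in the repo stays unloaded, which is what keeps token usage low regardless of repo size.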

  6. Customer intelligence at scale: summaries, per-customer context, and consistent formats

    Hannah demonstrates how storing customer data in structured files enables fast synthesis like “Who did I meet in the last two weeks and what did I learn?” The system prioritizes summaries over raw transcripts and uses shared ‘skills’ to standardize outputs across many contributors.

  7. Shared skills, commands, and workflows: turning team habits into reusable AI primitives

    Hannah explains how teams codify repeatable work into shared skills/commands and multi-step workflows stored alongside docs. This creates compounding leverage: everyone captures information the same way, and Claude can execute recurring processes quickly and predictably.

  8. Scaling analytics with a repo: metrics, queries, schemas, and verified playbooks

    They walk through an analytics folder pattern: dashboards, experiment results, investigations, plus the critical triad—metrics definitions, SQL queries, and table schemas. This empowers PMs and engineers to do correct analysis without guessing joins or definitions, reducing hallucinations and errors.
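The verified-playbook idea can be sketched as a lookup table that fails loudly rather than letting the model improvise a join. All metric names, tables, and SQL below are illustrative assumptions, not the actual DoorDash definitions:

```python
# Sketch of the "critical triad" checked into the repo: metric definitions,
# the tables involved, and an analyst-audited query. Names are assumptions.
METRICS = {
    "order_rate": {
        "definition": "completed orders / sessions",
        "tables": ["orders", "sessions"],
        "query": (
            "SELECT COUNT(o.id) * 1.0 / COUNT(DISTINCT s.id) AS order_rate "
            "FROM sessions s LEFT JOIN orders o ON o.session_id = s.id"
        ),
    },
}

def verified_query(metric: str) -> str:
    """Return the audited SQL for a metric, or fail rather than invent one."""
    if metric not in METRICS:
        raise KeyError(f"No verified playbook for {metric!r}; ask an analyst.")
    return METRICS[metric]["query"]

print(verified_query("order_rate"))
```

The point of the pattern is the failure mode: an unknown metric raises an error instead of producing a plausible-but-wrong query.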

  9. Engineering + operations in the same system: bugs, RFCs, and shared ownership

    Hannah argues Team OS is not just for PMs or engineers—everyone contributes, including ops and strategy partners. Bug investigations and RFCs become searchable institutional memory, and functional leads guide structure while the whole team maintains quality.

  10. Automations on top of the repo: weekly synthesis, Slack updates, and process glue

    They describe “shared automations” as the third pillar: using the repository as the source of truth to produce recurring outputs automatically. Example: a weekly customer research synthesis posted to Slack so the team stays continuously aligned.
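The weekly-synthesis example can be sketched as: filter repo files touched in the last seven days, format a digest, post it. The paths and dates below are stand-ins, and a real version would read the actual repo and call the Slack API rather than print:

```python
# Sketch of a weekly synthesis automation over the repo. Paths, dates, and
# the print-instead-of-Slack output are illustrative stand-ins.
from datetime import date, timedelta

notes = [  # (path, last_modified) pairs standing in for repo files
    ("customers/acme.md", date(2026, 4, 6)),
    ("customers/globex.md", date(2026, 3, 1)),
]

def weekly_digest(files, today):
    """List files modified in the last 7 days as a Slack-ready digest."""
    cutoff = today - timedelta(days=7)
    recent = [path for path, modified in files if modified >= cutoff]
    return "This week's customer research:\n" + "\n".join(f"- {p}" for p in recent)

print(weekly_digest(notes, date(2026, 4, 7)))
```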

  11. How work gets done day-to-day: docs in Claude, PRs in GitHub, Slack notifications

    Hannah describes the tactical operating model: write docs in Claude Code, check them into the repo, and review via pull requests—across PM, design, engineering, data, and ops. Claude can even open PRs and message reviewers via integrated tools and shared commands.

  12. Plan mode for better docs: from basic prompts to robust, verifiable plans

    Hannah demonstrates the difference between prompting and planning, using Plan mode to remove the model’s bias-for-action and produce a structured approach. She covers clearing context between tasks, adding verification criteria, checkpointing phases, and explicitly invoking writing guides/skills for consistency.

  13. Parallel agents + long-doc execution: prompts, temp files, and orchestration

    They discuss using parallel sub-agents to research and draft different sections, then orchestrating synthesis—critical for long documents given context constraints. Hannah highlights two advanced tactics: inspecting agent prompts and forcing outputs into temporary files to prevent context overload/crashes.
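The temp-file tactic can be sketched as parallel workers that each write a section to disk and hand back only a path, so full drafts never flow through the orchestrator's context. Section names and the thread-pool stand-in for sub-agents are assumptions for illustration:

```python
# Sketch of the orchestration pattern: parallel workers write sections to
# temp files; the orchestrator sees only paths and compiles at the end.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def draft_section(title: str) -> str:
    """Stand-in for a sub-agent: write a section to a temp file, return its path."""
    fd, path = tempfile.mkstemp(suffix=".md")
    with os.fdopen(fd, "w") as f:
        f.write(f"## {title}\n(section body)\n")
    return path

sections = ["Problem", "Solution", "Risks"]
with ThreadPoolExecutor() as pool:
    paths = list(pool.map(draft_section, sections))  # runs drafts in parallel

doc = "\n".join(Path(p).read_text() for p in paths)  # compile once, at the end
for p in paths:
    os.remove(p)  # clean up the temp files

print(doc.count("##"))  # one heading per section
```

Each worker's output lands on disk instead of in the conversation, which is what prevents the context overflow the episode warns about.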

  14. The learning flywheel: beginner mindset, avoiding ‘give up early,’ and choosing tools

    Hannah and Aakash close with a learning philosophy: keep asking why setups work, iterate daily, and don’t abandon the process after a bad first try. They cover what’s under-hyped (curiosity, depth), when to use chat vs coding agents, and how to create learning time by automating work.
