Aakash Gupta

How this PM Used Claude Code to Support 20 People

Aakash Gupta and Hannah Stulberg on building a Team OS with Claude Code for PM leverage.

Hannah Stulberg (guest) · Aakash Gupta (host)
Apr 7, 2026 · 1h 10m · Watch on YouTube ↗
Topics: Team OS (team knowledge repo) concept · Root and nested CLAUDE.md doc indexes · Context window, compaction, and thinking room · Structured customer research: summaries vs transcripts · Shared skills/commands and standard templates · Analytics scaling: metrics, SQL queries, schemas, playbooks · Plan mode, verification, parallel agents, saved plan files · GitHub PR workflow for non-technical teammates · MCP/CLI integrations (Slack, Google Docs, Snowflake, Playwright) · Beginner mindset and continuous iteration

In this episode, Hannah Stulberg joins Aakash Gupta to explain how one PM used Claude Code to support 20 people. At the center is a “Team OS”: a Git-backed knowledge base designed so coding agents can retrieve the right context on demand without bloating the LLM session.

At a glance

WHAT IT’S REALLY ABOUT

Building a Team OS with Claude Code for PM leverage

  1. The episode introduces a “Team OS,” a Git-backed knowledge base designed so coding agents can retrieve the right context on demand without bloating the LLM session.
  2. Hannah explains context-management fundamentals—context window, compaction, and “thinking room”—and shows how nested, lean CLAUDE.md doc indexes prevent unnecessary context loading.
  3. The team standardizes reusable “skills,” commands, and workflows so unstructured inputs (like customer calls) become consistent artifacts that Claude can synthesize reliably.
  4. Analytics is scaled by storing metric definitions, vetted SQL queries, and table schemas in structured folders, reducing hallucinations and enabling PMs/engineers to self-serve analysis.
  5. High-quality docs come from deliberate planning using Plan mode, checkpoints, verification criteria, parallel agents writing to temp files, and saving plan files to avoid rework and “context rot.”
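
A Team OS repo along these lines might be laid out as follows. This is a hypothetical sketch assembled from the topics above; the episode does not enumerate Hannah's actual folder names:

```
team-os/
├── CLAUDE.md              # lean root: doc index, team roster, key channels
├── prds/
│   └── CLAUDE.md          # local index of active and shipped PRDs
├── research/
│   ├── CLAUDE.md          # index: where summaries vs transcripts live
│   ├── summaries/         # one structured summary per customer call
│   └── transcripts/       # raw transcripts, loaded only when needed
├── analytics/
│   ├── metrics/           # vetted metric definitions
│   ├── queries/           # reviewed SQL
│   └── schemas/           # table schemas and join keys
└── plans/                 # saved plan files for multi-day work
```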

IDEAS WORTH REMEMBERING

10 ideas

Treat team context like a version-controlled product, not scattered docs.

By keeping PRDs, research, analytics references, and workflows in a repo, teams create shared, searchable context that an AI agent can use consistently across roles.

Keep the root CLAUDE.md extremely lean and push detail downward.

The root file loads every session, so it should include only high-frequency essentials (doc index, team roster/handles, key channels) while nested CLAUDE.md files act as local indexes.
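
A root CLAUDE.md in this spirit might contain little more than the following. The field names and handles are hypothetical; the episode does not list Hannah's exact contents:

```markdown
# Team OS

## Doc index
- PRDs: ./prds/ (see ./prds/CLAUDE.md)
- Customer research: ./research/
- Analytics references: ./analytics/

## Team
- Hannah Stulberg — PM (@hannah)
- (remaining roster and handles)

## Key channels
- #product-launches, #data-questions
```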

Doc indexes reduce token burn and improve reasoning quality.

Nested indexes let Claude navigate directly to relevant folders, conserving context window and preserving “thinking room,” which improves reasoning compared with broad repo exploration.
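
A nested index of this kind (hypothetical contents) tells Claude which subfolder to open, so nothing is loaded until it is actually needed:

```markdown
# research/ — doc index

- summaries/ — one structured summary per customer call; read these first
- transcripts/ — raw call transcripts; open only when a summary lacks detail
- competitive/ — competitor notes, refreshed at each launch
```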

Separate “summary” artifacts from “raw” artifacts to maximize fidelity.

Storing call summaries in a consistent format allows fast synthesis across many meetings, while transcripts remain available only when deeper detail is needed.

Standardized skills turn messy human input into machine-friendly structure.

Team-wide templates for things like customer call summaries create uniform outputs, enabling reliable cross-customer analysis even when many people contribute.
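
A standardized call-summary skill might emit a template along these lines. The headers are hypothetical; the team's actual template is not shown in the episode:

```markdown
# Customer call — {company}, {date}

## Attendees
## Context and goals
## Key quotes
## Pain points (ranked)
## Feature requests
## Follow-ups and owners
```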

Scale analytics safely by storing vetted metric definitions, queries, and schemas.

When analysts/data scientists curate the “metrics/queries/schemas” folders, PMs and engineers can self-serve correct analysis and reduce hallucinations from ad-hoc querying.
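
A vetted query file in such a folder might pair the metric definition with its canonical SQL. Table and column names here are hypothetical:

```sql
-- metric: weekly_active_teams
-- definition: distinct teams with >= 1 qualifying event in the week
-- owner: data science (audited before others rely on it)
SELECT DATE_TRUNC('week', event_ts) AS week,
       COUNT(DISTINCT team_id)      AS weekly_active_teams
FROM events
WHERE event_type IN ('doc_created', 'query_run')
GROUP BY 1
ORDER BY 1;
```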

Make repository updates a launch gate to prevent ‘context rot.’

Hannah’s team doesn’t ship features until the repo is updated, ensuring the AI agent’s knowledge stays current and the team can operate without tribal memory.

Use Plan mode to eliminate the model’s ‘bias for action’ on complex tasks.

Plan mode forces up-front alignment on phases, structure, sources, and verification so you don’t waste time correcting a misframed first draft after the fact.

Parallelize long work, but require agents to write to temp files.

If multiple sub-agents return large outputs directly into the parent context, you risk crashes/compaction loss; temp files keep outputs stable for later synthesis.

Store plan files as first-class artifacts for reuse and continuity.

Plans take time to refine and the default plan files are ephemeral, so saving plans in the repo lets the team repeat complex processes and resume multi-day work reliably.
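
A saved plan file might be committed along these lines, following the phases/checkpoints/verification approach described in the episode (the structure below is a hypothetical sketch):

```markdown
# Plan: Q2 pricing-page PRD

## Phase 1 — Research
- Sources: research/summaries/, analytics/metrics/
- Checkpoint: confirm top 3 pain points with the PM before drafting

## Phase 2 — Draft
- Sub-agents write sections to temp files, synthesized after review

## Verification
- Every claim cites a summary file or a vetted query
- PRD includes all sections required by the team template
```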

WORDS WORTH SAVING

5 quotes

I have spent now, like, 1,500 hours in Claude, and I'm still iterating on my setup and improving it literally every single day.

Hannah Stulberg

You don't want very much in your CLAUDE.md file. CLAUDE.mds should be very, very, very lean.

Hannah Stulberg

The whole repository is structured around helping Claude read and use the right information at the right time.

Hannah Stulberg

When we're rolling out a new feature, the feature is not rolled out until the repository is updated.

Hannah Stulberg

Claude is like a really, really eager and highly talented junior employee.

Hannah Stulberg

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

What exact fields are in your root CLAUDE.md today, and what did you remove after learning it should stay “very lean”?

How do you decide which information qualifies as “needed in 80% of sessions” for a nested CLAUDE.md (customer vs folder vs root)?

Can you share a concrete example of a ‘customer call summary skill’ template (headers/sections) that your whole team standardizes on?

What’s your process for preventing ‘context rot’ beyond updating competitive intel—do you have review cadences, owners, or automated checks?

In the analytics folder, what validation steps does the data scientist use to ‘audit’ metric definitions, join keys, and queries before others rely on them?

