CHAPTERS
Why PMs must scale: supporting bigger, cross-functional teams with AI
Aakash frames the trend: PMs increasingly support 10–20+ partners across engineering, design, data, and go-to-market. Hannah explains how roles are blending (PMs doing analysis/prototyping; engineers/designers making product decisions), making shared context the bottleneck—and the leverage point.
Team OS: the shared knowledge repo that turns context into leverage
Hannah introduces “Team OS” (Team Operating System): a single repository that holds a team’s shared context and workflows. She outlines the core structure: a root CLAUDE.md, a .claude folder for shared AI assets, and function-oriented folders for product and team artifacts.
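A hypothetical directory layout following this structure (only the root CLAUDE.md and .claude folder are named in the episode; the function-oriented folder names below are illustrative):

```
team-os/
├── CLAUDE.md            # root: lean, high-frequency shared context
├── .claude/             # shared AI assets
│   ├── commands/        # reusable shared commands
│   └── skills/          # shared skills for consistent outputs
├── product/             # PRDs, strategy docs
├── customers/           # per-customer context and meeting summaries
├── analytics/           # metrics, queries, schemas, investigations
└── engineering/         # bug investigations, RFCs
```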
Root CLAUDE.md best practices: keep it lean and action-ready
They unpack what belongs in the root CLAUDE.md and what doesn’t. Hannah emphasizes that root instructions should be minimal, high-frequency context, enabling natural-language actions like messaging the right people via Slack integrations.
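A minimal root CLAUDE.md in this spirit might look like the sketch below (names, channels, and paths are placeholders, not from the episode):

```markdown
# Team OS — root instructions

## People (for Slack messages)
- PM: Hannah (@hannah, #product)
- Eng lead: <name> (@handle, #eng)

## Where things live
- Customer notes: customers/ — read customers/CLAUDE.md first
- Analytics: analytics/ — read analytics/CLAUDE.md first

## Conventions
- Keep this file short; put details in folder-level CLAUDE.md files.
- To message someone, use the Slack integration with the handles above.
```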
Context management 101: windows, compaction, and “thinking room”
Hannah explains the theory behind the repo design: managing what the model sees, when. She defines context, context window, compaction (lossy compression when full), and thinking room (space left to reason), tying these concepts to performance and reliability.

Nested CLAUDE.md doc indexes: how Claude navigates without wasting tokens
They show how CLAUDE.md files exist at multiple folder levels as “doc indexes,” letting Claude traverse directly to relevant files instead of exploring the whole repository. Live examples demonstrate low context usage while answering customer questions accurately.
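A folder-level CLAUDE.md acting as a doc index might read like this sketch (customer and file names are invented for illustration):

```markdown
# customers/ — doc index

One folder per customer. Open only the files you need; do not scan the
whole directory.

- acme-corp/summary.md — rolling summary of all meetings (read this first)
- acme-corp/2026-01-15-call.md — raw notes from a single call
- globex/summary.md — rolling summary for Globex
```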
Customer intelligence at scale: summaries, per-customer context, and consistent formats
Hannah demonstrates how storing customer data in structured files enables fast synthesis like “Who did I meet in the last two weeks and what did I learn?” The system prioritizes summaries over raw transcripts and uses shared “skills” to standardize outputs across many contributors.
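One way to make date-range questions like “who did I meet in the last two weeks” reliably answerable is a consistent per-meeting file format; the fields below are an assumption, not the episode's exact template:

```markdown
---
customer: Acme Corp
date: 2026-01-15
attendees: [Hannah, Jane Doe (Acme)]
---

## Key learnings
- Onboarding drop-off happens at the SSO step.

## Follow-ups
- Send updated pricing one-pager.
```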
Shared skills, commands, and workflows: turning team habits into reusable AI primitives
Hannah explains how teams codify repeatable work into shared skills/commands and multi-step workflows stored alongside docs. This creates compounding leverage: everyone captures information the same way, and Claude can execute recurring processes quickly and predictably.
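In Claude Code, shared commands can live as Markdown prompt files under .claude/commands/ and be invoked as slash commands; the specific command below is a hypothetical example of codifying a team habit, not one from the episode:

```markdown
<!-- .claude/commands/log-call.md — invoked as /log-call <transcript path> -->
Read the transcript at $ARGUMENTS. Summarize it into a meeting note using
the format described in customers/CLAUDE.md, save it in that customer's
folder, and list the key learnings back to me for review.
```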
Scaling analytics with a repo: metrics, queries, schemas, and verified playbooks
They walk through an analytics folder pattern: dashboards, experiment results, investigations, plus the critical triad—metrics definitions, SQL queries, and table schemas. This empowers PMs and engineers to do correct analysis without guessing joins or definitions, reducing hallucinations and errors.
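The triad could be kept as checked-in files that carry their own context; a hypothetical verified query, with its metric definition and table assumptions inline (table and column names are invented for illustration):

```sql
-- analytics/queries/weekly_active_users.sql (illustrative)
-- Metric: WAU = distinct users with >= 1 core event in the trailing 7 days
-- Schema: events(user_id, event_name, occurred_at) — see analytics/schemas/
SELECT COUNT(DISTINCT user_id) AS wau
FROM events
WHERE occurred_at >= CURRENT_DATE - INTERVAL '7' DAY
  AND event_name IN ('session_start', 'core_action');
```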
Engineering + operations in the same system: bugs, RFCs, and shared ownership
Hannah argues Team OS is not just for PMs or engineers—everyone contributes, including ops and strategy partners. Bug investigations and RFCs become searchable institutional memory, and functional leads guide structure while the whole team maintains quality.
Automations on top of the repo: weekly synthesis, Slack updates, and process glue
They describe “shared automations” as the third pillar: using the repository as the source of truth to produce recurring outputs automatically. Example: a weekly customer research synthesis posted to Slack so the team stays continuously aligned.
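The episode doesn't specify the scheduling mechanism; one minimal sketch is a cron job driving Claude Code's headless mode (`claude -p`), assuming a shared /weekly-research-synthesis command exists in the repo (the command name and path are placeholders):

```
# Every Monday at 09:00: synthesize the week's customer research and
# post it to Slack via the team's shared command.
0 9 * * 1  cd /path/to/team-os && claude -p "/weekly-research-synthesis"
```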
How work gets done day-to-day: docs in Claude, PRs in GitHub, Slack notifications
Hannah describes the tactical operating model: write docs in Claude Code, check them into the repo, and review via pull requests—across PM, design, engineering, data, and ops. Claude can even open PRs and message reviewers via integrated tools and shared commands.
Plan mode for better docs: from basic prompts to robust, verifiable plans
Hannah demonstrates the difference between prompting and planning, using Plan mode to remove the model’s bias-for-action and produce a structured approach. She covers clearing context between tasks, adding verification criteria, checkpointing phases, and explicitly invoking writing guides/skills for consistency.
Parallel agents + long-doc execution: prompts, temp files, and orchestration
They discuss using parallel sub-agents to research and draft different sections, then orchestrating synthesis—critical for long documents given context constraints. Hannah highlights two advanced tactics: inspecting agent prompts and forcing outputs into temporary files to prevent context overload/crashes.
The learning flywheel: beginner mindset, avoiding ‘give up early,’ and choosing tools
Hannah and Aakash close with a learning philosophy: keep asking why setups work, iterate daily, and don’t abandon the process after a bad first try. They cover what’s under-hyped (curiosity, depth), when to use chat vs coding agents, and how to create learning time by automating work.