Aakash Gupta

Claude Code Secrets for PMs: The Operating System, Skills, and Data Viz

Carl Vellotti returns for his 3rd episode. His first two crossed over 1M views. Today he shows how to turn Claude Code into a full PM operating system with context management, sub-agents, self-checking skills, and Jupyter notebooks.

Full Writeup: https://www.news.aakashg.com/p/carl-vellotti-3
Transcript: https://www.aakashg.com/carl-vellotti-podcast-3/

---

Timestamps:

0:00 - Intro
1:40 - Does Claude Code still matter vs Cowork and OpenClaw
6:51 - Setting up the context status line
10:41 - Ads
12:03 - Sub-agents for context preservation
17:49 - Creating skills live
23:58 - The ask user questions tool
31:13 - Ads
33:33 - Tool-powered skills with Tavily and Firecrawl
36:57 - CLI vs MCP vs API hierarchy
39:30 - Make slides skill with Puppeteer
43:32 - Auto-invoking skills with hooks
46:49 - Jupyter notebooks for data trust
55:09 - The operating system file structure
1:05:58 - Outro

---

🏆 Thanks to our sponsors:

1. Bolt: Ship AI-powered products 10x faster - https://bolt.new/solutions/product-manager?utm_source=Promoted&utm_medium=email&utm_campaign=aakash-product-growth
2. Amplitude: The market leader in product analytics - https://amplitude.com/session-replay?utm_campaign=session-replay-launch-2025&utm_source=linkedin&utm_medium=organic-social&utm_content=productgrowthpodcast
3. Pendo: The #1 software experience management platform - http://www.pendo.io/aakash
4. NayaOne: Airgapped, cloud-agnostic sandbox - https://nayaone.com/aakash/
5. Product Faculty: Get $550 off their #1 AI PM Certification with my link - https://maven.com/product-faculty/ai-product-management-certification?promoCode=AAKASH550C7

---

Key Takeaways:

1. Context management is the real skill - A single web search eats 10% of your context. Run /context to see what is consuming it. The system prompt and MCPs take 10-16% before you type one message.
2. Sub-agents save 20x context - Delegate research to a sub-agent. The same task costs 0.5% instead of 10%. Your main session only gets the summary.
3. Replace MCPs with CLIs - MCPs eat context just by existing. CLIs have zero overhead. The GitHub, Vercel, and Google Workspace CLIs are all dramatically more efficient.
4. Powerful skills need zero code - Anthropic's front-end design plugin is just a good prompt. No APIs or tooling. Just rules that tell Claude "do not look like AI."
5. Give Claude self-checking tools - The make-slides skill uses Puppeteer to screenshot output, measure overflow, and fix issues before you see them.
6. Repeat prompts for better quality - A Google paper showed pasting a prompt twice helps. Tell Claude to double-check against the skill instructions after the first pass.
7. Use hooks to auto-invoke skills - A user_prompt_submit hook matches your words against skill keywords instantly. Zero context cost.
8. Jupyter notebooks solve data trust - Every analysis shows the exact code, inputs, and outputs. Traceable and reproducible.
9. Build an operating system - Knowledge folder for people context. Projects folder for task isolation. Tools folder for scripts. CLAUDE.md for identity.
10. The people folder compounds - Connect meeting transcription. After every meeting, update each person's dossier. Every prompt gets more specific over time.

---

👨‍💻 Where to find Carl Vellotti:

LinkedIn: https://www.linkedin.com/in/carlvellotti/
Twitter/X: https://x.com/carlvellotti
Newsletter: The Full-Stack PM

👨‍💻 Where to find Aakash:

Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aakashgupta/
Newsletter: https://www.news.aakashg.com

#claudecode #aipm

---

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

Host: Aakash Gupta · Guest: Carl Vellotti
Mar 30, 2026 · 1h 6m · Watch on YouTube ↗

CHAPTERS

  1. Why Claude Code is exploding—and why most people still use it like a chatbot

    Aakash frames Claude Code’s breakout growth and the core problem: people treat it like a prompt-in, answer-out chatbot instead of a system for doing real work. Carl previews an “operating system” approach: context management, skills, sub-agents, and workflows that make outputs higher-quality and more reliable.

  2. Does Claude Code still matter vs Cowork and OpenClaw?

    Carl compares Claude Code to newer products like Cowork (a UI layer) and OpenClaw (more autonomous/always-on monitoring). The conclusion: for PMs who must stay in the driver’s seat and do the most powerful work, Claude Code remains the core foundation tool.

  3. Make context visible: customizing the status line to track context usage

    Carl demonstrates a fast win: customize Claude Code’s UI status line to show model, working directory, and a color-coded context meter. This gives immediate feedback on which actions consume context and helps prevent surprise compactions.
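The status line is just a small executable: Claude Code pipes session JSON (model, workspace, and more) to it on stdin and renders whatever single line it prints. Here is a minimal sketch in Python; the `model.display_name` and `workspace.current_dir` fields follow Anthropic's docs, but the `context_pct` field is an assumption, so adapt it to whatever your version actually emits.

```python
#!/usr/bin/env python3
"""Sketch of a Claude Code status-line command: reads the session JSON
piped on stdin and prints one line with model, directory, and a
color-coded context meter."""
import json
import sys

def render(model: str, cwd: str, used_pct: float) -> str:
    # ANSI colors: green under 50% used, yellow under 80%, red above.
    color = "\033[32m" if used_pct < 50 else "\033[33m" if used_pct < 80 else "\033[31m"
    filled = int(used_pct / 10)                     # 10-segment bar
    bar = "█" * filled + "░" * (10 - filled)
    return f"{model} | {cwd} | {color}{bar} {used_pct:.0f}%\033[0m"

def main() -> None:
    try:
        data = json.load(sys.stdin)
    except (json.JSONDecodeError, ValueError):
        return  # no payload (e.g. run outside Claude Code)
    model = data.get("model", {}).get("display_name", "?")
    cwd = data.get("workspace", {}).get("current_dir", "?")
    # NOTE: a context-usage field in the payload is an assumption;
    # check what your Claude Code version actually provides.
    pct = float(data.get("context_pct", 0))
    print(render(model, cwd, pct))

if __name__ == "__main__":
    main()
```

A script like this would be registered in `~/.claude/settings.json` under `"statusLine": {"type": "command", "command": "python3 ~/.claude/statusline.py"}` (verify the exact schema against current docs).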

  4. Why context limits hurt: compaction delays, “context rot,” and quality drop-off

    They explain why context management matters even though Claude can compact automatically. Compaction interrupts flow, tool calls eat context rapidly (especially research), and long conversations can degrade output quality—so minimizing irrelevant context improves results.

  5. Diagnose what’s consuming context: /context and trimming unnecessary tools

    Carl shows the /context view to reveal baseline token usage from system prompts and enabled tools/MCPs. A key optimization is disabling tools you don’t need, because some integrations consume significant context just by being available.

  6. Sub-agents for context preservation: delegate “expensive” work and keep the main thread clean

    Carl demonstrates using sub-agents for research so the main session only receives summarized results rather than all intermediate tool calls. He also shows running agents in the background to keep moving while parallel work completes.
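For reference, Claude Code sub-agents live as markdown files under `.claude/agents/`. A minimal research delegate might look like this; the frontmatter keys follow Anthropic's documented format, while the prompt body is a sketch of the pattern Carl describes (do the expensive work, return only a summary):

```markdown
---
name: researcher
description: Use for any web research task. Returns a short summary so the main session never sees raw search results.
tools: WebSearch, WebFetch
---
You are a research sub-agent. Run the searches yourself, read the results,
and reply with only a concise summary: key findings, sources, and open
questions. Never return raw page content.
```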

  7. Recover from wrong turns: stopping, rolling back, and saving context

    They cover practical control shortcuts: Escape to stop a run, and double-Escape to access message history and roll back. Rolling back removes erroneous steps from the session entirely, which can also conserve context.

  8. Creating skills live: build a “Context Guard” skill to decide when to use sub-agents

    Carl deletes a flawed skill and rebuilds a new one in natural language: “Context Guard,” which evaluates whether a task should be delegated to a sub-agent. They also discuss security: avoid random skill marketplaces due to malware/prompt-injection risks.
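A skill is a folder containing a `SKILL.md` whose frontmatter tells Claude when to load it. A hypothetical reconstruction of a Context Guard skill, with rules sketched from the discussion rather than copied from Carl's actual file:

```markdown
---
name: context-guard
description: Decide whether a requested task should run in the main session or be delegated to a sub-agent to preserve context.
---
Before starting any task, estimate its context cost:
- Web research, large file reads, or multi-step tool loops: delegate to a
  sub-agent and ask for a summary only.
- Small edits or questions answerable from current context: do it inline.
State the choice in one line before proceeding.
```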

  9. Ask User Questions tool: turn Claude into an interviewer to remove assumptions

    Carl highlights his favorite feature: the Ask User Questions tool, which generates an interactive UI to collect clarifications. This reduces hidden assumptions and improves final output quality by systematically gathering missing context before execution.

  10. Skills that are “just prompts”: the Front-End Design plugin and why it works

    They show how an official front-end design skill dramatically improves UI quality without adding new capabilities—just better rules and constraints. The key lesson: a well-crafted prompt packaged as a skill can outperform generic prompting.

  11. Tool-powered research skills: Tavily + Firecrawl for higher-trust web research

    Carl explains why default web search is unreliable and demonstrates a research skill using Tavily (better search) plus Firecrawl (clean markdown scraping). This improves source quality and reduces junk content in context.
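Such a skill is again just a `SKILL.md`; the frontmatter keys follow Anthropic's skill format, and the steps below are a sketch reconstructed from the discussion (how Tavily and Firecrawl are invoked depends entirely on your own setup):

```markdown
---
name: deep-research
description: Use for any web research request instead of the default web search. Searches with Tavily and scrapes pages to clean markdown with Firecrawl.
---
1. Query Tavily for the top sources on the topic.
2. Scrape each promising URL to markdown with Firecrawl.
3. Reply with a summary of findings plus source links; never dump raw
   page content into the conversation.
```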

  12. CLI vs MCP vs API: a practical hierarchy for context efficiency and reliability

    They argue CLIs are often superior for agents: they’re easy for models to operate and don’t consume as much context as MCPs. GitHub and Vercel CLIs are called out as especially powerful for “vibe coding” and deployment workflows.

  13. Slide-making that self-checks: a “Make Slides” skill with Puppeteer and iterative validation

    Carl shares an advanced skill that generates HTML slides and uses Puppeteer to screenshot and detect layout overflow, enabling the model to verify visuals. They also discuss forcing multiple iterations (builder/validator pattern) to improve design quality.
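The real skill drives Puppeteer from Node to take screenshots and measure elements; the decision logic at its core is simple enough to sketch separately. A hypothetical Python version of the overflow check, operating on the scroll/client dimensions that Puppeteer's `page.evaluate` would collect per slide:

```python
"""Sketch of the overflow check a slide-validation skill could run on
dimensions collected in the browser (e.g. via Puppeteer's page.evaluate).
The input format is an assumption for illustration."""

def find_overflows(slides: list[dict]) -> list[str]:
    """Return human-readable issues for slides whose content spills
    outside the viewport. Each dict carries the element's measured
    scroll and client dimensions in pixels."""
    issues = []
    for s in slides:
        if s["scrollWidth"] > s["clientWidth"]:
            issues.append(f"slide {s['id']}: horizontal overflow "
                          f"({s['scrollWidth']}px > {s['clientWidth']}px)")
        if s["scrollHeight"] > s["clientHeight"]:
            issues.append(f"slide {s['id']}: vertical overflow "
                          f"({s['scrollHeight']}px > {s['clientHeight']}px)")
    return issues
```

In the builder/validator pattern, a non-empty result is fed back to the model as "fix these slides and re-validate" until the list comes back empty.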

  14. Auto-invoking skills with hooks: keyword matching on user prompt submit

    They tackle finicky skill auto-invocation by using hooks—automation points in the Claude Code lifecycle. A user-prompt-submit hook runs a script to detect skill keywords and nudge the model to apply the relevant skill without bloating CLAUDE.md.
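Concretely, a UserPromptSubmit hook is a command that receives JSON on stdin (including the user's prompt) and whose stdout is injected into context. A minimal sketch; the keyword-to-skill map is hypothetical, and the payload field name should be verified against your Claude Code version:

```python
#!/usr/bin/env python3
"""Sketch of a UserPromptSubmit hook: match the user's prompt against
skill keywords and nudge the model to apply the matching skill."""
import json
import sys

# Hypothetical keyword -> skill map; tune to your own skills.
SKILLS = {
    "make-slides": ["slides", "deck", "presentation"],
    "deep-research": ["research", "competitor", "market"],
}

def match_skills(prompt: str) -> list[str]:
    p = prompt.lower()
    return [skill for skill, words in SKILLS.items()
            if any(w in p for w in words)]

def main() -> None:
    try:
        prompt = json.load(sys.stdin).get("prompt", "")
    except (json.JSONDecodeError, ValueError):
        return  # no payload (e.g. run outside Claude Code)
    for skill in match_skills(prompt):
        # Stdout from this hook is appended to the model's context.
        print(f"Reminder: the '{skill}' skill applies to this request; use it.")

if __name__ == "__main__":
    main()
```

It would be wired up under the `hooks.UserPromptSubmit` key in `settings.json` as a `{"type": "command", "command": ...}` entry (check current docs for the exact schema). Because it is a plain string match in a script, it adds zero tokens when no skill applies.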

  15. Trustworthy data analysis with Jupyter notebooks: traceable “proof of work” and visualization

    Carl demonstrates using Jupyter notebooks inside Cursor/VS Code to make AI analysis transparent and reproducible. The workflow moves from inspecting CSV structure to charts and correlation heatmaps, giving PMs traceable calculations they can defend to stakeholders.

  16. Building a Claude Code “operating system”: folder structure, projects, tools, tasks, and CLAUDE.md

    Carl outlines a cohesive file system so outputs don’t scatter: Knowledge (people/refs), Meetings, Projects (artifact hub), Data, Tasks/Ideas, and Tools (scripts). He explains why CLAUDE.md is special—always in context—so it should encode stable working preferences and company context.
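One way that structure could look on disk; the folder names come from the discussion, but the exact layout is a sketch:

```
~/claude-os/
├── CLAUDE.md          # identity: role, company context, working preferences
├── knowledge/
│   └── people/        # one dossier per colleague, updated after meetings
├── meetings/          # transcripts and notes
├── projects/          # one folder per initiative; artifacts live here
├── data/              # CSVs and exports for analysis
├── tasks/             # active tasks and the ideas backlog
└── tools/             # reusable scripts that skills can call
```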
