How I AI

Claude Code + 15 repos: how a non-engineer answers every customer question | Al Chen

Al Chen is a field engineer at Galileo, an observability platform for AI applications, where he works on the front lines with enterprise customers asking highly technical questions. Despite never having held an engineering role, Al has built a system using Claude Code to query Galileo’s 15 separate repositories, combine that with Confluence documentation and customer-specific quirks, and deliver hyper-personalized answers that would otherwise require constant engineering support.

*What you’ll learn:*

1. How to use Claude Code to query multiple repositories simultaneously for customer support
2. Why code is often a better source of truth than documentation
3. How to combine repository context with Confluence and Slack using MCPs
4. The “customer quirks” system that creates hyper-personalized deployment guides
5. How to build virtuous loops that turn single customer questions into scalable knowledge
6. Why information organization matters less in the AI era
7. A simple 16-line script (written by Claude Code) that pulls the latest main branch across all your repositories to keep your context current
8. How to reduce engineering interruptions to near-zero by empowering customer-facing teams to query the codebase directly

*Brought to you by:*

• Orkes—The enterprise platform for reliable applications and agentic workflows: https://www.orkes.io/
• Tines—Start building intelligent workflows today: https://tines.com/howiai

*In this episode, we cover:*

(00:00) Introduction to Al Chen
(02:50) The problem: documentation wasn’t enough
(04:23) Pulling 15 repos into VS Code
(06:03) How Claude Code queries the entire codebase
(08:00) Why current code beats documentation
(08:31) The pull script that keeps everything updated
(09:54) Opening projects at the multi-repo level
(11:40) Live demo: answering deployment questions
(13:25) The customer quirks system
(15:00) Living in chaos: why organization matters less now
(17:03) Competing on customer experience, not just product
(18:20) Should customers be able to query the code directly?
(20:05) Where humans still add value
(25:46) Using AI for reactive Slack support
(29:16) The “and then” workflow discovery
(32:07) Scaling processes across the team
(34:07) Lightning round and final thoughts

*Detailed workflow walkthroughs from this episode:*

• How Al Chen Uses Claude Code and 15 Repos to Answer Any Customer Question: https://www.chatprd.ai/how-i-ai/claude-code-and-repos-to-answer-any-customer-question
• Automatically Create a Knowledge Base from Slack Support Threads: https://www.chatprd.ai/how-i-ai/workflows/automatically-create-a-knowledge-base-from-slack-support-threads
• How to Use AI to Answer Customer Questions from Your Entire Codebase: https://www.chatprd.ai/how-i-ai/workflows/how-to-use-ai-to-answer-customer-questions-from-your-entire-codebase

*Tools referenced:*

• Claude Code: https://claude.ai/code
• VS Code: https://code.visualstudio.com/
• Pylon: https://usepylon.com/
• Confluence: https://www.atlassian.com/software/confluence

*Other references:*

• Slack: https://slack.com/
• Kubernetes: https://kubernetes.io/
• Stack Overflow: https://stackoverflow.com/
• Intercom: https://www.intercom.com/

*Where to find Al Chen:*

• LinkedIn: https://www.linkedin.com/in/thealchen/
• Company: https://www.rungalileo.io

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Guest: Al Chen · Host: Claire Vo
Apr 6, 2026 · 45m · Watch on YouTube ↗

CHAPTERS

  1. 4:23 – 6:03

    Cloning 15 repositories into one VS Code workspace

    Galileo’s platform spans many services and repositories rather than a monorepo. Al’s key unlock was pulling ~15 repos into a single parent directory and opening them together so Claude Code can traverse the whole product surface area.

  2. 6:03 – 8:00

    How Claude Code answers questions by traversing the full codebase

    Al explains his workflow: ask Claude Code to inspect specific repos (e.g., API + AuthZ) and expand to others as needed. This reduces constant Slack pings to engineering and helps Al learn the system while answering customers.

  3. 8:00 – 8:31

    Why “current code” beats docs as the source of truth

    Claire highlights that code changes daily and documentation often lags. Using the main branch as the canonical reference helps ensure answers reflect what the product actually does right now.

  4. 8:31 – 9:54

    The 16-line ‘pull all’ script to keep 15 repos up to date

    Al shares how he automated staying current: Claude Code generated a short script to git-pull main across every local repo. This replaced a manual, unscalable routine of pulling each repo one-by-one.

  5. 9:54 – 11:40

    Opening projects at the right level: multi-repo vs monorepo context strategy

    Claire and Al discuss an underused tactic: opening the IDE at the correct directory level for the question at hand. Going “up a level” can enable cross-service reasoning, though it may introduce context bloat if too broad.

  6. 11:40 – 13:25

    Live demo workflow: deployment Q&A using Confluence MCP + codebase

    Al demos a deployment-focused custom command (“DPL”) that starts by pulling relevant Confluence guidance and then falls back to scanning repos for missing specifics. The result is a step-by-step plan tailored to a customer’s constraints (e.g., no CRDs, Google Secrets Manager).

  7. 13:25 – 15:00

    The ‘customer quirks’ system: micro-documentation that drives personalized answers

    Al maintains an evolving Confluence page with per-customer constraints—air-gapped rules, secret storage, namespace patterns, encryption requirements, etc. Claude incorporates these quirks to generate responses that build trust and avoid generic guidance.

  8. 15:00 – 17:03

    Living with more chaos: AI as the cross-system information navigator

    Claire argues teams can be less rigid about “one source of truth” because AI can traverse Slack, Confluence, Notion, and code. Al reinforces: save valuable answers wherever they occur, then feed them back as context via MCPs and summaries.

  9. 17:03 – 18:20

    Competing on customer experience, not only product velocity

    Claire reframes AI leverage as a customer-experience differentiator, not just an engineering accelerator. Al’s approach helps customers feel heard through bespoke, environment-aware answers that reduce time-to-deploy and increase confidence.

  10. 18:20 – 20:05

    Should customers query the code directly? Open-source vs proprietary tradeoffs

    Al explores the logical endpoint: if customers could query the code, he wouldn’t be a bottleneck. He cites LangChain’s open-source support patterns, but notes proprietary code and security make direct access difficult—suggesting sanitized or limited-access alternatives.

  11. 20:05 – 25:46

    Where humans still add value: judgment, tone, verification, and relationships

    Al emphasizes he doesn’t blindly paste AI outputs—he edits for brevity, clarity, and human tone, and sanity-checks with engineers for edge cases or upcoming refactors. Claire adds the relationship factor: customers still want a trusted human counterpart.

  12. 25:46 – 32:07

    Reactive Slack support to knowledge base: Pylon workflow + the ‘and then’ loop

    Al shows how Slack-based customer support can become durable documentation using Pylon to generate help-article drafts from threads. Claire describes the broader “and then” mindset—turn each solved question into training, documentation, SEO, and roadmap signals.

  13. 32:07

    Scaling the workflow across the team + lightning round prompting tactics

    Al explains that adoption is informal but driven by sharing results and coaching teammates to clone repos, index code, and use Claude Code effectively. In the lightning round, they cover repo-access concerns, raising technical literacy, and prompting techniques such as “think harder” and asking for sources and citations.

  14. Meet Al Chen: field engineering on the front lines of enterprise AI observability

    Claire Vo introduces Al Chen from Galileo’s field engineering team and frames the episode around using code as a customer-support advantage. The focus is on answering nuanced enterprise questions faster and more accurately by querying real product internals.

  15. Why documentation and generic AI answers failed customers

    Al describes the moment he realized public documentation—even when summarized by ChatGPT/Claude—still didn’t produce what customers needed. Customers wanted the real “how it works” flow across services, not a high-level docs response.
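Claude Code supports project-level custom slash commands defined as markdown files under `.claude/commands/`, with `$ARGUMENTS` substituted from whatever follows the command. A hypothetical sketch of what a deployment command like the “DPL” demoed in chapter 6 might contain (the filename, wording, and ordering of steps are all assumptions, not Al’s actual command):

```markdown
<!-- .claude/commands/dpl.md — hypothetical sketch -->
Answer this customer deployment question: $ARGUMENTS

1. First search the Confluence deployment guides (via the Confluence MCP
   server) for relevant, current guidance.
2. If specifics are missing, scan the local repos (Helm charts, config,
   API code) to fill the gaps from the code itself.
3. Check the customer quirks page for this customer's constraints
   (e.g., no CRDs, Google Secrets Manager) and tailor every step to them.
4. Output a numbered, step-by-step deployment plan and cite your sources.
```

The docs-first, code-as-fallback ordering mirrors the demo: Confluence gives the sanctioned procedure, and the repos supply whatever the docs lag on.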
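The “customer quirks” micro-documentation from chapter 7 is just a lightweight, per-customer list of constraints that gets pulled into context. A hypothetical excerpt of what one entry on such a Confluence page might look like (the customer name and specific values are invented for illustration):

```markdown
## Acme Corp (hypothetical example)
- Air-gapped: no outbound internet; images pulled from an internal registry
- Secrets: Google Secrets Manager (no Kubernetes Secrets)
- CRDs: not permitted by the platform team
- Namespaces: follow the customer's established naming pattern
- Encryption: TLS required in transit
```

Because the entries are short and structured, Claude can fold them into any deployment answer, which is what turns a generic response into one the customer trusts.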
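The multi-repo workspace from chapter 1 can be sketched as a small setup helper: clone every repository into one parent directory, then open that directory (not an individual repo) so Claude Code can traverse the whole product surface area. This is a minimal sketch, not Al’s actual setup; the function name and the example org/repo URLs are hypothetical.

```shell
# One-time setup: clone each repo into a single parent directory.
setup_workspace() {
  local parent="$1"; shift
  mkdir -p "$parent"
  local url
  for url in "$@"; do
    git -C "$parent" clone "$url"   # clones into $parent/<repo-name>
  done
}

# Example (hypothetical repo URLs):
# setup_workspace ~/galileo-repos \
#   git@github.com:example-org/api.git \
#   git@github.com:example-org/authz.git
# code ~/galileo-repos   # open VS Code at the parent level, not inside one repo
```

Opening the editor at the parent level is the key move: Claude Code’s context then spans every service instead of a single repo.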
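The “pull all” helper from chapter 4 amounts to looping over every repo under the parent directory and pulling the main branch. A minimal sketch of that idea follows; the default path, branch name, and function name are assumptions, not the actual 16-line script Claude Code generated for Al.

```shell
# Update every git repo sitting directly under one parent directory.
pull_all() {
  local parent="${1:-$HOME/galileo-repos}" branch="${2:-main}"
  local repo
  for repo in "$parent"/*/; do
    [ -d "$repo/.git" ] || continue          # skip anything that isn't a git repo
    echo "Updating $(basename "$repo") ..."
    git -C "$repo" pull origin "$branch" || echo "  pull failed, continuing"
  done
}

pull_all "$@"
```

Running this before a support session keeps the local checkouts in sync with main, so answers reflect what the product does right now rather than what it did last month.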
