How I AI

How Intercom 2X'd engineering velocity with Claude Code | Brian Scanlan

Brian Scanlan is a senior principal engineer at Intercom, where he’s led the company’s transformation to AI-first engineering. In just nine months, Intercom doubled their R&D throughput while maintaining code quality, with 100% of engineers—plus designers, PMs, and TPMs—now shipping code via Claude Code.

*What you’ll learn:*

1. How Intercom doubled their merged PRs per R&D employee in just nine months using Claude Code
2. The telemetry infrastructure they built to measure AI adoption and quality across hundreds of engineers
3. Why they built a skills repository with hooks that enforce engineering standards automatically
4. How they’re preparing their product for an agent-first world with CLIs, MCPs, and ephemeral APIs
5. The permission and accountability framework that enabled rapid AI adoption
6. Why backlog zero is now achievable and what that means for engineering culture

*Brought to you by:*

• Celigo—Intelligent automation built for AI: https://celigo.com/howIAI
• Cursor—The best way to code with AI: https://www.chatprd.ai/howiai

*In this episode, we cover:*

(00:00) Introduction to Brian Scanlan
(02:40) Why Intercom went all-in on AI for both product and engineering
(05:01) The breakthrough moment with Opus 4.6 and Christmas break 2025
(07:02) Demo: Intercom’s merged PRs per R&D head
(12:50) Agent-first work as a fundamental reimagining of technical workflows
(14:27) The cost tradeoff: treating AI spend as an investment
(16:47) Measuring quality
(21:22) Demo: Shipping a redirect in the Rails monolith with Claude Code
(24:03) Creating a custom PR skill
(26:33) Building a software factory with predictable quality standards
(30:15) Telemetry infrastructure: Honeycomb for skill usage tracking
(32:10) Session data collection and personalized usage insights
(36:08) Quick overview
(39:20) Walking through Intercom’s skills repository
(42:16) Deep dive: The flaky spec skill and how it reached 100x capability
(46:44) The “and then” workflow for building comprehensive skills
(52:31) The live website and overview of workflows
(53:32) How internal AI experience informs customer product decisions
(56:18) Making SaaS products agent-friendly with CLIs and helpful hints
(01:03:49) Why conversion drop-off is invisible in agent-driven workflows
(01:05:28) Lightning round and final thoughts

*Detailed workflow walkthroughs from this episode:*

• How Intercom Doubled Engineering Output: Brian Scanlan’s 4 AI Workflows for Claude Code: https://www.chatprd.ai/how-i-ai/how-intercom-doubled-engineering-output-brian-scanlan-ai-workflows-for-claude-code
• Design an Agent-Friendly CLI to Automate SaaS Product Onboarding: https://www.chatprd.ai/how-i-ai/workflows/design-an-agent-friendly-cli-to-automate-saas-product-onboarding
• Build a Self-Improving AI Agent to Automatically Fix Flaky Tests: https://www.chatprd.ai/how-i-ai/workflows/build-a-self-improving-ai-agent-to-automatically-fix-flaky-tests
• Automate High-Quality Pull Request Descriptions with a Custom AI Skill: https://www.chatprd.ai/how-i-ai/workflows/automate-high-quality-pull-request-descriptions-with-a-custom-ai-skill

*Tools referenced:*

• Claude Code: https://claude.ai/code
• Cursor: https://cursor.com/
• Honeycomb: https://www.honeycomb.io/
• Snowflake: https://www.snowflake.com/
• Fin AI: https://www.intercom.com/fin
• Vercel: https://vercel.com/

*Other references:*

• Intercom GitHub Repo: https://github.com/intercom
• Google API Go Client Repo: https://github.com/googleapis/google-api-go-client

*Where to find Brian Scanlan:*

• X: https://x.com/brian_scanlan
• LinkedIn: https://www.linkedin.com/in/scanlanb/
• Company: https://www.intercom.com

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Brian Scanlan (guest) · Claire Vo (host)
Apr 19, 2026 · 1h 18m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Intercom doubles engineering throughput by going all-in on Claude Code

  1. Intercom committed company-wide to AI adoption in engineering after model capability inflections made agents feel “transformative,” then standardized on Claude Code to scale usage.
  2. They measure “merged PRs per R&D head” as a leading indicator of adoption and report roughly 2× throughput in nine months, with bottlenecks shifting from CI to code review.
  3. Intercom treats AI token spend as an investment phase (optimizing later) while building a software-factory approach with predictable standards enforced via skills and hooks.
  4. They built telemetry pipelines (Honeycomb, Snowflake, S3 session logs) to understand tool usage, coach individuals, and continuously improve skills rather than “flying blind.”
  5. Intercom claims quality has not degraded—citing faster idea-to-shipping times, no incident spike, and external research (Stanford) suggesting code quality trends upward.

IDEAS WORTH REMEMBERING

5 ideas

Model inflections change the constraint from tooling to imagination.

Scanlan describes a shift where engineers spend less time coaxing tools and more time delegating outcomes, enabling bigger bets and faster iteration when models reached a new capability tier.

Treat internal developer tooling like a product—with metrics, telemetry, and iteration loops.

Intercom instruments skills and sessions, mines usage data, and builds dashboards so they can debug adoption, improve workflows, and deliver “enablement” rather than just handing out an API key.
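The episode doesn’t specify Intercom’s event schema, but the idea of instrumenting each skill invocation can be sketched as emitting one structured event per use, shaped like what a pipeline into Honeycomb, S3, or Snowflake would consume. The `emit_skill_event` helper, its field names, and the stderr sink are all hypothetical:

```python
import json
import sys
import time

def emit_skill_event(skill: str, session_id: str, outcome: str) -> str:
    """Emit one structured telemetry event per skill invocation.

    A stand-in for a real event pipeline (Honeycomb/S3/Snowflake);
    the schema here is an assumption, not Intercom's actual one.
    """
    event = {
        "timestamp": time.time(),
        "skill": skill,
        "session_id": session_id,
        "outcome": outcome,
    }
    line = json.dumps(event, sort_keys=True)
    print(line, file=sys.stderr)  # real version would ship this downstream
    return line

record = emit_skill_event("create-pr", "sess-123", "success")
```

With events in this shape, dashboards over skill usage and per-person coaching become straightforward group-bys rather than guesswork.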

A simple throughput metric can drive adoption—if paired with high trust and quality guardrails.

They use merged PRs per R&D head as a crude but motivating leading indicator, while explicitly rejecting “quality slop” via standards, reviews, and targeted quality measures.
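The metric itself is simple arithmetic, which is part of why it works as a crude leading indicator. A minimal sketch, with entirely hypothetical numbers chosen only to mirror the reported 2× trend:

```python
def merged_prs_per_head(merged_pr_count: int, rd_headcount: int) -> float:
    """Merged PRs per R&D head over some window — a crude leading
    indicator of adoption, not a measure of quality."""
    if rd_headcount <= 0:
        raise ValueError("headcount must be positive")
    return merged_pr_count / rd_headcount

# Hypothetical month: merged PRs go from 900 to 1800 while
# headcount stays at 300, so the metric doubles.
before = merged_prs_per_head(900, 300)
after = merged_prs_per_head(1800, 300)
```

The point of keeping it this simple is that everyone can see how the number moves; the quality guardrails live elsewhere, in reviews and standards.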

Enforce process quality upstream using skills and hooks, not wikis.

Intercom created a “Create PR” skill because AI-generated PR descriptions degraded; they blocked default PR creation paths and required the skill, raising PR-description quality per an internal LLM judge.
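Intercom’s actual enforcement isn’t shown on screen, but Claude Code’s PreToolUse hooks can veto a tool call before it runs (a nonzero blocking exit code, conventionally 2, with stderr fed back to the agent). The script below is a hypothetical sketch of a hook in that shape; the skill name, message, and wiring are assumptions:

```python
#!/usr/bin/env python3
"""Hypothetical PreToolUse hook: block raw `gh pr create` calls so PRs
must go through a create-pr skill. Not Intercom's actual hook."""
import json
import sys

def check(payload: dict) -> int:
    """Return 0 to allow the tool call, 2 to block it."""
    command = payload.get("tool_input", {}).get("command", "")
    if "gh pr create" in command:
        # stderr is surfaced to the agent so it can self-correct.
        print("Blocked: create PRs with the create-pr skill, "
              "not `gh pr create`.", file=sys.stderr)
        return 2
    return 0

if __name__ == "__main__":
    raw = sys.stdin.read()  # the hook receives the tool call as JSON on stdin
    if raw.strip():
        sys.exit(check(json.loads(raw)))
```

Blocking the default path rather than documenting the rule in a wiki is what makes the standard self-enforcing: the agent hits the wall, reads the message, and reaches for the skill.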

Agent-first work shifts bottlenecks—fix the new choke points.

As AI increased output, CI became overloaded and had to be optimized; after that, code review became the bottleneck, implying organizations must continuously rebalance systems around higher throughput.

WORDS WORTH SAVING

5 quotes

You have to think bigger about things, or that your imagination is now the barrier, not the tool.

Brian Scanlan

Today we are seeing twice the number of throughput as we did compared to nine months ago on our engineering team. Now it's like, why can't it be 10X?

Brian Scanlan

We are treating it as, like, an investment at this point… everyone just turn on Opus for everything… and care about the bill later.

Brian Scanlan

Backlog zero is a realistic thing for teams to be able to go after.

Brian Scanlan

Your conversion rate drop-off point is somebody pressing the escape button.

Claire Vo

• AI-first company mindset and urgency
• Throughput metric: merged PRs per R&D head
• Agent-first workflow redesign
• Cost vs velocity tradeoffs (token spend as investment)
• Quality measurement and external validation
• Skills, hooks, and enforced standards (PR description skill)
• Telemetry stack: Honeycomb, S3 sessions, Snowflake warehouse
• Flaky test skill iteration to “100× capability”
• Making SaaS agent-friendly (CLIs, APIs, hints)
• Invisible conversion drop-off in agent-driven funnels

High-quality AI-generated summary created from a speaker-labeled transcript.
