How I AI

How Intercom 2X'd engineering velocity with Claude Code | Brian Scanlan

Brian Scanlan is a senior principal engineer at Intercom, where he's led the company's transformation to AI-first engineering. In just nine months, Intercom doubled their R&D throughput while maintaining code quality, with 100% of engineers (plus designers, PMs, and TPMs) now shipping code via Claude Code.

*What you'll learn:*

1. How Intercom doubled their merged PRs per R&D employee in just nine months using Claude Code
2. The telemetry infrastructure they built to measure AI adoption and quality across hundreds of engineers
3. Why they built a skills repository with hooks that enforce engineering standards automatically
4. How they're preparing their product for an agent-first world with CLIs, MCPs, and ephemeral APIs
5. The permission and accountability framework that enabled rapid AI adoption
6. Why backlog zero is now achievable and what that means for engineering culture

*Brought to you by:*

• Celigo, intelligent automation built for AI: https://celigo.com/howIAI
• Cursor, the best way to code with AI: https://www.chatprd.ai/howiai

*In this episode, we cover:*

(00:00) Introduction to Brian Scanlan
(02:40) Why Intercom went all-in on AI for both product and engineering
(05:01) The breakthrough moment with Opus 4.6 and Christmas break 2025
(07:02) Demo: Intercom's merged PRs per R&D head
(12:50) Agent-first work as a fundamental reimagining of technical workflows
(14:27) The cost tradeoff: treating AI spend as an investment
(16:47) Measuring quality
(21:22) Demo: Shipping a redirect in the Rails monolith with Claude Code
(24:03) Creating a custom PR skill
(26:33) Building a software factory with predictable quality standards
(30:15) Telemetry infrastructure: Honeycomb for skill usage tracking
(32:10) Session data collection and personalized usage insights
(36:08) Quick overview
(39:20) Walking through Intercom's skills repository
(42:16) Deep dive: The flaky spec skill and how it reached 100x capability
(46:44) The "and then" workflow for building comprehensive skills
(52:31) The live website and overview of workflows
(53:32) How internal AI experience informs customer product decisions
(56:18) Making SaaS products agent-friendly with CLIs and helpful hints
(01:03:49) Why conversion drop-off is invisible in agent-driven workflows
(01:05:28) Lightning round and final thoughts

*Detailed workflow walkthroughs from this episode:*

• How Intercom Doubled Engineering Output: Brian Scanlan's 4 AI Workflows for Claude Code: https://www.chatprd.ai/how-i-ai/how-intercom-doubled-engineering-output-brian-scanlan-ai-workflows-for-claude-code
• Design an Agent-Friendly CLI to Automate SaaS Product Onboarding: https://www.chatprd.ai/how-i-ai/workflows/design-an-agent-friendly-cli-to-automate-saas-product-onboarding
• Build a Self-Improving AI Agent to Automatically Fix Flaky Tests: https://www.chatprd.ai/how-i-ai/workflows/build-a-self-improving-ai-agent-to-automatically-fix-flaky-tests
• Automate High-Quality Pull Request Descriptions with a Custom AI Skill: https://www.chatprd.ai/how-i-ai/workflows/automate-high-quality-pull-request-descriptions-with-a-custom-ai-skill

*Tools referenced:*

• Claude Code: https://claude.ai/code
• Cursor: https://cursor.com/
• Honeycomb: https://www.honeycomb.io/
• Snowflake: https://www.snowflake.com/
• Fin AI: https://www.intercom.com/fin
• Vercel: https://vercel.com/

*Other references:*

• Intercom GitHub Repo: https://github.com/intercom
• Google API Go Client Repo: https://github.com/googleapis/google-api-go-client

*Where to find Brian Scanlan:*

• X: https://x.com/brian_scanlan
• LinkedIn: https://www.linkedin.com/in/scanlanb/
• Company: https://www.intercom.com

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Brian Scanlan (guest) · Claire Vo (host)
Apr 20, 2026 · 1h 18m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 5:01

    Meet Brian Scanlan & Intercom’s urgency to “meet the moment” with AI

    Claire introduces Brian and frames Intercom as a company that embraced AI both in customer-facing product and internally in engineering. Brian explains why the organization felt urgency: early tools hinted at value, but they were waiting for a true inflection point that would make adoption unquestionably transformative.

  2. 5:01 – 7:02

    The inflection: Opus 4.6, Christmas break, and deciding to go all-in on Claude Code

    Brian and Claire describe a sharp capability jump around late 2025 models (Opus 4.6), where prompting shifted from ‘tool babysitting’ to ‘ideas at speed.’ Intercom returned from the holidays convinced the world had changed and committed to standardizing on Claude Code over a mix of tools.

  3. 7:02 – 12:50

    Proving velocity: tracking merged PRs per R&D head and setting a 2× goal

    Brian shows how Intercom operationalized adoption with measurable goals, using merged pull requests per R&D head as a leading indicator. Claire contextualizes this as ‘treating the org like a product,’ and they discuss why PR throughput—while imperfect—can still be a useful adoption signal in a high-trust culture.

  4. 12:50 – 14:27

    Agent-first work: reimagining technical workflows from first principles

    Brian argues the real change isn’t ‘work faster,’ but redesigning how work happens when agents are the default. They describe an “agent-first” future where alarms, planning, and delivery involve agents doing the first pass, freeing humans for higher-level concerns and better quality.

  5. 14:27 – 21:22

    Cost tradeoffs: AI token spend as an investment (for now)

    Claire presses on the exploding bill as usage scales. Brian explains Intercom’s current stance: run the best models broadly (e.g., Opus with large context) to maximize learning and compounding gains, postponing optimization until after major benefits are captured.

  6. 21:22 – 24:03

    Demo: shipping a Rails monolith redirect with Claude Code

    Brian demonstrates a small change in Intercom’s large Ruby on Rails monolith: adding a redirect with a lobster emoji. The demo becomes a window into how Intercom uses AI to accelerate routine work while keeping guardrails for correctness and workflow consistency.

  7. 24:03 – 26:33

    Raising PR description standards: custom ‘Create PR’ skill + enforcement hooks

    Intercom noticed that AI-generated PR descriptions were degrading: they merely translated the code into prose instead of explaining intent. They built a ‘Create PR’ skill that uses session context to produce better intent-driven descriptions, then enforced it via hooks that block PR creation unless the skill has been used.

  8. 26:33 – 30:15

    Toward a ‘software factory’: predictable standards without killing craftsmanship

    Claire and Brian discuss how skills and hooks mirror CI/CD but move standards upstream into the act of building. Brian frames the approach as building a “software factory” that produces consistent, high-quality outputs—while Claire notes this can actually improve developer experience and morale.

  9. 30:15 – 39:20

    Telemetry stack: Honeycomb skill tracking + session collection to S3 + insights tools

    Brian shows how Intercom avoids ‘flying blind’ by instrumenting usage. Skill invocations are tracked in Honeycomb for visibility and adoption, while raw Claude Code session data is collected (with anonymization) into S3 for deeper analysis and personalized coaching feedback.

  10. 39:20 – 42:16

    Skills repository at scale: distribution via IT sync, core vs team-specific skills, evals

    Brian tours the internal GitHub repo powering Intercom’s plugin/skills ecosystem and explains how they reliably distribute it. They bypassed flaky plugin update mechanisms by syncing to laptops via IT, and they maintain a quality bar (including evals) for foundational ‘everyone gets it’ skills.

  11. 42:16 – 56:18

    Deep dive: ‘Flaky specs’ skill and the ‘and then’ workflow to reach ~100× capability

    Brian explains how a flaky-test-fixing skill evolved from ‘as good as a human’ to something approaching a distinguished engineer. The breakthrough is the iterative “and then” pattern: fix one, document the novel fix, update the skill, fan out to similar failures, and keep learning via feedback loops and real test runs.

  12. 56:18 – 1:03:49

    Customer implications: making SaaS agent-friendly (CLIs, hints, and onboarding flows)

    Intercom’s internal agent experience changes how Brian thinks about product UX: agents increasingly ‘decide’ solutions, sometimes building instead of buying. He argues SaaS must become agent-friendly through discoverable automation surfaces (CLIs/MCP/APIs) and helpful hints that guide agents through signup/onboarding steps like email verification and content setup.

  13. 1:03:49

    Invisible conversion drop-off in agent workflows + lightning round on culture and skeptics

    Claire highlights a new risk: in agent-driven onboarding, drop-off can be invisible—users just hit escape and switch approaches. In the lightning round, Brian describes improved team fun and energy, the importance of leaders granting permission and absorbing risk, and how faster feedback loops make work more varied and satisfying.

  14. Quality and customer value: shipping faster without ‘slop’

    They address skepticism that higher PR volume means lower quality. Brian shares leading and trailing indicators: reduced time from first code to customer-visible updates, increased feature volume, incident monitoring, and external analysis suggesting code quality is improving.
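The enforcement pattern from chapter 7 (blocking PR creation until the ‘Create PR’ skill has run) can be sketched as a small pre-tool-use hook script. This is not Intercom's actual implementation: the marker-file convention, skill name, and JSON field names are assumptions based on Claude Code's hook interface, where the tool call arrives as JSON on stdin and a non-zero exit blocks it.

```python
import json
import os
import sys

# Hypothetical marker file the "Create PR" skill would write when it runs.
MARKER = os.path.expanduser("~/.claude/pr-skill-used")

def should_block(tool_name: str, command: str, marker_exists: bool) -> bool:
    """Return True when a `gh pr create` call should be rejected."""
    if tool_name != "Bash":
        return False          # only shell commands can open PRs
    if "gh pr create" not in command:
        return False          # unrelated shell command, let it through
    return not marker_exists  # block unless the skill left its marker

def main() -> int:
    """Hook entry point: the tool call arrives as JSON on stdin."""
    event = json.load(sys.stdin)
    command = event.get("tool_input", {}).get("command", "")
    if should_block(event.get("tool_name", ""), command, os.path.exists(MARKER)):
        print("Run the create-pr skill before opening a pull request.",
              file=sys.stderr)
        return 2  # non-zero exit tells the agent the call was blocked
    return 0
```

Wired up as a `PreToolUse` hook (via `sys.exit(main())`), this rejects the PR until the skill has run; verify the exact hook event schema against Claude Code's documentation before relying on the field names used here.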
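Chapter 9 describes tracking skill invocations in Honeycomb and anonymizing session data before collection. As a hypothetical sketch (the field names and hashing scheme are illustrative, not Intercom's schema), each invocation could be reduced to one structured event with the user identifier hashed before it leaves the laptop:

```python
import hashlib
import time

def skill_event(skill: str, user_email: str, session_id: str) -> dict:
    """Build one structured event per skill invocation, anonymizing the
    user by hashing the email before the event is emitted anywhere."""
    return {
        "name": "skill.invoked",
        "skill": skill,
        # Truncated SHA-256 keeps per-user aggregation possible
        # without storing the raw email.
        "user_hash": hashlib.sha256(user_email.encode()).hexdigest()[:16],
        "session_id": session_id,
        "timestamp_ms": int(time.time() * 1000),
    }
```

A dict like this could then be shipped to an observability backend such as Honeycomb's events API, where adoption dashboards slice by skill and (hashed) user.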
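The “and then” pattern from chapter 11 is, at its core, a loop that only promotes *verified* fixes into the skill's playbook, then reuses the grown playbook on the next failure. A minimal, hypothetical sketch (the real skill operates on flaky specs and markdown skill files, not Python objects; `attempt_fix` and `passes` stand in for the agent and a real test run):

```python
def and_then_loop(failures, attempt_fix, passes, playbook):
    """Fix flaky failures one at a time; document each novel, verified
    technique so later fixes (and fan-out to similar failures) improve."""
    fixed = []
    for failure in failures:
        patch = attempt_fix(failure, playbook)  # use everything learned so far
        if patch is None or not passes(failure, patch):
            continue                            # only real test runs count
        fixed.append(failure)
        technique = patch["technique"]
        if technique not in playbook:
            playbook.append(technique)          # "and then": update the skill
    return fixed, playbook
```

The key design choice mirrored here is that the playbook grows only from fixes that survived a real run, which is what lets the skill compound instead of accumulating plausible-sounding but unverified advice.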
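Chapter 12's “helpful hints” idea, where a CLI tells an agent what to do next instead of stalling on a hidden step like email verification, can be sketched as command output that always states the current state plus an explicit next action. The `acme` command name and the verification flow are invented for illustration:

```python
def signup_response(email: str, verified: bool) -> str:
    """Return agent-readable output: current state plus an explicit
    next step, so an agent driving onboarding never hits a dead end."""
    if not verified:
        return (
            f"Account created for {email}, but it is not verified yet.\n"
            "NEXT STEP: check the inbox for a verification code, then run:\n"
            f"  acme verify --email {email} --code <CODE>"
        )
    return (
        f"Account for {email} is verified.\n"
        "NEXT STEP: import help-center content with: acme content import <URL>"
    )
```

The point is not the exact wording but that every terminal state names the next command, which is what makes an automation surface discoverable to an agent mid-workflow.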
