
How Intercom 2X'd engineering velocity with Claude Code | Brian Scanlan
Brian Scanlan (guest), Claire Vo (host)
In this episode of How I AI, host Claire Vo talks with Brian Scanlan, a senior principal engineer at Intercom, about how Intercom doubled its engineering throughput by going all-in on Claude Code.
Intercom doubles engineering throughput by going all-in on Claude Code
Intercom committed company-wide to AI adoption in engineering after model capability inflections made agents feel “transformative,” then standardized on Claude Code to scale usage.
They measure “merged PRs per R&D head” as a leading indicator of adoption and report roughly 2× throughput in nine months, with bottlenecks shifting from CI to code review.
Intercom treats AI token spend as an investment phase (optimizing later) while building a software-factory approach with predictable standards enforced via skills and hooks.
They built telemetry pipelines (Honeycomb, Snowflake, S3 session logs) to understand tool usage, coach individuals, and continuously improve skills rather than “flying blind.”
Intercom claims quality has not degraded—citing faster idea-to-shipping times, no incident spike, and external research (Stanford) suggesting code quality trends upward.
Key Takeaways
Model inflections change the constraint from tooling to imagination.
Scanlan describes a shift where engineers spend less time coaxing tools and more time delegating outcomes, enabling bigger bets and faster iteration when models reached a new capability tier.
Treat internal developer tooling like a product—with metrics, telemetry, and iteration loops.
Intercom instruments skills and sessions, mines usage data, and builds dashboards so they can debug adoption, improve workflows, and deliver “enablement” rather than just handing out an API key.
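The session-mining loop described above can be sketched in a few lines. The JSONL schema here (`user` and `skill` fields per record) is a hypothetical stand-in for whatever Intercom actually writes to S3, not their real format:

```python
"""Sketch of mining session logs for skill usage.

The record schema (`user`, `skill`) is assumed for illustration; the
episode does not describe Intercom's actual log format.
"""
import json
from collections import Counter


def skill_usage(jsonl_lines):
    """Count skill invocations per (user, skill) pair from JSONL records.

    Records without a skill (plain chat turns) are skipped, so the result
    reflects only deliberate skill usage -- the kind of signal you would
    feed into an adoption dashboard or coaching conversation.
    """
    counts = Counter()
    for line in jsonl_lines:
        record = json.loads(line)
        if record.get("skill"):
            counts[(record["user"], record["skill"])] += 1
    return counts
```

Aggregations like this are what turn raw session logs into the "debug adoption" loop the episode describes, rather than leaving usage data unread in a bucket.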
A simple throughput metric can drive adoption—if paired with high trust and quality guardrails.
They use merged PRs per R&D head as a crude but motivating leading indicator, while explicitly rejecting “quality slop” via standards, reviews, and targeted quality measures.
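As a worked example of the metric itself, with made-up numbers rather than Intercom's real figures:

```python
"""Merged PRs per R&D head -- the crude leading indicator from the episode.
All figures below are illustrative, not Intercom's actual data."""


def merged_prs_per_head(merged_prs: int, rd_headcount: int) -> float:
    """Throughput per person over a fixed period (e.g. one month)."""
    return merged_prs / rd_headcount


# Hypothetical trend: same headcount, doubled PR volume -> 2x the metric.
baseline = merged_prs_per_head(400, 200)  # nine months ago: 2.0 per head
current = merged_prs_per_head(800, 200)   # today: 4.0 per head
assert current / baseline == 2.0
```

The simplicity is the point: it is easy to game (smaller PRs inflate it), which is why the episode pairs it with review standards and separate quality measures.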
Enforce process quality upstream using skills and hooks, not wikis.
Intercom created a “Create PR” skill because AI-generated PR descriptions degraded; they blocked default PR creation paths and required the skill, raising PR-description quality per an internal LLM judge.
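One way such a guardrail might look is a Claude Code PreToolUse hook, which can veto a pending tool call by exiting with code 2 (the stderr message is fed back to the model). The script below is a hypothetical sketch, not Intercom's actual implementation:

```python
"""Hypothetical PreToolUse hook: steer PR creation through a dedicated
"Create PR" skill by blocking raw `gh pr create` calls from the Bash tool.
A sketch only -- the episode does not show Intercom's real hook."""
import json
import sys


def should_block(event: dict) -> bool:
    """True if the pending tool call is a raw `gh pr create` via Bash."""
    command = event.get("tool_input", {}).get("command", "")
    return event.get("tool_name") == "Bash" and "gh pr create" in command


def main(stdin=sys.stdin, stderr=sys.stderr) -> int:
    """Read the pending tool call (JSON on stdin) and allow or block it."""
    event = json.load(stdin)
    if should_block(event):
        # Exit code 2 blocks the call; stderr is shown to the model.
        print("Use the Create PR skill instead of `gh pr create` directly.",
              file=stderr)
        return 2
    return 0


# When installed as a hook script, the entry point would be:
#     if __name__ == "__main__":
#         sys.exit(main())
```

Blocking the default path and redirecting the model to the skill is what makes the standard self-enforcing, instead of relying on engineers (or agents) to remember a wiki page.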
Agent-first work shifts bottlenecks—fix the new choke points.
As AI increased output, CI became overloaded and had to be optimized; after that, code review became the bottleneck, implying organizations must continuously rebalance systems around higher throughput.
Iterative ‘and then’ workflows turn a decent skill into an expert system.
Their flaky-spec skill improved by continuously incorporating newly discovered patterns, updating its own playbook, and fanning out fixes to similar failures—moving from “as good as me” to “distinguished engineer” level in their framing.
Make SaaS products agent-friendly or agents will route around you.
Scanlan argues agents often choose “build it yourself” because SaaS isn’t easily callable; he advocates CLIs/MCP/APIs and embedded hints so agents can complete onboarding flows (including email verification) without giving up.
Notable Quotes
“You have to think bigger about things, or that your imagination is now the barrier, not the tool.”
— Brian Scanlan
“Today we are seeing twice the number of throughput as we did compared to nine months ago on our engineering team. Now it's like, why can't it be 10X?”
— Brian Scanlan
“We are treating it as, like, an investment at this point… everyone just turn on Opus for everything… and care about the bill later.”
— Brian Scanlan
“Backlog zero is a realistic thing for teams to be able to go after.”
— Brian Scanlan
“Your conversion rate drop-off point is somebody pressing the escape button.”
— Claire Vo
Questions Answered in This Episode
What exactly counts as a “Claude-generated” PR in your tracking, and how do you attribute mixed human/AI work reliably?
How did you prevent “more PRs” from turning into smaller, noisier changes—and what review norms changed once code review became the bottleneck?
Can you share the rubric behind your LLM judge for PR descriptions, and how you validated it didn’t just reward verbose or templated text?
What are the most important security controls that made you comfortable letting agents touch Snowflake or production-adjacent workflows?
What anonymization/redaction steps do you apply to session logs in S3, and how do you balance insight mining with employee privacy?
Transcript Preview
Suddenly you started realizing that you have to think bigger about things, or that your imagination is now the barrier, not the tool.
How is this not happening in your organization? Like literally the physical limits of my ability to type code are unlocked by AI.
Today we are seeing twice the number of throughput as we did compared to nine months ago on our engineering team. Now it's like, why can't it be 10X?
This is a little bit more of what my instinct tells me is possible, which is if you go all in, if you prepare your team, if you prepare your code base, I think your overall product quality is gonna go up, I think your overall developer experience is going up. There's just so many good things that come out of using these tools and using them correctly.
Backlog zero is a realistic thing for teams to be able to go after. All the things that you wish you'd ever wanted to do, it's now just achievable.
I often advise a lot of CTOs and VPs of engineering when figuring out how to get their engineering team AI pilled, say, "Everything you hate about the code base, go spend a month fixing and see how fast we can speed run that. That's gonna feel really good."
I've been having the most amount of fun in my career over the last three months.
[upbeat music] Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today, I am showing how Intercom 2X'd the number of PRs that their R&D department is shipping in just a few months. Brian Scanlan is a senior principal engineer at Intercom, and he is gonna show us truly all of their secrets to getting a large product and engineering organization cooking on Claude Code. Let's get to it.

This episode is brought to you by Celigo. Every company today wants AI to improve how work gets done. The fastest way is building it directly into everyday business processes, automating employee onboarding, keeping customer data accurate, managing orders and inventory, or resolving finance and operations issues. When AI lives inside the flow of work, it can update records, trigger approvals, route work, and kick off the next step across systems. That's how teams operationalize AI and deliver measurable results. Celigo makes this possible, and now with Celigo Aura, it's never been easier. Celigo Aura gives you access to the entire platform through natural language, connecting your systems and turning intent into action, all of it under your control. Companies like Databricks, PayPal, and Olipop rely on Celigo to run critical business operations at scale. Ready to operationalize AI? Visit celigo.com/howiai. That's C-E-L-I-G-O.com/howiai.

Brian, welcome to How I AI. Why I am so thrilled that you agreed to join the podcast is I think Intercom has done it, which is you all have met the moment in sort of two ways. One, clearly met the moment from a product perspective. We're one of the first companies that had... Sorry, I don't wanna say legacy business, but had a, a going concern business that saw AI coming and really transformed how your product worked for customers, and I'm a happy Fin customer. They did not tell me to say that.
And then second, what we're gonna talk about is the team met the moment in terms of really understanding AI was gonna change how, in particular, product engineering and design orgs and engineering organizations were going to work, and you just went full speed at changing how the team works. What, what drove sort of the urgency around meeting the moment? How did that come to be? Was it a single person? Was it everybody? What was your experience?