How I AI: How Intercom 2X'd engineering velocity with Claude Code | Brian Scanlan
CHAPTERS
- 0:00 – 5:01
Meet Brian Scanlan & Intercom’s urgency to “meet the moment” with AI
Claire introduces Brian and frames Intercom as a company that embraced AI both in customer-facing product and internally in engineering. Brian explains why the organization felt urgency: early tools hinted at value, but they were waiting for a true inflection point that would make adoption unquestionably transformative.
- 5:01 – 7:02
The inflection: Opus 4.6, Christmas break, and deciding to go all-in on Claude Code
Brian and Claire describe a sharp capability jump around late 2025 models (Opus 4.6), where prompting shifted from ‘tool babysitting’ to ‘ideas at speed.’ Intercom returned from the holidays convinced the world had changed and committed to standardizing on Claude Code over a mix of tools.
- 7:02 – 12:50
Proving velocity: tracking merged PRs per R&D head and setting a 2× goal
Brian shows how Intercom operationalized adoption with measurable goals, using merged pull requests per R&D head as a leading indicator. Claire contextualizes this as ‘treating the org like a product,’ and they discuss why PR throughput—while imperfect—can still be a useful adoption signal in a high-trust culture.
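The metric itself is simple arithmetic; a minimal sketch of the kind of leading-indicator math described above, with all numbers invented for illustration (the episode doesn't disclose Intercom's actual figures):

```ruby
# Merged PRs per R&D head in a given week. Inputs are hypothetical.
def prs_per_head(merged_prs, headcount)
  (merged_prs.to_f / headcount).round(2)
end

# Progress toward the 2x goal, relative to a baseline week.
def velocity_multiple(current, baseline)
  (current / baseline).round(2)
end

baseline = prs_per_head(300, 150)    # 2.0 PRs per head in the baseline week
current  = prs_per_head(540, 150)    # 3.6 PRs per head now
velocity_multiple(current, baseline) # 1.8x, approaching the 2x goal
```

The value of the number is less its precision than its trend: in a high-trust culture, a rising multiple signals real adoption rather than gamed throughput.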
- 12:50 – 14:27
Agent-first work: reimagining technical workflows from first principles
Brian argues the real change isn’t ‘work faster,’ but redesigning how work happens when agents are the default. He describes an “agent-first” future where alarms, planning, and delivery all start with an agent’s first pass, freeing humans for higher-level concerns and higher quality.
- 14:27 – 21:22
Cost tradeoffs: AI token spend as an investment (for now)
Claire presses on the exploding bill as usage scales. Brian explains Intercom’s current stance: run the best models broadly (e.g., Opus with large context) to maximize learning and compounding gains, postponing optimization until after major benefits are captured.
- 21:22 – 24:03
Demo: shipping a Rails monolith redirect with Claude Code
Brian demonstrates a small change in Intercom’s large Ruby on Rails monolith: adding a redirect with a lobster emoji. The demo becomes a window into how Intercom uses AI to accelerate routine work while keeping guardrails for correctness and workflow consistency.
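The episode doesn't show the exact route, so this is a guess at what such a change looks like in a Rails codebase; the paths are hypothetical (this is a config fragment, not a runnable script):

```ruby
# config/routes.rb — hypothetical paths, for illustration only.
Rails.application.routes.draw do
  # Permanent redirect from a retired path to its replacement.
  get "/old-dashboard", to: redirect("/dashboard", status: 301)
end
```

A one-line change like this is exactly the routine work where the agent's speed matters less than the guardrails (tests, review, workflow consistency) around it.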
- 24:03 – 26:33
Raising PR description standards: custom ‘Create PR’ skill + enforcement hooks
Intercom discovered AI-generated PR descriptions were degrading, merely restating the code diff instead of explaining intent. They built a ‘Create PR’ skill that uses session context to produce intent-driven descriptions, then enforced it with hooks that block PR creation unless the skill is used.
- 26:33 – 30:15
Toward a ‘software factory’: predictable standards without killing craftsmanship
Claire and Brian discuss how skills and hooks mirror CI/CD but move standards upstream into the act of building. Brian frames the approach as building a “software factory” that produces consistent, high-quality outputs—while Claire notes this can actually improve developer experience and morale.
- 30:15 – 39:20
Telemetry stack: Honeycomb skill tracking + session collection to S3 + insights tools
Brian shows how Intercom avoids ‘flying blind’ by instrumenting usage. Skill invocations are tracked in Honeycomb for visibility and adoption, while raw Claude Code session data is collected (with anonymization) into S3 for deeper analysis and personalized coaching feedback.
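A sketch of what per-skill telemetry along these lines could look like. The dataset and field names are invented; the endpoint shape matches Honeycomb's public events API, but Intercom's actual pipeline is not shown in detail:

```ruby
# Hypothetical skill-invocation event for Honeycomb. Field names invented.
require "json"
require "net/http"
require "uri"

def skill_event(skill:, user:, session_id:, duration_ms:)
  {
    "skill"       => skill,
    "user"        => user, # anonymize before shipping, as the episode notes
    "session_id"  => session_id,
    "duration_ms" => duration_ms,
  }
end

def send_to_honeycomb(event, dataset:, api_key:)
  uri = URI("https://api.honeycomb.io/1/events/#{dataset}")
  req = Net::HTTP::Post.new(uri, "X-Honeycomb-Team" => api_key,
                                 "Content-Type"     => "application/json")
  req.body = JSON.generate(event)
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end
```

One event per skill run is enough to answer the adoption questions that matter: which skills are used, by whom, and how often.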
- 39:20 – 42:16
Skills repository at scale: distribution via IT sync, core vs team-specific skills, evals
Brian tours the internal GitHub repo powering Intercom’s plugin/skills ecosystem and explains how they reliably distribute it. They bypassed flaky plugin update mechanisms by syncing to laptops via IT, and they maintain a quality bar (including evals) for foundational ‘everyone gets it’ skills.
- 42:16 – 56:18
Deep dive: ‘Flaky specs’ skill and the ‘and then’ workflow to reach ~100× capability
Brian explains how a flaky-test-fixing skill evolved from ‘as good as a human’ to something approaching distinguished-engineer level. The breakthrough is the iterative “and then” pattern: fix one failure, document any novel fix, update the skill, fan out to similar failures, and keep learning through feedback loops and real test runs.
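The “and then” loop can be sketched as a toy program: fix one flaky spec, record any novel fix in the skill's playbook, then fan out to similar failures so the fresh fix gets reused. All data shapes and helpers here are invented for illustration:

```ruby
# Toy sketch of the iterative "and then" pattern. Hypothetical data shapes.
def and_then_loop(flaky_specs, playbook)
  fixed = []
  until flaky_specs.empty?
    spec = flaky_specs.shift
    fix = playbook[spec[:failure_kind]] || "novel-fix-for-#{spec[:failure_kind]}"
    # Document a novel fix so the skill improves for the next run.
    playbook[spec[:failure_kind]] ||= fix
    fixed << { spec: spec[:name], fix: fix }
    # Fan out: pull similar failures forward while the fix is fresh.
    similar, rest = flaky_specs.partition { |s| s[:failure_kind] == spec[:failure_kind] }
    flaky_specs.replace(similar + rest)
  end
  fixed
end
```

The compounding effect comes from the playbook: each novel fix becomes a documented pattern, so later failures of the same kind are handled from accumulated knowledge rather than re-derived from scratch.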
- 56:18 – 1:03:49
Customer implications: making SaaS agent-friendly (CLIs, hints, and onboarding flows)
Intercom’s internal agent experience changes how Brian thinks about product UX: agents increasingly ‘decide’ solutions, sometimes building instead of buying. He argues SaaS must become agent-friendly through discoverable automation surfaces (CLIs/MCP/APIs) and helpful hints that guide agents through signup/onboarding steps like email verification and content setup.
- 1:03:49
Invisible conversion drop-off in agent workflows + lightning round on culture and skeptics
Claire highlights a new risk: in agent-driven onboarding, drop-off can be invisible—users just hit escape and switch approaches. In the lightning round, Brian describes improved team fun and energy, the importance of leaders granting permission and absorbing risk, and how faster feedback loops make work more varied and satisfying.
Quality and customer value: shipping faster without ‘slop’
They address skepticism that higher PR volume means lower quality. Brian shares leading and trailing indicators: reduced time from first code to customer-visible updates, increased feature volume, incident monitoring, and external analysis suggesting code quality is improving.