
Running an AI-native engineering org

When agentic coding goes from individual tool to org-wide default, the tool isn't the hard part; your processes are. Fiona Fung, Director of Engineering for Claude Code, walks through what broke at Anthropic (review, ownership, hiring) and the norms the team had to rewrite to keep shipping.

Fiona Fung (guest)

May 8, 2026 · 28m · Watch on YouTube ↗

CHAPTERS

  1. Context: Building Claude Code & Cowork as AI-native products

    Fiona Fung introduces her role leading Claude Code and Cowork engineering/product and frames the talk as lessons learned scaling an AI-native engineering team. She previews five themes: shifting bottlenecks, rewriting norms, rolling out changes, measuring progress, and open questions to keep auditing.

  2. The big shift: Engineering bandwidth is no longer the primary constraint

    She argues that historically, coding throughput was expensive, so planning and process optimized for scarce engineering time. With modern AI-assisted development, coding is rarely the slow part; the industry has seen similar distribution/platform shifts before, but this one changes how teams operate day-to-day.

  3. New bottlenecks: Verification, review, security, and maintenance

    As code output increases, the bottleneck moves downstream: validating correctness, keeping up with reviews, coordinating cross-functional checks, and managing long-term maintenance. Leaders increasingly ask how humans can review fast enough and ensure quality doesn’t degrade.

  4. Processes that quietly stop working (and tend to pile up)

    Fiona describes how processes rarely remove themselves; teams accumulate layers of SLAs, rituals, and documentation until overhead becomes invisible but costly. Many pre-AI practices become less effective or misaligned when roles blur and code generation accelerates.

  5. Planning becomes lighter and more just-in-time

    Claude Code reduced heavy upfront roadmapping and design-doc-before-code habits because the product/tech landscape changes too quickly. Instead, the team leans into rapid prototyping, internal dogfooding, and shipping to users earlier to get real feedback.

  6. Technical debates: Let code (and prototypes) settle arguments

    With building becoming cheap and debating expensive, Fiona encourages generating multiple implementation options quickly and comparing real impacts. She shares an example of producing three PR variants to evaluate API design and downstream caller effects rather than relying on whiteboard debates alone.

  7. Double down where it matters: shift-left verification and confidence

    As throughput rises, the team reduces certain rituals but increases automation and earlier bug detection. Fiona emphasizes “shift left” verification so issues are caught closer to the source and all contributors—including non-engineers shipping code—can merge with higher confidence.

  8. Rethinking code ownership: focus on the real question behind 'who changed this?'

    Instead of treating authorship as the key signal, Fiona recommends clarifying what you’re actually trying to learn: who introduced a regression, who has expertise, or what context is missing. She notes that AI tools (and routines) can automate context gathering, summaries, and triage workflows.

  9. Scaling code review with AI: what to delegate vs what must remain human

    The team leans heavily on Claude for style, linting, PR babysitting, bug finding/fixing, and generating tests. But Fiona draws hard lines where human judgment remains essential: legal and security reviews, risk boundaries, and product taste.

  10. Team makeup in an AI-native org: prioritize product sense and deep systems expertise

    With raw throughput less scarce, Fiona looks for engineers who are creative builders with strong product instincts, as well as specialists in deep systems (e.g., distributed systems for remote execution). She also highlights how AI helps fill cross-functional gaps—engineers can do content-like work and PMs can code more.

  11. Org design and leadership: keep it flat, scrappy, and dogfooding-heavy

    Fiona advocates a flatter org structure to stay agile, plus a strong expectation that leaders dogfood and remain hands-on. She describes a norm where managers start as ICs to earn credibility and keep close to the code, enabled by AI reducing the overhead of context switching and tooling.

  12. Rolling out new norms: mandate the must-dos, enable pod-level autonomy

    She explains how they balance standardization with flexibility: align on a small set of core principles, but allow pods to choose workflows and rituals that fit their domain. A standout principle is explicit permission to kill outdated processes, since process debt accumulates fast.

  13. Signals it’s working: faster ramp-up, shorter PR cycles, quality-minded metrics

    Fiona shares practical indicators: onboarding time drops, PR cycle time shortens, and Claude-assisted commits rise (to near-default). She warns that faster PRs can expose other scaling limits (CI/build capacity) and emphasizes measuring outcomes like reliability—not just percent AI-generated code.

  14. Ongoing questions and a takeaway: audit your noisiest workflow

    She closes with open questions about future org structure (e.g., iOS vs Android separation), how far to push automated review, and how to keep everyone productive as roles blur. Her suggested action: pick the most expensive or dreaded workflow and ask whether it still serves its purpose—or should be automated or removed.
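The PR-cycle-time signal from chapter 13 is straightforward to compute once you export PR timestamps from your hosting platform. A minimal sketch, assuming you already have opened/merged timestamps; the `opened_at`/`merged_at` field names are hypothetical and should be adapted to whatever your exporter emits:

```python
from datetime import datetime
from statistics import median

def pr_cycle_hours(prs):
    """Median hours from PR opened to merged.

    `prs` is a list of dicts with ISO-8601 'opened_at'/'merged_at'
    strings (hypothetical field names, not from the talk).
    """
    durations = []
    for pr in prs:
        if pr.get("merged_at") is None:
            continue  # skip still-open or abandoned PRs
        opened = datetime.fromisoformat(pr["opened_at"])
        merged = datetime.fromisoformat(pr["merged_at"])
        durations.append((merged - opened).total_seconds() / 3600)
    return median(durations) if durations else None

prs = [
    {"opened_at": "2026-05-01T09:00:00", "merged_at": "2026-05-01T15:00:00"},
    {"opened_at": "2026-05-02T10:00:00", "merged_at": "2026-05-03T10:00:00"},
    {"opened_at": "2026-05-04T08:00:00", "merged_at": None},
]
print(pr_cycle_hours(prs))  # → 15.0 (median of a 6h and a 24h PR)
```

Tracking the median rather than the mean keeps one long-lived PR from masking an org-wide speedup, which matters when, as Fiona notes, faster PRs start exposing other limits like CI and build capacity.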
