How I AI

How Devin replaces your junior engineers with infinite AI interns that never sleep | Scott Wu (CEO)

Scott Wu is the co-founder and CEO of Cognition Labs, the creators of Devin, an AI agent designed to function as a junior engineer on software development teams. In this conversation, Scott demonstrates how his team uses their own product to accelerate development workflows, reduce engineering toil, and handle routine tasks asynchronously. Scott walks us through real examples of how Devin integrates into Cognition’s daily operations: from researching and implementing new features to responding to crashes and handling frontend fixes. He explains how Devin differs from traditional AI coding assistants by functioning more like a team member than a tool, allowing engineers to delegate well-scoped tasks while focusing on higher-level problems.

*What you’ll learn:*

1. How to use DeepWiki to research your codebase and generate better prompts for AI engineering tasks
2. A workflow for treating AI agents as asynchronous junior engineers who can handle multiple tasks while you attend meetings
3. Why public channels create better learning environments for both humans and AI when implementing engineering solutions
4. The top five engineering tasks AI excels at: frontend fixes, version upgrades, documentation, incident response, and testing
5. How to implement a “first line of defense” system where AI agents analyze crashes before humans need to intervene
6. A technique for bringing voice AI into meetings as an additional participant to answer questions without disrupting flow

*Brought to you by:*

• Google Gemini, your everyday AI assistant: https://ai.dev/
• Vanta: Automate compliance. Simplify security: https://www.vanta.com/howiai

*Where to find Scott Wu:*

• X: https://x.com/ScottWu46
• LinkedIn: https://www.linkedin.com/in/scott-wu-8b94ab96/

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Scott Wu and Devin
(03:53) Where Devin excels
(06:08) Using DeepWiki to research codebases and create better prompts
(10:27) Prompting tips
(11:24) The asynchronous nature of working with Devin
(13:38) Multithreading tasks
(14:43) Using Devin to implement an MCP server integration
(18:38) Setting up workflows in Slack for first-line responses
(23:22) Encouraging AI adoption in public Slack channels
(25:50) Top five engineering tasks for Devin
(32:17) Using ChatGPT voice as a meeting participant
(35:57) Lightning round

*Tools referenced:*

• Devin: https://devin.ai/
• DeepWiki: https://deepwiki.org/
• ChatGPT: https://chat.openai.com/
• Windsurf: https://windsurf.ai/
• Slack: https://slack.com/
• Linear: https://linear.app/
• GitHub: https://github.com/

*Other references:*

• MCP (model context protocol): https://www.anthropic.com/news/model-context-protocol
• TanStack Router: https://tanstack.com/router/

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Scott Wu (guest), Claire Vo (host)
Sep 8, 2025 · 41m · Watch on YouTube ↗

CHAPTERS

  1. Devin as an async “junior engineer”: the core mental model

    Scott frames Devin not as a copilot inside an IDE but as a teammate you delegate to. The key idea is “tasks, not problems,” with Devin excelling when work is clearly scoped and verifiable.

  2. Where Devin excels: backlog triage, engineering toil, and fast execution

    Scott outlines practical categories where Devin shines in real teams. These include chewing through issue backlogs, handling repetitive engineering toil, and accelerating routine fixes.

  3. DeepWiki for codebase comprehension: turning repos into navigable knowledge

    Scott demonstrates DeepWiki as an AI-generated documentation layer over a repo. It provides natural-language explanations alongside exact file references and code snippets to accelerate orientation.

  4. From research to execution: generating a high-context Devin prompt via DeepWiki

    They show a workflow where DeepWiki context is used to construct a better, more actionable Devin task prompt. This reduces ambiguity and improves odds of a correct PR on the first attempt.
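The research-then-prompt handoff described above can be sketched as a small helper that folds DeepWiki-style findings (file references plus plain-language explanations) into one high-context task prompt. This is an illustrative sketch, not a real DeepWiki or Devin API; the function names, fields, and the `src/marketplace/servers.ts` path are all assumptions made for the example.

```python
# Sketch of the "research, then delegate" workflow: combine DeepWiki-style
# findings into a single high-context task prompt for an async agent.
# All names and paths here are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class ResearchNote:
    """One DeepWiki-style finding: a file reference plus an explanation."""
    path: str
    summary: str


def build_task_prompt(goal: str, notes: list[ResearchNote], acceptance: str) -> str:
    """Combine the goal, codebase context, and a verifiable done-condition."""
    context = "\n".join(f"- `{n.path}`: {n.summary}" for n in notes)
    return (
        f"Task: {goal}\n\n"
        f"Relevant codebase context:\n{context}\n\n"
        f"Done when: {acceptance}\n"
    )


prompt = build_task_prompt(
    goal="Add ChatPRD to the MCP marketplace list",
    notes=[
        ResearchNote(
            "src/marketplace/servers.ts",  # hypothetical file location
            "array of registered MCP server entries",
        )
    ],
    acceptance="the new entry renders on the marketplace page and CI passes",
)
print(prompt)
```

The point of the structure is the episode's "tasks, not problems" framing: the goal, the grounding context, and an explicit acceptance condition make the task scoped and verifiable before it is handed off.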

  5. Prompting tips: the “sync then async” handoff

    Claire and Scott highlight that a small synchronous planning loop dramatically improves async agent outcomes. The analogy is a quick 2-minute alignment with an intern before sending them off for hours.

  6. Async work style and multithreading: running multiple Devins in parallel

    They discuss how Devin enables a new work cadence: kick off several tasks, attend meetings, then review results. This supports “multithreading” across projects without constant supervision.

  7. Live demo: implementing an MCP server integration (ChatPRD) end-to-end

    Scott initiates a Devin session to research and integrate ChatPRD’s MCP server into their marketplace list. The demo shows Devin doing external research, repo edits, and preparing a pull request for review.
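To make the demo concrete, a marketplace entry for an MCP server might look something like the record below. The episode does not show the real schema, so every field name here is an assumption; MCP itself does define stdio and SSE-style transports, which is the only grounded detail.

```python
# Hypothetical shape of one marketplace entry for an MCP server.
# Field names are illustrative; the actual Devin marketplace schema
# is not shown in the episode.
chatprd_entry = {
    "name": "ChatPRD",
    "description": "Product-spec assistant exposed via the Model Context Protocol",
    "transport": "sse",  # MCP servers commonly use stdio or SSE transports
    "docs": "https://www.chatprd.ai/",  # from the episode's show notes
}
```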

  8. Slack as the control plane: workflows for first-line responses and delegation

    Scott explains how their org operationalizes Devin by embedding it into Slack workflows. Devin is consistently tagged as the first responder across channels (web app, infra, crashes), creating an “institutional” habit.
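The "first responder" habit can be sketched as a routing rule: messages in agent-first channels (the episode names web app, infra, and crashes) go to Devin for initial analysis before a human steps in. The keyword check and function shape below are assumptions for illustration; a real setup would live in a Slack workflow or bot, not a standalone function.

```python
# Sketch of a "first line of defense" rule: decide who triages an
# incoming Slack message first. Channel names come from the episode;
# the stack-trace heuristic is an illustrative assumption.
AGENT_FIRST_CHANNELS = {"web-app", "infra", "crashes"}


def first_responder(channel: str, text: str) -> str:
    """Return 'devin' if the agent should analyze first, else 'human'."""
    if channel in AGENT_FIRST_CHANNELS or "stack trace" in text.lower():
        return "devin"  # agent posts its analysis before a human intervenes
    return "human"
```

The value Scott describes is institutional consistency: because the agent is always tagged first in these channels, the team builds the delegation habit and the agent accumulates context in public threads.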

  9. Driving adoption by working in public channels (and why it helps)

    Claire argues that public use of agents accelerates organizational learning and adoption. Scott adds that public threads benefit both the humans (shared learning) and the agent (accumulated codebase knowledge).

  10. Multiplayer collaboration pattern: humans + Devin iterating on a PR

    Scott showcases a real thread where teammates provide implementation tips (e.g., router link element), and Devin updates the PR accordingly. This illustrates Devin as an active participant in team review cycles.

  11. Top five engineering tasks Devin is best at (Scott vs. Claire)

    Scott lists five high-value task categories and Claire adds her own variations, emphasizing polish and documentation automation. Together they map concrete “grab-and-go” use cases most teams can try immediately.

  12. ChatGPT Voice as a meeting participant: faster than search, socially inclusive

    Scott describes using voice mode in meetings to answer questions without disengaging to look things up on a phone or laptop. Claire reframes it as adding a new participant so everyone hears the same information in real time.

  13. Lightning round: the future interface for AI engineering + handling frustration

    Scott predicts agentic interfaces will evolve beyond IDE/terminal into a new human-computer interaction layer focused on manipulating the product directly. For frustration, he recommends reviewing the agent’s action history to diagnose failure modes and then re-instruct precisely.
