Anthropic

Who let the robot dogs out?

We asked two teams of Anthropic researchers to program a robot dog. Neither team had any robotics expertise—but we let only one team use Claude. In the past, we’ve run simulated studies where Claude trained a robot dog; these helped us assess how Claude might contribute to AI research and development. Project Fetch was us trying something similar in practice. This project suggests that we’re not far from a world where frontier AI models can interact with previously-unknown pieces of hardware, even with non-experts at the helm.

For more, read on: https://www.anthropic.com/research/project-fetch-robot-dog

0:00 Introduction: Why robotics?
0:30 The experiment
1:02 Phase 1: Fetch, manually
1:51 Phase 2: Fetch, programmatically
5:08 Phase 3: Fetch, autonomously
6:24 Results

Nov 12, 2025 · 7m · Watch on YouTube ↗

CHAPTERS

  1. Why robotics is the next frontier for AI impact

    The video frames robotics as the clearest path for frontier AI models to extend their usefulness beyond software into the physical world. It sets up the motivation: understanding how much an AI assistant can accelerate real-world technical work, not just coding tasks.

  2. Project Fetch setup: a one-day, three-phase experiment

    Project Fetch is introduced as a self-contained, time-boxed test: get a robot dog to fetch a beach ball with increasing levels of difficulty. The structure is explicitly comparative, designed to quantify speed and progress differences with and without Claude.

  3. Teams and fairness: same background, different tooling

    Two teams of Anthropic software/research engineers—each with little robotics experience—compete under similar conditions. The main variable is whether the team has access to Claude as an assistant.

  4. Phase 1 — Manual fetch using provided controllers

    The first task is straightforward: use existing, pre-provided controllers to drive the dog to the ball and return. The segment shows quick onboarding, playful competition, and the first timed comparison between teams.

  5. Phase 2 — Building a custom controller and connecting to hardware

    The challenge shifts from driving the robot to creating a programmatic controller and establishing reliable laptop-to-robot communication. The narrative emphasizes that integration and setup—not just coding logic—become the dominant difficulty.

  6. Claude’s biggest uplift: troubleshooting setup, libraries, and access

    Claude helps the assisted team rapidly find appropriate libraries, install dependencies, and reach basic control and sensor access. The video highlights how AI collapses the time spent on search, configuration, and “nitty-gritty” details that stall novices.

  7. When robots go wrong: safety hiccups and messy real-world control

    As teams gain control, the robot behaves unpredictably at times, leading to near-collisions and comedic panic. This underscores that physical systems introduce risk, latency, and failure modes that don’t appear in pure software demos.

  8. Phase 2 outcomes: assisted team finishes; unassisted team stalls and needs intervention

    The Claude team completes Phase 2 in about 2 hours and 15 minutes, largely due to faster hardware connectivity and tooling setup. The team without Claude struggles to find a working approach and ultimately needs a recommended strategy to proceed.

  9. Phase 3 — Full autonomy: press go, search, detect, fetch

    The final phase requires an end-to-end autonomous system: the robot must locate the ball, navigate to it, and return without manual input. This phase is framed as closer to the real future challenge—robots executing tasks on behalf of models autonomously.

  10. Phase 3 progress: localization and detection vs. full integration

    The unassisted team makes meaningful progress on parts like tracking the robot’s position and working on ball detection, but can’t combine components into a complete system. The Claude team gets much closer, nearing completion but not fully finishing within the day.

  11. Results and takeaway: AI speeds robotics work even without robotics-specific training

    Overall, the Claude team completes its tasks a couple of hours faster than the team without Claude. The video stresses that this uplift wasn’t from training Claude specifically for robotics—it emerged from general assistant capabilities applied to a new domain.

  12. Forward-looking implication: AI’s effects will extend from software into hardware and autonomy

    The conclusion argues that near-term AI will help humans interface with robots more easily, and longer-term, tasks may shift from human+AI collaboration to AI-only operation. The broader claim is that frontier AI will increasingly influence the physical world, not just software.
