How I AI

Claude Code Just Got WAY More Powerful

Claire Vo on how Anthropic boosts Claude Code with routines, agents, memory, and higher limits.

Claire Vo, host
May 7, 2026 · 11m · Watch on YouTube ↗
Claude Code Routines (cron, HTTP, GitHub webhook triggers)
Local vs. cloud routine execution
Connectors (Slack, GitHub) inside routines
Managed Agents “Outcomes” and rubric-based grading
Up-to-20-iteration self-improvement loop
Multi-agent orchestration (orchestrator + delegates, shared container)
“Dreams” memory consolidation and session summarization
Usage-limit and rate-limit increases (Pro/Max/Team/Enterprise, Opus API)
AI-generated summary based on the episode transcript.

In this episode of How I AI, host Claire Vo covers how Anthropic has boosted Claude Code with routines, agents, memory, and higher usage limits. The headline update: Claude Code now supports “Routines,” letting you trigger scheduled jobs (cron) or event-driven runs via HTTP and GitHub webhooks, locally or in the cloud.

At a glance

WHAT IT’S REALLY ABOUT

Anthropic boosts Claude Code with routines, agents, memory, higher limits

  1. Claude Code now supports “Routines,” letting you trigger scheduled jobs (cron) or event-driven runs via HTTP and GitHub webhooks, locally or in the cloud.
  2. Claude Managed Agents adds “Outcomes,” where you define success with a rubric and a grader that can iterate up to 20 times until the work meets the standard.
  3. The API now supports explicit multi-agent teams (up to ~25 agents) with an orchestrator/delegates hierarchy and per-agent tool access working on a shared filesystem/container.
  4. “Dreams” (research preview) introduces an on-demand way to synthesize long-term agent memory by reviewing many prior sessions and writing key takeaways to disk.
  5. Claude usage capacity increases: five-hour limits are doubled across plans, peak hours are removed for Pro/Max, and Opus API rate limits are raised.

IDEAS WORTH REMEMBERING

5 ideas

Automate recurring Claude Code work with first-class scheduling.

Routines bring cron-like automation directly into Claude Code, replacing manual weekly tasks (e.g., generating a newsletter draft from a changelog). You can run routines locally on your laptop or remotely in the cloud.
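The episode doesn’t show the Routines configuration itself, so the sketch below only illustrates the cron semantics of the weekly-newsletter example (“every Monday morning”). The `is_due` function is an invented name for illustration, not part of Claude Code — Routines would handle this scheduling natively:

```python
from datetime import datetime

# Hypothetical sketch of a "run every Monday at 09:00" trigger
# (cron equivalent: "0 9 * * 1"), modeled with the stdlib only.

def is_due(now: datetime) -> bool:
    """True when the weekly newsletter routine should fire."""
    return now.weekday() == 0 and now.hour == 9 and now.minute == 0

print(is_due(datetime(2026, 5, 11, 9, 0)))  # a Monday at 09:00
print(is_due(datetime(2026, 5, 12, 9, 0)))  # a Tuesday
```

The point of a first-class Routines feature is that this check (and the process that keeps it alive) disappears: you declare the schedule and Claude Code runs the job, locally or in the cloud.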

Treat agent success as a measurable spec, not a one-shot prompt.

Outcomes formalize “done” via a rubric (markdown provided inline or via Files API) plus a grader, enabling the agent to self-evaluate and retry up to 20 iterations to meet your bar.
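Anthropic runs this loop inside Managed Agents, so the following is only a sketch of the pattern the episode describes; `run_agent` and `grade` are hypothetical placeholders standing in for the agent run and the rubric grader, not real API calls:

```python
from typing import Callable

def iterate_to_outcome(
    task: str,
    run_agent: Callable[[str, str], str],     # (task, feedback) -> draft
    grade: Callable[[str], tuple[int, str]],  # draft -> (score 0-100, feedback)
    passing: int = 90,
    max_iterations: int = 20,                 # the cap the episode cites
) -> str:
    """Draft, self-grade against the rubric, and retry until 'done'."""
    feedback = ""
    draft = ""
    for _ in range(max_iterations):
        draft = run_agent(task, feedback)
        score, feedback = grade(draft)
        if score >= passing:
            break
    return draft
```

The key design idea is that “done” lives in the grader, not the prompt: tightening the rubric raises the bar without rewriting the task.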

Use multi-agent teams to encode different viewpoints and responsibilities.

You can define an orchestrator with delegate agents—each with distinct tools and roles (e.g., strategy voice, critic, technical reviewer)—to collaborate in parallel on the same filesystem/container.
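One way to picture the team shape the episode describes is as an orchestrator plus delegates, each with a tool allowlist. The classes and field names below are illustrative, not Anthropic’s actual API schema:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    role: str
    tools: frozenset[str] = field(default_factory=frozenset)

    def can_use(self, tool: str) -> bool:
        return tool in self.tools

@dataclass
class Team:
    orchestrator: Agent
    delegates: list[Agent]

    def __post_init__(self) -> None:
        if len(self.delegates) + 1 > 25:
            raise ValueError("episode cites a cap of ~25 agents per team")

# The PRD example from the episode: distinct viewpoints, distinct tools.
team = Team(
    orchestrator=Agent("lead", "coordinate the team", frozenset({"read", "write"})),
    delegates=[
        Agent("strategist", "product strategy voice", frozenset({"read"})),
        Agent("critic", "challenge the draft", frozenset({"read"})),
        Agent("reviewer", "technical review", frozenset({"read", "git_push"})),
    ],
)
```

Modeling tools as per-agent allowlists is also the natural place to enforce least-privilege, e.g. only the reviewer holds `git_push`.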

Memory management is becoming an explicit platform primitive.

Dreams shifts memory from “write on close/hook” to an on-demand consolidation step across many sessions, producing curated markdown memories that improve future runs.

Event-driven routines enable deeper integration with developer workflows.

Because routines can be triggered by GitHub webhooks or generic HTTP, you can tie Claude actions to PR events, CI signals, or external systems (and then post results to Slack via connectors).
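However the routine itself is wired up, any GitHub webhook receiver should verify the delivery’s `X-Hub-Signature-256` header before acting. That verification step is standard GitHub webhook practice (not specific to Claude Code); the secret and payload below are made-up examples:

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header ("sha256=<hexdigest>")."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)

# Example with an invented secret and payload:
secret = b"example-secret"
payload = b'{"action": "opened"}'
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, payload, header))  # True
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing signatures.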

WORDS WORTH SAVING

5 quotes

One of the updates that I know we've all been waiting for is routines, the ability to trigger events or actions on a schedule.

Claire Vo

You define what done looks like for an agent. It can self-grade and iterate until it gets there.

Claire Vo

Define a rubric, give the agent the task, let it bang its head against that at least 20 times till it gets it right.

Claire Vo

So now you can, through the API, explicitly define a multi-agent team that's going to work against the same container, the same file system, up to 25, which is kind of amazing.

Claire Vo

Side note, I think we think a lot about agent memory, but not a lot about agent forgetting, so I'm looking forward to, like, the purge version of this, which is dreams that tell you what to forget.

Claire Vo

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

For Claude Code Routines, what are the practical differences and tradeoffs between running locally versus in the cloud (security, reliability, access to files)?

Claude Code now supports “Routines,” letting you trigger scheduled jobs (cron) or event-driven runs via HTTP and GitHub webhooks, locally or in the cloud.

How should a rubric be structured for Outcomes so the grader is consistent (e.g., scoring, pass/fail gates, weighted criteria), and what failure modes have you seen?

Claude Managed Agents adds “Outcomes,” where you define success with a rubric and a grader that can iterate up to 20 times until the work meets the standard.

When an Outcome iterates up to 20 times, what’s the cost/latency impact and how do you cap or early-stop iterations safely in production?

The API now supports explicit multi-agent teams (up to ~25 agents) with an orchestrator/delegates hierarchy and per-agent tool access working on a shared filesystem/container.

In the multi-agent setup, how do agents avoid stepping on each other’s changes when they share the same filesystem/container—are there recommended coordination patterns?

“Dreams” (research preview) introduces an on-demand way to synthesize long-term agent memory by reviewing many prior sessions and writing key takeaways to disk.

Which tools/connectors can each delegate agent access, and how do you enforce least-privilege (e.g., critic can’t push to GitHub, reviewer can)?

Claude usage capacity increases: five-hour limits are doubled across plans, peak hours are removed for Pro/Max, and Opus API rate limits are raised.

Chapter Breakdown

What launched at “Code with Claude” and why it matters

Claire recaps attending Anthropic’s first developer event and previews five practical updates across Claude Code and Claude Managed Agents. She frames the video as a fast tour of what shipped, how it works, and what she’d build with it.

Claude Code Routines: scheduled or event-driven automations

The first major update is “Routines” in the Claude Code app—automations you can trigger on a schedule or via webhooks. Claire highlights this as a long-awaited feature for turning repeated manual workflows into dependable background jobs.

Building a weekly changelog-based newsletter routine

Claire walks through creating a routine that reads a project changelog and drafts a customer-facing newsletter every Monday morning. She explains the prompt constraints to keep the content focused on customer value rather than internal work.

Routines + connectors: Slack/GitHub integrations and team workflows

She expands the routine idea with connectors and team-oriented triggers. The point is that routines can be invoked by other systems and can post results where teams already work.

Managed Agents “Outcomes”: rubric-based self-iteration until done

Next, Claire introduces “Outcomes” in Claude Managed Agents, similar to Codex’s /goal concept. You define what “done” means via a rubric, and the agent self-grades and iterates until it meets the target.

Concrete Outcomes use case: shipping a “ship-ready PRD”

She makes Outcomes tangible by describing PRD creation cycles that typically require repeated feedback and alignment. With a rubric, an agent could iteratively improve a PRD until it matches a “ship-ready” standard.

Multi-agent teams in Managed Agents: orchestrator + delegates

Claire highlights a new multi-agent framework where you can programmatically define a team of agents working in the same container/file system. It supports an explicit hierarchy and per-agent toolsets.

Example multi-agent setup for PRDs: strategy, critic, and implementation review

She proposes how a PRD product could benefit from specialized agent roles. The orchestrator coordinates while sub-agents provide distinct perspectives (product strategy, critique, technical review).

Dreams: on-demand consolidation of agent memory across sessions

The “Dreams” feature focuses on agent memory—writing helpful markdown artifacts to disk so future sessions perform better. Dreams provides an explicit API primitive to review many sessions and synthesize what should be remembered.

Research preview and the bigger idea: memory (and forgetting) as primitives

Claire notes Dreams is in research preview and not broadly accessible yet, but it signals how labs are productizing agent memory. She also raises the complementary need for “forgetting” and purging stale or harmful memories.

Usage and rate limit increases: more time, fewer peak-hour constraints

The final announcement is practical: higher Claude Code usage limits and improved rate limits. Claire emphasizes this as the change many users will feel immediately day-to-day.

Wrap-up: the five takeaways and what they signal about Anthropic’s strategy

Claire quickly reiterates the five launches and why they’re useful now: automation (Routines), goal-converging agents (Outcomes), coordinated teams (multi-agent), improved memory workflows (Dreams), and higher limits. She closes by positioning Anthropic as aiming to be a leading agent platform for builders.
