How I AI

Claude Code Just Got WAY More Powerful

I break down the biggest announcements from Anthropic’s “Code with Claude” event and what they actually mean for builders shipping AI products today. From scheduled AI routines to outcome-based agents, multi-agent orchestration, and new memory systems, I walk through the features I’m most excited to use immediately, and how they could reshape the future of agentic software.

*What you’ll learn:*

1. How Claude Code routines let you automate recurring workflows on schedules or webhooks
2. What “Outcomes” are and how rubric-based agent grading works
3. How multi-agent orchestration enables specialized AI teams with different roles and tools
4. Why Anthropic’s new “Dreams” memory system matters for long-term agent behavior
5. The biggest launch today (higher rate limits!)
6. How I think about building practical agentic products today

*Links and resources:*

• Code with Claude: https://claude.com/code-with-claude
• Claude Code Routines Docs: https://code.claude.com/docs/en/routines
• Define Outcomes Docs: https://platform.claude.com/docs/en/managed-agents/define-outcomes
• Dreams Docs: https://platform.claude.com/docs/en/managed-agents/dreams
• Multi-Agent Docs: https://platform.claude.com/docs/en/managed-agents/multi-agent
• Managed Agent Webhooks Docs: https://platform.claude.com/docs/en/managed-agents/webhooks#supported-event-types
• Codex (OpenAI): https://openai.com/codex
• GitHub: https://github.com

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo · Host
May 7, 2026 · 11m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Anthropic boosts Claude Code with routines, agents, memory, higher limits

  1. Claude Code now supports “Routines,” letting you trigger scheduled jobs (cron) or event-driven runs via HTTP and GitHub webhooks, locally or in the cloud.
  2. Claude Managed Agents adds “Outcomes,” where you define success with a rubric and a grader that can iterate up to 20 times until the work meets the standard.
  3. The API now supports explicit multi-agent teams (up to ~25 agents) with an orchestrator/delegates hierarchy and per-agent tool access working on a shared filesystem/container.
  4. “Dreams” (research preview) introduces an on-demand way to synthesize long-term agent memory by reviewing many prior sessions and writing key takeaways to disk.
  5. Claude usage capacity increases: five-hour limits are doubled across plans, peak hours are removed for Pro/Max, and Opus API rate limits are raised.

IDEAS WORTH REMEMBERING

5 ideas

Automate recurring Claude Code work with first-class scheduling.

Routines bring cron-like automation directly into Claude Code, replacing manual weekly tasks (e.g., generating a newsletter draft from a changelog). You can run routines locally on your laptop or remotely in the cloud.

Treat agent success as a measurable spec, not a one-shot prompt.

Outcomes formalize “done” via a rubric (markdown provided inline or via Files API) plus a grader, enabling the agent to self-evaluate and retry up to 20 iterations to meet your bar.
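The rubric-plus-grader mechanism can be sketched as a simple loop. This is an illustrative assumption of the shape of the pattern, not the Managed Agents API; `run_with_outcome`, its signature, and the passing-score threshold are all hypothetical.

```python
# Minimal sketch of a rubric-based grading loop in the spirit of
# Outcomes: an agent produces work, a grader scores it against the
# rubric, and the loop retries until the work meets the bar.

MAX_ITERATIONS = 20  # Outcomes allows up to 20 grading iterations

def run_with_outcome(agent, grader, task, passing_score=0.9):
    """Run `agent` on `task`, feeding grader feedback back in,
    until the rubric score passes or the iteration cap is hit."""
    result, feedback = None, None
    for attempt in range(1, MAX_ITERATIONS + 1):
        result = agent(task, feedback)    # produce (or revise) the work
        score, feedback = grader(result)  # score against the rubric
        if score >= passing_score:        # "done" is a measurable spec
            return result, attempt
    return result, MAX_ITERATIONS
```

The key design point is that “done” lives in the grader, not the prompt: the agent keeps revising against an explicit standard instead of getting one shot.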

Use multi-agent teams to encode different viewpoints and responsibilities.

You can define an orchestrator with delegate agents—each with distinct tools and roles (e.g., strategy voice, critic, technical reviewer)—to collaborate in parallel on the same filesystem/container.
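The orchestrator/delegates shape described above can be sketched as follows. The class names, the dict standing in for the shared filesystem, and the dispatch logic are illustrative assumptions, not Anthropic’s API.

```python
# Hypothetical sketch of a multi-agent team: an orchestrator fans a
# task out to delegates, each with its own role and tool allow-list,
# all writing into one shared workspace.

from dataclasses import dataclass, field

@dataclass
class Delegate:
    name: str
    role: str    # e.g. "critic", "technical reviewer"
    tools: list  # per-agent tool access

@dataclass
class Orchestrator:
    delegates: list
    # Stand-in for the shared filesystem/container all agents work on.
    workspace: dict = field(default_factory=dict)

    def dispatch(self, task: str) -> dict:
        # Each delegate records its contribution in the shared
        # workspace under its own name.
        for d in self.delegates:
            self.workspace[d.name] = f"[{d.role}] output for: {task}"
        return self.workspace
```

Giving each delegate a distinct role and tool list is how different viewpoints get encoded: a critic that can only read files cannot accidentally rewrite them.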

Memory management is becoming an explicit platform primitive.

Dreams shifts memory from “write on close/hook” to an on-demand consolidation step across many sessions, producing curated markdown memories that improve future runs.
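The consolidation step can be sketched roughly like this. The flagging convention (`! ` lines) and the markdown layout are assumptions for illustration; the real feature has the model itself review prior sessions.

```python
# Sketch of on-demand memory consolidation in the spirit of Dreams:
# review many prior session transcripts and distill them into one
# curated markdown memory document written to disk for future runs.

def consolidate_sessions(sessions):
    """Reduce a list of session transcripts to key takeaways,
    formatted as a markdown memory file."""
    takeaways = []
    for transcript in sessions:
        # Stand-in for an LLM summarization pass: keep only lines
        # the session flagged as important.
        takeaways += [line[2:] for line in transcript.splitlines()
                      if line.startswith("! ")]
    bullets = "\n".join(f"- {t}" for t in takeaways)
    return f"# Long-term memory\n\n{bullets}\n"
```

The shift is from per-session hooks (“write on close”) to a batch pass over many sessions, so the memory file reflects patterns rather than whatever the last run happened to note.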

Event-driven routines enable deeper integration with developer workflows.

Because routines can be triggered by GitHub webhooks or generic HTTP, you can tie Claude actions to PR events, CI signals, or external systems (and then post results to Slack via connectors).
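The event-to-routine binding can be pictured as a small dispatch table. The event names mirror GitHub webhook conventions, but the routine names and the binding table itself are hypothetical, not Claude Code configuration.

```python
# Illustrative dispatcher: map an incoming webhook event type to the
# routine that should run in response.

ROUTINE_TRIGGERS = {
    "pull_request.opened": "review-pr",
    "push": "summarize-changes",
    "workflow_run.completed": "triage-ci-failure",
}

def route_event(payload: dict):
    """Return the routine bound to a webhook payload's event type,
    or None if nothing is configured for it."""
    return ROUTINE_TRIGGERS.get(payload.get("event"))
```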

WORDS WORTH SAVING

5 quotes

One of the updates that I know we've all been waiting for is routines, the ability to trigger events or actions on a schedule.

Claire Vo

You define what done looks like for an agent. It can self-grade and iterate until it gets there.

Claire Vo

Define a rubric, give the agent the task, let it bang its head against that at least 20 times till it gets it right.

Claire Vo

So now you can, through the API, explicitly define a multi-agent team that's going to work against the same container, the same file system, up to 25, which is kind of amazing.

Claire Vo

Side note, I think we think a lot about agent memory, but not a lot about agent forgetting, so I'm looking forward to, like, the purge version of this, which is dreams that tell you what to forget.

Claire Vo

• Claude Code Routines (cron, HTTP, GitHub webhook triggers)
• Local vs cloud routine execution
• Connectors (Slack, GitHub) inside routines
• Managed Agents “Outcomes” and rubric-based grading
• Up-to-20-iteration self-improvement loop
• Multi-agent orchestration (orchestrator + delegates, shared container)
• “Dreams” memory consolidation and session summarization
• Usage limit and rate-limit increases (Pro/Max/Team/Enterprise, Opus API)

High quality AI-generated summary created from speaker-labeled transcript.
