At a glance
WHAT IT’S REALLY ABOUT
Asana AI teammates: multiplayer agents built using Claude Managed Agents
- Asana positions its AI teammates as “actors in the system” that collaborate with multiple humans to complete complex, multi-step work rather than one-off, single-user interactions.
- A core differentiator is shared enterprise memory and rich contextual grounding via Asana’s 17-year “work graph” (goals→portfolios→projects→tasks/approvals), while preserving RBAC, guardrails, and auditability.
- Claude Managed Agents are used to accelerate prototyping and improve output quality through built-in multi-step execution, verification loops, and an outcomes grader.
- A demo shows a marketer generating a campaign brief plus an HTML landing-page mock, then iterating via comments with additional teammates reviewing in a shared, fully auditable thread.
- In Q&A, the team explains how Asana context is passed into outcome definitions, how rubric design is treated like prompt engineering with rapid iteration, how skills are largely “shrink-wrapped” for GA quality control, and how third-party integrations run both inside Asana’s own loop and via MCP with Managed Agents.
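The rubric-as-prompt-engineering idea above can be sketched in a few lines. This is a hypothetical toy, not Asana’s or Anthropic’s actual API: the `Outcome`, `grade`, and `refine` names are invented, and the keyword-matching grader stands in for an LLM judge scoring each rubric criterion.

```python
# Hypothetical sketch of rubric-driven iteration toward a graded outcome.
# All names (Outcome, grade, refine) are illustrative stand-ins.
from dataclasses import dataclass


@dataclass
class Outcome:
    goal: str
    rubric: dict[str, str]      # criterion keyword -> description
    pass_threshold: float = 0.8


def grade(draft: str, outcome: Outcome) -> float:
    """Toy grader: fraction of rubric criteria mentioned in the draft.
    A real outcomes grader would use an LLM judging each criterion."""
    hits = sum(1 for crit in outcome.rubric if crit in draft.lower())
    return hits / len(outcome.rubric)


def refine(draft: str, outcome: Outcome, max_rounds: int = 3) -> tuple[str, float]:
    """Iterate until the draft clears the rubric threshold (verification loop)."""
    for _ in range(max_rounds):
        score = grade(draft, outcome)
        if score >= outcome.pass_threshold:
            return draft, score
        # Stand-in for an agent revision step: address missing criteria.
        missing = [c for c in outcome.rubric if c not in draft.lower()]
        draft += " Also covers: " + ", ".join(missing) + "."
    return draft, grade(draft, outcome)


outcome = Outcome(
    goal="Campaign brief for product launch",
    rubric={
        "audience": "names the target audience",
        "budget": "states a budget",
        "timeline": "gives key dates",
    },
)
final, score = refine("Draft brief: audience is mid-market IT buyers.", outcome)
```

The point of the sketch is the shape of the loop: the rubric is editable text, so tightening it and re-running is cheap, which is why the team treats rubric design like prompt engineering.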
IDEAS WORTH REMEMBERING
5 ideas
Most enterprises are still using agents in “single-player mode.”
Asana argues that one-off agent chats don’t compound knowledge or support multi-person workflows; their aim is persistent agents that participate across multiple stakeholders and steps.
Shared memory turns agents into reusable institutional knowledge.
Agent interactions and learned preferences (e.g., campaign color changes) are retained so future users benefit, creating an “enterprise memory” that improves with use.
Context quality comes from the work graph, not just prompts.
Asana grounds agents in structured objects—goals, portfolios, projects, tasks, approvals, historical decisions—so outputs reflect how work actually gets done and why choices were made.
Security and governance are treated like onboarding a real teammate.
Agents operate with RBAC controls, auditable histories, and designated human “sponsors/managers” who can manage access and delete memories to prevent leakage across projects.
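The access-control idea above can be made concrete with a small sketch. This is purely illustrative, assuming a project-scoped memory store; `AgentMemory`, `recall`, and `forget_project` are invented names, not Asana’s API.

```python
# Hypothetical sketch: agent memories scoped by project-level RBAC, with a
# sponsor-style delete. All class and method names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Memory:
    project_id: str
    text: str


class AgentMemory:
    def __init__(self) -> None:
        self._memories: list[Memory] = []

    def remember(self, project_id: str, text: str) -> None:
        self._memories.append(Memory(project_id, text))

    def recall(self, user_projects: set[str]) -> list[str]:
        # Only surface memories from projects the requesting user can access,
        # so learned preferences never leak across project boundaries.
        return [m.text for m in self._memories if m.project_id in user_projects]

    def forget_project(self, project_id: str) -> None:
        # A human sponsor/manager can delete an agent's memories for a project.
        self._memories = [m for m in self._memories if m.project_id != project_id]


mem = AgentMemory()
mem.remember("launch", "Brand color changed to teal after review.")
mem.remember("hiring", "Panel prefers Tuesday interviews.")
visible = mem.recall({"launch"})
```

The filter in `recall` is the “onboard it like a teammate” idea in miniature: the agent’s knowledge compounds, but each reader only sees what their own permissions allow.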
Managed Agents reduce engineering overhead while improving reliability.
Compared with building on a raw messages API, Asana highlights that it did not need to hand-roll loops for file management and code execution, and gained built-in verification plus a grader for iterating toward higher-quality outcomes.
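To see what Managed Agents abstract away, here is a minimal sketch of the loop a team would otherwise hand-roll on a raw messages API: call the model, run any requested tool, feed results back, and re-check until the output verifies. Every name here (`run_agent`, the reply dict shape, the stubs) is a hypothetical stand-in, not a real SDK.

```python
# Hypothetical manual agent loop over a raw messages-style API.
# call_model, run_tool, and verify are caller-supplied stand-ins.
def run_agent(task, call_model, run_tool, verify, max_steps=5):
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(transcript)
        transcript.append({"role": "assistant", "content": reply["content"]})
        if reply.get("tool"):  # model asked to use a tool (e.g. code execution)
            result = run_tool(reply["tool"], reply.get("args", {}))
            transcript.append({"role": "tool", "content": result})
            continue
        if verify(reply["content"]):  # the verification loop you'd build yourself
            return reply["content"]
        transcript.append({"role": "user", "content": "Output failed checks; revise."})
    raise RuntimeError("No verified output within step budget")


# Toy stubs: the first model reply fails verification, the second passes.
replies = iter([{"content": "draft"}, {"content": "final draft"}])
result = run_agent(
    "write brief",
    call_model=lambda transcript: next(replies),
    run_tool=lambda name, args: "",
    verify=lambda content: "final" in content,
)
```

Everything inside `run_agent` (tool dispatch, retries, verification, step budgets) is the plumbing Asana says it avoided writing, which is where the prototyping-cost savings come from.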
WORDS WORTH SAVING
5 quotes
Our vision at Asana is to bring forward this promise of the Agentic Enterprise, where human beings and AI agents can work together to get complex multi-step work done.
— Arnav
But most companies are still using AI agents in this single-player way, where an individual can go ahead and interact with the agent, get an outcome, and then they pass it on to someone else to complete those multi-step processes.
— Arnav
You don't end up with compounding knowledge. You don't end up with this concept of a shared enterprise memory.
— Arnav
So our vision is full multiplayer mode. You have agents that are real actors in the system. They've got sharing and RBAC controls, just like you would be onboarding a new teammate, a new human teammate, into the system.
— Arnav
So it helped us reduce our prototyping costs. We got much faster prototyping for these actions and skills.
— Arnav
High-quality AI-generated summary created from a speaker-labeled transcript.