Lenny's Podcast
Cat Wu: How Anthropic shrunk shipping from months to a day
Through research-preview launches and weekly metrics readouts, Anthropic has done away with heavy PRDs; anyone on the Claude Code or Cowork team can ship in a single week.
EVERY SPOKEN WORD
85 min read · 16,569 words
- 0:00 – 1:29
Introduction to Cat Wu
- Cat Wu
I think it is very hard to be the right amount of AGI pilled. It's very easy to build a product for the super AGI strong model. The hard thing is figuring out, for the current model, how do you elicit the maximum capability?
- Lenny Rachitsky
I've never seen anything like the pace you folks at Anthropic are shipping at.
- Cat Wu
We wanna remove every single barrier to shipping things. The timelines for a lot of our product features have gone down from six months to one month and sometimes to even one day.
- Lenny Rachitsky
You're interviewing hundreds of PMs, and you just keep feeling like they're approaching it very incorrectly.
- Cat Wu
The PM role is changing a lot. It's changing really quickly. The thing that is extremely important for building AI-native products is iterating so quickly, figuring out a way for you to actually launch features every single week.
- Lenny Rachitsky
What do you think are the emerging skills PMs need to develop?
- Cat Wu
It comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write.
- Lenny Rachitsky
Today, my guest is Cat Wu, head of product for Claude Code and Cowork at Anthropic. Cat is at the center of everything that is changing in AI and product and building, and she and her team are building the product that is most changing the way that we all build our products. She is so full of insights and wisdom and lessons. This is an episode you cannot miss. Before we get into it, don't forget to check out lennysproductpass.com for an insane set of deals available exclusively to Lenny's newsletter subscribers. With that, I bring you Cat
- 1:29 – 4:29
Working with Boris Cherny
- Lenny Rachitsky
Wu. [gentle music] Cat, welcome to the podcast.
- Cat Wu
Thanks for having me.
- Lenny Rachitsky
I have so many questions. I'm so excited to have you on this podcast. I wanna start with giving people an understanding of your role alongside Boris. Uh, everybody knows Boris. This... He's... His episode is the number one most popular episode on this podcast. No pressure. He, uh, created Claude Code. He leads the Eng team, ships, uh, a, a bazillion PRs a day from his phone, just, like, I don't even know what the number is anymore. I think people don't give you enough credit for the success that Claude Code has had and Cowork and all the things y'all are building. Help us understand your role on the team, how you work with Boris, how you split responsibilities, just, like, what does the PM role look like on, on the Claude Code team?
- Cat Wu
I feel very lucky to work with Boris. He's been an amazing thought partner. He's our tech lead. He's very much the product visionary, and he is great at setting, like, this is what the product needs to be in, like, three months, six months from now. This is, like, what the AGI-pilled version of the product is. And a lot of my role is figuring out, okay, what is the path from where we are today to, like, that vision three to six months from now? And I, I spend more of my time on the cross-functional, so making sure that our marketing team, sales team, finance, capacity, et cetera, are, like, bought in on the plan and that we're all rowing the same direction and that once the feature is ready, that there aren't any blockers to shipping it. I think in many ways it works well because we kind of, like, mind meld, but it is actually, like, remarkably blurry of a line. Like, I think we're, like, eighty percent mind meld, and then there's, like, this twenty percent of things that, like, maybe I care a lot more about than Boris, so, like, I'll drive those, and then, like, twenty percent where he cares a lot more than me, and he just, like, drives those.
- Lenny Rachitsky
This episode is brought to you by our season's presenting sponsor, WorkOS. What do OpenAI, Anthropic, Cursor, Vercel, Replit, Sierra, Clay, and hundreds of other winning companies all have in common? They are all powered by WorkOS. If you're building a product for the enterprise, you've felt the pain of integrating single sign-on, SCIM, RBAC, audit logs, and other features required by large companies. WorkOS turns those deal blockers into drop-in APIs with a modern developer platform built specifically for B2B SaaS. Literally every startup that I'm an investor in that starts to expand upmarket ends up working with WorkOS, and that's because they are the best. Whether you are a seed-stage startup trying to land your first enterprise customer or a unicorn expanding globally, WorkOS is the fastest path to becoming enterprise-ready and unblocking growth. It's essentially Stripe for enterprise features. Visit workos.com to get started or just hit up their Slack where they have actual engineers waiting to answer your questions. WorkOS allows you to build faster with delightful APIs, comprehensive docs, and a smooth developer experience. Go to workos.com to make your app enterprise-ready
- 4:29 – 6:18
What Anthropic looks for when hiring PMs
- Lenny Rachitsky
today. Something that you shared actually before we started recording is the fact that you're interviewing, uh, hundreds of PMs all the time. Like, if I had a nickel every time someone asked me for an intro to someone at Anthropic to go work at Anthropic as a PM, I'd be, I'd be... I'd have thirty billion in ARR.
- Cat Wu
[chuckles]
- Lenny Rachitsky
It's just, like, the number one place people wanna go work at, so I can only imagine how many PMs you're interviewing. You told me that you're just seeing people doing it, doing it wrong, the way they're approaching what they think it takes to be a successful AI PM. Talk about what you're seeing and what people need to understand about what it is, what it takes to be successful these days.
- Cat Wu
I think before AI, technology shifts were a lot slower, so you could plan on these six- to twelve-month time horizons. And because you were shipping features at a bit of a slower rate, there was a lot more emphasis on coordinating with all the other partner teams to make sure that they're shipping features that unblock your features, because code at that time was very expensive to make. Um, I think now with AI and with how much that has accelerated engineering and with how quickly the model capabilities are improving, the timelines for a lot of our product features have gone down from six months to one month and sometimes to one week or even one day. And with that, we actually need to make sure that products ship quite quickly. And what that means is, as a PM, there should be less emphasis on making sure that you're aligning your, like, multi-quarter roadmaps with your partner teams and more emphasis on, okay, how can we figure out the fastest way to get something out the door? How can we figure out how to make, like, a concept corner of our product suite where we can just... An engineer has an idea or a PM has an idea, and, like, by the end of the week, we are able to get it into
- 6:18 – 8:58
How to help your teams move fast
- Cat Wu
our users' hands. I, I think the PMs who do the best on AI-native products are, are the ones who can figure out how can I like shorten the time from having this idea to actually getting the product in the hands of users and help define what are the most important tasks that need to work out of the box for my product.
- Lenny Rachitsky
So what I love about this is what you're saying is just like people haven't grasped how fast they need to move and what-- how much of the job now is just moving, is helping the team move fast. What, what helps do that? What do you, what do you do, what does your PM team do to help them move this fast other than have access to the, the most advanced models?
- Cat Wu
I think the first thing is to set clear goals because LLMs are so general that actually creates a lot of ambiguity in who we're building for, what problems we're trying to solve, what the top use cases are. And so I think a great PM is able to say, "Okay, our, our key user is professional developers. Uh, the main problem that we want to solve for this feature is maybe there's like too many permission prompts and people are feeling fatigue, and like the, the use case is we, we want professional developers at enterprises to safely get to zero permission prompts." And that actually sets a pretty clear goal because it, it rules out a lot of potential approaches for reducing permission prompts so that people can, uh, get a lot more done with one prompt. And then I think the second thing that's very important is figuring out some repeatable process for getting these features shipped. So, uh, for Claude Code, what we do is we actually ship almost all of our features in research preview. We clearly brand this, um, when we ship something so that users know that this is an early product, this is just an idea, this is just something that we're trying to get feedback on and iterating on, and that this might not be supported forever. And what this does is it reduces, it reduces our commitment for shipping something. We can just get something out in a week or two. And then the third thing that a PM should do is help create the framework for the team so that they know when to pull in cross-functional partners and what those cross-functional partners' expectations are. So for example, we have a really tight process between engineering, marketing, and docs, so when engineers have a feature that they feel is ready and that we've dogfooded internally, they post it in our evergreen launch room, and then Sarah, who leads our docs, and Alex, who leads PMM, and Tariq and Lydia on Devrel just like jump in and can turn around the, the marketing announcement for it the very next day. 
And because we have this really tight process, it lowers the friction for any engineer to ship something. Um, PM is the role
- 8:58 – 10:28
How PRDs and roadmaps have evolved at Anthropic
- Cat Wu
that should be setting this up.
- Lenny Rachitsky
How do PRDs fit into this? The fact that you said that goals are a really important part, just like being aligned on what does success look like, who is this for, who is this not for? Are you writing PRDs? Is it just like a couple bullet points? How does, how has that evolved in the, in the world of a PM?
- Cat Wu
So there's two, two things that we do. One is we have very rigorous metrics, and we do metrics readouts with the entire team every week. The goal of this is to make sure that everyone deeply understands all the facets of our business, what our key goals are, how they're trending, and what drives them. The second thing that we do is we have this list of team principles, and this includes who our key users are, why those are our key users. And the reason that we articulate all of this is so that everybody on the team feels like they understand how our business works, they understand what's important to us and what we're willing to trade off, and it lets people make decisions by themselves without feeling like they're blocked on PM or any other stakeholder.
- Lenny Rachitsky
I love how so much of this is like, okay, we still need PMs in the future. And there's so much talk of like, why do we need PMs? We're just gonna ship and build. We need engineers.
- Cat Wu
Oh, we actually do PRDs sometimes. So I, I think for features that are like particularly ambiguous, it, it does help to write out just a one-pager on what the goals are, uh, what the delightful use cases are, what the failure modes currently are that we need to fix. And there are occasionally some projects, especially things that require heavy infrastructure, that do take many months, and for those situations, we do write PRDs
- 10:28 – 11:54
The Mythos model and Anthropic’s shipping velocity
- Cat Wu
still.
- Lenny Rachitsky
I wanna s-drill a little bit further into just h-how you're able to move so fast. I've never seen anything like the pace folks at Anthropic are shipping at. Like someone made this calendar of launches across Anthropic, and it was literally every day there was like a major feature or product. So one question people had online is, uh, you guys just launched this, uh, incred-- not launched, but built this incredible model, Mythos, that is still in preview 'cause it's so powerful, people are a little afraid of what it can do. Have you guys been using this? Is this part of the reason you've been able to move so fast?
- Cat Wu
We've been moving pretty fast for several quarters now, so I think it, it's not fully Mythos. Um, Mythos is an incredibly powerful model. We do use the models internally, and I think this has increased our rate of shipping a little bit, but I don't think it explains the bulk, bulk of the increase. I, I think a lot of it is the process and the expectation on the team. So we're very low on process. We want to remove every single barrier to shipping things. We want to make sure every single person on the team feels empowered to take their idea from just an idea to like out in the world in less than a week, sometimes even in a day.
- Lenny Rachitsky
Cool. Oh, man. What a, what an advantage to have the best model and also be building product. That's so cool.
- Cat Wu
We are very lucky to be able to work with the frontier models.
- Lenny Rachitsky
[chuckles] Oh my God, what a, what an awesome advantage, just like build the thing and then use it and then accelerate
- 11:54 – 12:53
What happened with the Claude Code source code leak
- Lenny Rachitsky
faster. It's so interesting. There's a couple like these other side things I wanna just kinda go on these like side quests w- on this conversation. There's so much happening with Anthropic, and I just, I'm so curious to get your insight. One is, uh, a week ago or so, the whole source code of Claude Code leaked. Somebody got it out there. I think it was a mistake someone made. Is there anything you can comment there, just like what happened, what went wrong, what should people know?
- Cat Wu
So we immediately looked into this when we saw it. Um, we realized that this was the re-result of human error. There was, um, a human working with Claude to write a PR. This was just an update to how we release our packages, and it actually went through two layers of human review. And so th-this was a result of human error, and we've hardened our processes to make sure that it doesn't happen in the future.
- Lenny Rachitsky
Is this person still at Anthropic? Are they doing all right?
- Cat Wu
Yes, yes. It's, it's a process failure, and the most important thing is to just, like, learn from it and to add more safeguards so that doesn't happen again. And so that's, that's what we've been focused on, and
- 12:53 – 14:19
Integrating with OpenClaw
- Cat Wu
most of those have shipped.
- Lenny Rachitsky
Okay. Uh, another question I had is OpenClaw. Uh, so recently there's been this move to keep people from using Claude subscription with their OpenClaws. People get, uh, got really upset. They're confused why this is happening. It feels like you're-- there's, like, you know, harm caused to the open source community. What, what are people, what do people need to understand about kind of what went into this decision?
- Cat Wu
So we've been seeing a lot of demand for Claude, and we've been working very hard to both scale our infrastructure and also to make our harness more token efficient so that you can get more usage out of it. The subscription wasn't designed for third-party products, which have different, uh, usage patterns than our first-party ones. We spent a bunch of time trying to figure out what is the most seamless transition that we can offer. And so I was very happy to be able to say that everyone gets some credits alongside their subscription. But yeah, we, we did have to make the hard decision that we needed to prioritize our first-party products and our API. And so this is the, this is the decision that resulted from that.
- Lenny Rachitsky
Yeah. This-- like, to me, it makes so much sense. Like, you guys are subsidizing this usage at, like, two hundred bucks a month. And, and there's, like, uh, it's, like, basically unlimited use of this. And, like, I think people don't understand. Businesses are trying to make money. We're trying to be profitable here. [chuckles] We can't just, like, give away compute when it's so in demand, so
- 14:19 – 15:42
How the PM team is structured at Anthropic
- Lenny Rachitsky
I get it. Coming back to the PM team, what does just, like, the PM team look like at Anthropic? How many PMs are there? How are they kind of organized?
- Cat Wu
Yeah. So we have a few PM teams. Um, I think we're maybe around thirty or forty PMs right now. Uh, so we have the research PM team, uh, who Dianne leads, and this team is responsible for understanding all of the feedback from our customers for our models and then feeding that to the best research team to act on it, and they also shepherd the model launch. Um, there's the Claude developer platform team that maintains the APIs that Claude Code is built on top of. And they also release things like managed agents, which is a way for you to build your agents, and we can host it on your behalf. And then there's Claude Code that works on both Claude Code and the Cowork core products. There's Enterprise that helps make Claude Code and Cowork easier to adopt for all of our Enterprise customers. And so this is everything from, like, cost controls, RBAC, security controls, and just making sure that these enterprises feel very confident and comfortable, uh, using, using our tools. And then we also have our growth team that is responsible for growing across our entire product suite. So we work very closely with them on Claude Code and Cowork growth. And I know they also work with, um, our other teams on C-CDP growth, so growth of people
- 15:42 – 17:54
How engineer and PM roles are merging
- Cat Wu
who use the Claude API.
- Lenny Rachitsky
So speaking of growth, so Amol was just on the podcast. He had this really interesting insight that most people haven't been sharing. There's always the sense that we need fewer PMs in the future. What's the... Why do we need PMs? Engineers can just ship. Uh, his take is that because engineers are moving so fast, PMs and designers are squeezed. There's less time to stay on top of everything that is happening. Everyth-- there's a feature shipping every day. So his take is he needs more PMs because it's hard to keep up. What's your take there? Do you feel like there'll be an increase in hiring of PMs? What do you think is going on with the PM profession long term?
- Cat Wu
I think all of the roles are merging. PMs are doing some engineering work. Engineers are doing PM work. Designers are PMing and also landing code. You can either hire a lot more engineers who have great product taste, or you can, uh, keep your engineering hiring the same and hire a lot more PMs to help gui-guide some of their work. Um, on our team, we're pretty focused on hiring engineers with great product taste. This, this way, we can reduce the amount of overhead for shipping any product. Like, there are many engineers on our team who are fully able to end-to-end go from see user feedback on Twitter through to, like, ship a product at the end of the week with almost no product involvement. And this, I think, is actually, like, the most efficient way to ship something. So I, I think, like, engineer and PM are kind of overlapping, and you will get a lot of benefit from having more of either. I think product taste is still a very rare skill to have, and we'll pretty much hire anyone who we feel has demonstrated this strongly.
- Lenny Rachitsky
And your background was in engineering, right?
- Cat Wu
Yeah. I was an engineer for many years. I was then a VC very briefly, uh, before joining Anthropic. And actually, almost all the PMs on our team have either been engineers or ship code, uh, here on Claude Code. And so that, that's one of the things that I think helps build trust with the team and also just enables us to move a lot faster. And then actually, our designers also have been front end engineers
- 17:54 – 20:10
Why product taste is the most valuable skill
- Cat Wu
before.
- Lenny Rachitsky
Wow. 'Cause that's, that's the big question. Like, there's definitely this merging that's happening. The Venn diagrams are combining. I think the big question for a lot of people is if you're coming from engineering or product or design, which of those core skills is gonna be most valuable? I could see at Anthropic and on Claude Code, engineering is very valuable. I'm curious if other companies, if you have a design background, becoming a PM is more valuable or just a PM PM.
- Cat Wu
I still think it comes back to product taste. Like, as code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write. Like, what is the right UX for this feature? What is the most delightful way that a user can experience it? What... Like, we, we get tens of thousands of GitHub issues asking for every single thing under the sun, and it takes a lot of care and taste to figure out, okay, which of these is worth building and what is the right way to build it? And I think that that skill set can come from any background, but I think that's the most important thing. I think the reason why an engineering background is particularly useful, at least for the next few months, is if you have an engineering background, you have a better sense for how hard something should be, and that's often a factor in what you choose to build. So, like, if something is very easy to build, then maybe instead of debating it, you just spend an hour doing it. But if something is harder to build and you know that upfront, then you know that, okay, uh, th- this will just, like, cost a lot more for, for our team to get this out the door. So it helps a bit with the prioritization.
- Lenny Rachitsky
You said, uh, in the next... for the next few months. Is that just, like, because the models will get so good potentially in the next few months you may not even need to know that as mu- as much?
- Cat Wu
I think the valued skill set does change quite frequently, and so it's really hard to predict more than a few months out. So it's less a commentary on what shift I think will happen and more of a commentary that I think large shifts will happen.
- Lenny Rachitsky
So you're not saying that's when Mythos comes out and will change everything, and then we don't need to know anything about engineering.
- Cat Wu
No.
- Lenny Rachitsky
[laughs]
- Cat Wu
I'm just saying that every, every few months it seems like there's a-
- Lenny Rachitsky
Yeah
- Cat Wu
... there's a large increase in coding capability, which then changes what other roles are valuable.
- 20:10 – 22:23
Where human brains will continue to be useful
- Lenny Rachitsky
Yeah.
- Cat Wu
I think the, the most important thing is to be able to, to c- to have this, like, first principles thinking where you can figure out how the tech landscape is changing, what the team really needs from you, and to, like, jump in and fix that hole because I think the work is becoming more amorphous, which means that a great PM is able to understand what all the gaps are to figure out what the highest priority ones are and then to just, like, figure out, okay, how do I learn that skill set or what is, like, the skill set that I have that I can, like, apply to this challenge? So I, I think the current environment values people who are, who are able to wear a lot of hats or able to swap them and are, like, very low ego about what work they do to help the team move faster.
- Lenny Rachitsky
I love this answer. There's this question I've been asking people in your, in your shoes, folks that are kinda at the bleeding edge of what AI's capable of and building with the latest tools, which is just like where will human brains continue to be useful and necessary for a while until we get to super intelligence? What I'm hearing here is essentially picking the things to work on, knowing where the market's going, and figuring out where, what to prioritize essentially, and then it's knowing if the thing you've built is good and right and getting it out there in some early version at least. Does that sound right? Is there anything else of just, like, where human brains will continue to be useful for at least the next few months? [laughs]
- Cat Wu
I think humans still provide a level of common sense that the models don't, and there's, like, 1,000 moving pieces to any product launch. Some of them are very small, but there's always a lot that could potentially go wrong. I think the model doesn't always have a great sense of who all the stakeholders are, how they relate to each other, what their preferences are, what are the right venues to communicate with them to keep them on board. I think a lot of this, like, more tacit common sense, like EQ kind of knowledge i- is still very valuable. Of course, we want the models to get better at this, and I think they will be, but right
- 22:23 – 24:16
How to stay sane in constant chaos
- Cat Wu
now I think there's still gaps.
- Lenny Rachitsky
How do you just kinda deal as a human going through so much constant change, just, like, just being on the inside of the tornado? Maybe it's calm there. [laughs] But just, like, how do you, how do you stay on top of what's going on? How do you stay sane through all this craziness that we're moving through?
- Cat Wu
I think our team is full of people who lean into the chaos, so we try to face every challenge with a smile because there's always so much going on. There's al- there's always so many risks and tricky situations that, you know, if you get too stressed about anything you'll burn out. And so we really look for people who can kind of, like, look at a challenge and be like, "Oof, that's gonna be hard, but I'm excited to tackle it and I'm gonna do the best that I possibly can, and I know I won't be perfect, but I'll be able to sleep at night knowing that I did my best."
- Lenny Rachitsky
That's an interesting answer to just, like, what skills will be important in this future 'cause it's... I forget who said this, maybe Ben Mann, that this is the most normal this is... the world will ever be. [laughs]
- Cat Wu
[laughs] Yeah. It definitely gets harder. Like, I feel like there are a lot of weeks where maybe Sunday night there's some, like, P0, and then by Monday there's, like, a P00, and by Monday afternoon there's a P000, and you're like, "Wow, I can't believe I was so worried about that P0 from Sunday." [laughs]
- Lenny Rachitsky
[laughs]
- Cat Wu
But I think you just have to acknowledge that there's only so much that you can do, that you need to sleep well so that you can make good decisions the next day, and just, like, brutally prioritize where you spend your time, what's the most important thing to get right, and be okay letting things go. Like, there's, there's products that we ship that aren't as polished as I wish they were, but, you know, our, our top goal is to help empower professional developers, and if a product isn't successful, as long as it's not blocking the core use case, it's okay because we'll hear the feedback and we'll fix it
- 24:16 – 27:47
What gets sacrificed when you ship so fast
- Cat Wu
in the next release. Launching a feature that is buggy is the kinda thing that would have kept me up at night, but it is something that I am now able to, like, live with, knowing that, okay, we're gonna get that quick feedback, and we're going to fix it in the next release.
- Lenny Rachitsky
What I'm imagining is there's that GIF, I think it's maybe from Pirates of the Caribbean, where it's this guy walking down a pair of stairs on a ship, and the whole ship is just being demolished around him, and he's so chill, just strolling down the staircase as everything's falling apart [chuckles] . And that's interesting, 'cause everyone I've met through... from Anthropic is just so chill, and just so, like, optimistic.
- Cat Wu
You have to be.
- Lenny Rachitsky
Yeah. That's... I think that's a really interesting insight, is just, like, having this calmness and optimism versus just, like, "Oh my God, everything's crazy and going, going nuts."
- Cat Wu
Yeah. I think if you don't have it, you'll get pretty burnt out. I, I think we also tend to hire people who have been in the industry for a while and have experienced lots of ups and downs and have a good sense for what gives them energy and how to maintain their energy over time. And I think that's helped us a lot.
- Lenny Rachitsky
So interesting. Something that I wanted to ask about is, so there's these roles blurring. Engineers are becoming PMs. Everyone's dogs or cats. Everyone's everyone. What, what do we lose in that, in that world? Do we lose, like, career ladders and clear career paths? Do we lose design consistency, code quality? You know, there's probably some downsides. What are some things you find are just like, "Okay, that's something we're sacrificing for the greater good"?
- Cat Wu
We're sacrificing product consistency. Historically, when code was expensive to write, you would carefully plan out everything in your product suite, how every product relates to each other, what the use case for every single one is, how they integrate, and you would pretty much have one product for each use case. And now with AI moving so quickly and with so many ideas that we need to test out, we do sometimes have features that overlap with each other. A lot of the times it's because there's two form factors that we love internally, and we want to... we want the external audience to tell us which one is better. What that means for someone who's a new user though is a new user might not know, okay, what is the best path to accomplish X? There is more education we need to do to help people understand what the core features are and what the best practices are for using them. I, I think this is the, this is the cost of launching a lot of features. Um, I think users also feel like it's hard to keep up with the latest. Usually in traditional PM, you ship a feature every, like, month or quarter, and so it's really easy for a user to, to understand, "Okay, I just need to check in on this once a month, and I'll learn some new things, and if I ignore it for six months, i- it's fine. I don't feel like I'm missing out." I think with these agentic tools, not just Claude Code and Cowork, but, like, across the whole ecosystem, people feel this need to, like, check Twitter every single day to see what the absolute latest thing is. And I think there's more we can do to help people feel less like they're on this ever-increasingly fast treadmill, and that they feel like... I, I would love people to feel like they can just open these tools, the tools will educate them, um, or, like, teach them what they wanna know and that they can just feel
- 27:47 – 28:32
The /powerup command
- Cat Wu
more bought along.
- Lenny Rachitsky
Yeah. I saw you launch this really interesting feature the other day, I think it's /powerup, where it basically walks you through all the cool ways and the all... basically, like, all the best practices to use Claude Code. Is that kind of along these lines?
- Cat Wu
Yeah, exactly. So in the past, we didn't actually want to do something like Power Up 'cause we felt like the product should be intuitive enough that you can... that you don't actually need to go through any tutorial. And over time, we've just realized that there's just so many features and there's so much demand for a built-in onboarding experience that we, we diverged a bit from our original principle of saying no, no onboarding flow, and added this because there's just so many users who wanted to know there's 100 features, what are the 10 that I absolutely need to use? And so we put that together.
- 28:32 – 32:28
Why Anthropic has been so successful
- LRLenny Rachitsky
Yeah. It's such a bizarre world. So Anthropic has been really successful with B2B enterprises, where traditionally you don't launch a bunch of stuff. You just kind of have a quarterly release maybe, and it's like the opposite of every day we've got something new. So just maybe following that thread, the run Anthropic has been on is just otherworldly. Anthropic was way behind when it started. Amol shared this: it was, like, one of the least funded companies, didn't have distribution, wasn't the first to go. OpenAI was way ahead, and it was just like, "No way Anthropic has any chance to compete significantly long term." Now it's just killing it, beating the biggest companies. The growth is just, uh, like, $11 billion in ARR in one month [chuckles] in terms of growth. By the time this comes out, it'll probably be even higher. Just being on the inside, what are some ingredients that have allowed Anthropic to be this successful and kind of come from behind and do this well?
- CWCat Wu
The two most important things are, one, this unifying mission. It's hard to overstate how important this is. We hire people who care most about bringing safe AGI to all of humanity, and this is actually something that we reference frequently in our decisions about what our entire product org should focus on shipping. And because we put this mission above any individual product line, we're able to make very fast decisions that cut across the entire org and execute on them in a unified way. So this is something that I've never seen at a company of our scale.
- LRLenny Rachitsky
And so just to make sure that's clear, so essentially having the, the number one mission is safety, alignment, making sure AI is good for the world, and you're saying just having that as a clear mission makes decisions a lot easier to make.
- CWCat Wu
If there are two competing priorities, we'll talk about which one is more important for Anthropic's mission. And it makes it a lot easier to decide which of the two we prioritize, and then everyone will stand behind the one that we decide. And so sometimes that means that, like, "Hey, we wanna ship something on Claude Code, but this other thing is more important, and so we deprioritize shipping this, and we just wait until later."
- LRLenny Rachitsky
What's really interesting about that is that it explains, I think, versus, uh, another company, maybe rhymes with OpenAI, that did a lot of different things. And what I'm hearing here essentially is like, "Okay, we're not gonna launch a social network. We're not gonna launch, uh, a feed of interesting information, because it's not aligned to this mission." And that has kept Anthropic focused, which seems to be a core ingredient to the success.
- CWCat Wu
Well, when I think about mission, I think about putting Anthropic's goals ahead of any individual org or any individual product. I think the second thing that we're very good at is focus, but mission to me is slightly different. Mission means that teams are willing to make sacrifices that hurt their own goals and their own KRs in service of Anthropic's goals and Anthropic's KRs. And people are very happy to make those trade-offs. So an extreme example is: if Claude Code failed but Anthropic succeeded, I would be extremely happy. And the whole team is very willing to make decisions that follow that chain of thought.
- LRLenny Rachitsky
I don't know if you can talk about this in depth, but do you feel like the OpenClaw decision is a part of this? Just like, "Okay, this is not furthering the mission of Anthropic. We need to stop this because it's not working in the way we want it to work."
- CWCat Wu
I think one of the most important things for Anthropic is to grow the number of users that we're able to reach. One of the ways that we're able to do this is with the Claude subscriptions with our first-party products. And so we just very much want to double down on that, but that does come at the expense of third-party products
- 32:28 – 35:58
When to use Claude Code vs. Desktop vs. Cowork
- CWCat Wu
sometimes.
- LRLenny Rachitsky
So we've been talking about Claude Cowork, all these things. Something that I, I wanna make sure people get, and I'm curious just how you use these tools. So there's Claude Code, there's Claude Desktop/Web, there's Cowork. What's the best way to understand when to use which? When do you use each of these three?
- CWCat Wu
So I tend to use Claude Code in the terminal when I'm kicking off a one-off coding task and I want all of the latest features. The CLI is our initial product surface, and it's also the one where our features often land first, so it's the most powerful of all the tools. That's what I tend to use when I'm trying to kick off one or maybe a handful of tasks at a time. I think Desktop really shines when you're doing something that requires front-end work. One thing that I love to do is use our preview feature. So if I'm building a web app, I'll often use Claude Code in Desktop. I'll have the preview pane open on the right-hand side so that I can actually see the web app that I'm making in real time as I'm chatting with Claude. It's also really great for people who want something a bit more graphical. A terminal can feel very unfamiliar to someone who is non-technical. You get a bunch of these scary pop-ups on your machine, and you can't click around the way that you're used to in pretty much every other product that you use. So there are a lot of people who just don't feel comfortable in the terminal. And if that's you, I would highly recommend checking out Claude Code on Desktop. Desktop is also great for getting an at-a-glance view of everything that's happening. You can see your CLI terminal sessions in Desktop. You can see your other Desktop sessions. You can see the sessions that you kicked off on web and mobile. So it's a one-stop control plane where you can see all of your tasks. I think the benefit of web and mobile is that it's really great for kicking things off on the go. CLI and Desktop both require you to be on your local laptop, and this is constraining because sometimes you're out and about, you're touching grass, you're going on a walk, and you don't have your laptop open, and you don't...
[chuckles] I can't count the number of people who I've seen holding their laptop open, tethered to their phone while they're outside. And this just means that we're missing a product that solves that need. And so for me, what mobile lets you do is kick off these tasks on the go so that you don't need to bring your laptop everywhere and make sure that your laptop's open wherever you are.
- LRLenny Rachitsky
I love that. I've seen people on planes... it's just such a meme now. Just, I need to let this agent finish. I can't shut this down.
- CWCat Wu
Exactly.
- LRLenny Rachitsky
I need Wi-Fi.
- CWCat Wu
And then I think for Cowork, the role that this fills is that there's a lot of work that everyone does where the output isn't code. So whether that's getting to Slack zero or inbox zero, or creating a slide deck for some customer meeting that's coming up, or writing a quick doc on what the goals of a feature are or what the launch plan for a feature is, all these tasks produce outputs that are non-code, and Cowork is best positioned for that. So the way that I split the products in my mind is: if I'm building something where the output is code, I'll use Claude Code or Desktop or Claude Code on mobile. And if the output is anything that's not code, I'll use Cowork for it.
- LRLenny Rachitsky
People are just like sleeping on the success that Cowork is having. It's just like growing incredibly fast. And I think people still don't understand maybe
- 35:58 – 38:44
Tips for getting started with Cowork
- LRLenny Rachitsky
what it's for. And so what if you give us a couple use cases just in your work as a PM? What are some like really interesting, maybe unexpected ways you use Cowork to save you time, get more work done?
- CWCat Wu
If you're getting started on Cowork, the first thing that you really need to do is connect all the data sources that are relevant to your role, because Cowork can only do a great job if it has access to all the context that it needs to be able to curate the output for you. So what that means for me is I connect it to my Google Calendar, I connect it to my Slack, to my Gmail, to my Google Drive, so that it has the flexibility to find relevant context, to ask questions, to pull in threads, and this substantially improves the quality of the result. As for the kinds of things I use it for: last night, for example... we have this Code with Claude conference coming up, and there are a few talks that I'm giving there. One of the talks covers the transition of Claude Code from an assistant to a full-on agent. And one of the things that I wanted to do in this talk was to showcase all of the products that we've been shipping that enable this transition, and also to figure out, okay, what are the success stories that people have had internally that we can use as demos? And so I have my Google Drive connected. I have Slack connected. Alex, who's our product marketer, put together a draft of the points that he thinks we should cover. And so I just fed this all into Cowork. I told Cowork the narrative that I want to tell, and it actually just worked for an hour. It walked through Twitter to see what we launched. It looked through our Evergreen Launch room. It looked in our Claude Code announce channel, which is where our team posts demos of how they've been getting the most value out of Claude Code. And it synthesized all this together into this 20-page deck that I woke up to this morning, and I read through it, and it was pretty good. There were a few tweaks, so I did have to give it a round of feedback.
I like my slides to have extremely minimal words, and it was a little too wordy, but, you know, it was far faster than what I would be able to produce. And because Cowork has access to our whole design system, it actually looks like an Anthropic designer put it together. When you visually see it, you're like, "Oh, this is incredibly polished." So these are the kinds of things that are so much faster. Making this slide deck would have taken me hours. Instead, it churns out a draft that is actually quite good, so that I could focus on making sure that the demos are amazing,
- 38:44 – 41:48
Demo: Using Cowork to build slide decks overnight
- CWCat Wu
that we plug into it.
- LRLenny Rachitsky
This sounds like a dream come true for PMs; putting decks together is so annoying. Uh-
- CWCat Wu
It's so slow. [laughs]
- LRLenny Rachitsky
[laughs] And I love that people will see this deck whenever you present this. This will be out in the world. Obviously it's not the one-shotted version; you've iterated on it. So just to help people try this for themselves, step one is connect their... what did you say? Slack. What else do you suggest they connect?
- CWCat Wu
Slack, Google Calendar, Gmail, Google Drive. You should connect your communications tools and wherever you store your source-of-truth data for what your team cares about, what you care about, and what you're working on.
- LRLenny Rachitsky
Okay. And then what was the prompt roughly that you put in there to generate this deck?
- CWCat Wu
So I just wrote, "Make me a slide deck for the Code with Claude conference. This is what our PMM suggested it should cover. This is the current draft that I made manually that I don't like," and I linked it. "Can you start by creating a proposed outline with details? Also, make sure it doesn't overlap too much with the keynote talk, which is more important." And then Claude read a bunch of the links that I sent to it and created a proposed outline. So then I read through its proposal and all the different ideas that it had generated for what we could cover, and I just made a decision on what I wanted to actually be in the final deck. And I think this is an example of what the role of the PM still is today. Claude is a great brainstorming partner. It's able to synthesize a massive amount of information really quickly and present all of the possibilities to you. But the role of the PM is still to make the end decision of, okay, what should belong in the final product? So for this, what I ended up deciding was that I wanted the talk to cover the progression from making local tasks successful, to making every PR green, to helping engineers land more PRs, and, for each of these, which demo would be the most compelling. And then, after this decision about the outline, Cowork just went off for a few hours and built the whole slide deck.
- LRLenny Rachitsky
This is so awesome. What an awesome part of the job to not have to do anymore. And it feels like you're talking to essentially a deck designer that also has actual knowledge about what you've worked on and can actually make the content what you want it to be, not just make it look really nice. How did you do the design system piece? How does it know the design system of Anthropic?
- CWCat Wu
So what I did for this is we actually already have like a standardized deck that we use across all of our external engagements, and so I just gave Claude access to that. And so it's able to see like what colors we use, what fonts we use, the different kinds of, what's it called? Like slide formats-
- LRLenny Rachitsky
Mm-hmm
- CWCat Wu
... that are possible.
- LRLenny Rachitsky
Got it.
- CWCat Wu
And so it has like 20 of these example slides that it can pull from.
- LRLenny Rachitsky
Got it. So you, like, upload, "Here's our template. Work from this."
- CWCat Wu
Yeah. You can also connect like your Figma MCP if you, if you have your slide format, um, saved there, and it can pull
- 41:48 – 46:47
Cat’s PM tech stack and internal tools
- CWCat Wu
that in.
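For readers who want to try the Figma MCP route Cat mentions: Claude Code reads MCP server definitions from a project-level `.mcp.json` file. A minimal sketch of the shape of that config follows; the server name, command, and package here are illustrative assumptions, not the official Figma server definition.

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-mcp-server"]
    }
  }
}
```

Once a server like this is registered, Claude can call its tools (e.g., to fetch slide templates) the same way it calls built-in ones.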
- LRLenny Rachitsky
Along those lines, something I'm always curious about is what's in your stack of tools as a PM at Anthropic? Obviously, Claude Code and Cowork and all the Anthropic tools. What else are you using? Slack you mentioned. Is there anything else?
- CWCat Wu
So my stack is pretty heavily Claude Code, Cowork, and Slack. Anthropic largely runs on Slack. I feel like it's the core OS of our company. And day to day, I would say maybe 30% of my time is pushing the boundaries of what Cowork and Claude Code can do so that I have a very strong sense of what we're not good at. And I spend a lot of time talking with the model to understand why it makes the mistakes that it does. We actually have a lot of internal tools that we make. I think one of the things that Claude Code has really unlocked for our entire company is that it really lowers the barrier to making any custom app that you want. And so we've seen this surge in personalized work software that people are building for custom use cases, instead of using tools that don't perfectly fit the use case.
- LRLenny Rachitsky
I gotta hear more. What are, what are some examples? What are things you've built, other people have built that are really popular and useful?
- CWCat Wu
One of the sales folks on Claude Code realized he was making these repetitive decks over and over and over again. And so he actually has this web app that he built with examples of the core Claude Code decks that we know work well, like a 101, a 201, and Mastering Claude Code. And then he has a way to input specific customer context that pulls from Salesforce, that pulls from Gong, that pulls from other notes, so that we can customize the decks for specific customers. And so it'll pull out things like, okay, this customer is using Bedrock or Claude for Enterprise or Console, which affects what features are available to them. It'll pull out things like, okay, this customer is concerned about the code review stage of the SDLC, and so we'll add a slide about our code review features there. It'll pull out things like, okay, this customer needs to be HIPAA compliant or needs X, Y, Z security controls, and so we'll make sure to add a slide or two in their deck about that. And, for example, if this is a customer that's on Vertex or Bedrock and doesn't wanna use Claude for Enterprise, then we'll just take out some of the slides that are Claude for Enterprise-only features. Normally this is manual work that could take 20, 30 minutes. And so people either spend that time doing it, or they'll just decide not to do it and use the general deck. With this, it takes a few seconds, and you get a tailored deck.
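The selection logic described here (drop platform-specific slides, add compliance slides on demand) can be sketched in a few lines. Everything below, including the slide names, customer fields, and rules, is a hypothetical reconstruction of the tool's behavior, not its actual code:

```python
# Hypothetical sketch of the deck-tailoring logic described above.
# Slide names and customer fields are illustrative assumptions.

BASE_DECK = ["intro", "claude-code-101", "code-review", "enterprise-admin"]

def tailor_deck(customer: dict) -> list[str]:
    """Return a slide list customized to one customer's context."""
    deck = list(BASE_DECK)
    # Customers on Bedrock/Vertex don't use Claude for Enterprise,
    # so drop Enterprise-only slides.
    if customer.get("platform") in {"bedrock", "vertex"}:
        deck.remove("enterprise-admin")
    # Add compliance slides only when the customer has raised them.
    if customer.get("needs_hipaa"):
        deck.append("security-hipaa")
    # Drop the code-review slide if it's not a stated concern.
    if "code_review" not in customer.get("concerns", []):
        deck.remove("code-review")
    return deck

print(tailor_deck({"platform": "bedrock", "needs_hipaa": True,
                   "concerns": ["code_review"]}))
# → ['intro', 'claude-code-101', 'code-review', 'security-hipaa']
```

The real tool presumably derives the `customer` dict automatically from Salesforce and Gong rather than taking it as input, but the slide-selection step reduces to rules like these.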
- LRLenny Rachitsky
What's interesting is that Slack is, like, the tool that nobody's trying to create their own version of. Slack just continues to win, and the way you describe it, it's kind of the OS of so many companies. It's so interesting. People talk about Salesforce as just, like, SaaS: we don't need SaaS software anymore; we're gonna build our own. But Slack is a durable tool that nobody wants to try to compete with and build a better version of.
- CWCat Wu
I think it's pretty important communications infrastructure, and I think they do the core task of helping everyone get real-time updates incredibly well.
- LRLenny Rachitsky
Yeah, people hate on Slack, but it's really great at what it's trying to do. And [chuckles] the most cutting-edge teams are hooked on it. So interesting.
- CWCat Wu
Yeah, and I also love how easy they've made it to customize. We love making Slack bots, and this kind of hackability means that we're able to integrate with Slack the way that we want to, so I really appreciate Slack's work on that.
- LRLenny Rachitsky
Time to, time to buy some CRM stock. I am so excited to tell you about this season's supporting sponsor, Vanta. Vanta helps over 15,000 companies like Cursor, Ramp, Duolingo, Snowflake, and Atlassian earn and prove trust with their customers. Teams are building and shipping products faster than ever thanks to AI, but as a result, the amount of risk being introduced into your product and your business is higher than it's ever been. Every security leader that I talk to is feeling the increasing weight of protecting their organization, their business, and not to mention their customer data. Because things are moving so fast, they are constantly reacting, having to guess at priorities, and having to make do with outdated solutions. Vanta automates compliance and risk management with over 35 security and privacy frameworks, including SOC 2, ISO 27001, and HIPAA. This helps companies get compliant fast and stay compliant. More than ever before, trust has the power to make or break your business. Learn more at vanta.com/lenny. And as a listener of this podcast, you get $1,000 off Vanta. That's vanta.com/lenny.
- 46:47 – 51:15
Which teams use the most tokens
- LRLenny Rachitsky
Okay, so you talked about all these different teams and how they use Claude Code and Cowork to operate. Which teams do you find, other than engineering... I imagine engineering is the biggest token spender, but if not, that'd be really interesting. What's kind of the second-place function right now for tokens?
- CWCat Wu
Oh, applied AI is amazing at pushing the boundaries of what Claude Code and Cowork can do. A lot of our applied AI team spends time with our customers helping them adopt our API. And so sometimes our applied AI team will, for example, make prototypes on behalf of these customers, which Claude Code makes so much faster than it used to be. They also have the dual goal of needing to manage a lot of customer comms, a lot of customer inbound and historical context, call notes. And so they're both extremely heavy on Cowork and on Claude Code.
- LRLenny Rachitsky
And just to understand applied AI, is that like a forward-deployed engineering sort of role? How would most people describe what the applied AI team is doing?
- CWCat Wu
Yeah. It's helping our customers adopt the latest API and, uh, model features, um, across their company, both for powering their company's products and also for internal acceleration.
- LRLenny Rachitsky
Got it. So it's like customer success and go-to-markety, kind of like forward-deployed engineering sort of thing.
- CWCat Wu
Exactly. It's like a very technical go-to-market person.
- LRLenny Rachitsky
Got it. Okay, awesome. So that's... So you're saying that might be the second, uh, org that uses the most tokens.
- CWCat Wu
Yeah. And then we also see them pushing the boundaries of what Cowork can do. So, for example, a lot of these folks cover multiple customers, and on a high day can have five to 10 customer engagements. And so what they often do with Cowork is, the night before, they'll ask it to summarize: "Okay, what are all my customer meetings coming up the next day? What are all the things that this customer has asked me for? What's top of mind for them? What are the action items from the past meetings?" And Cowork will just put together this dossier, this brief of what they should be aware of going into the next meeting. And Cowork can also research answers. So if a customer asked, "Okay, when is feature X going to launch?" Cowork can help the applied AI person research through Slack to get the latest ETA and add that to the notes, so that during the customer call, the applied AI person has the absolute latest. And these are just workflows that people are building for themselves and sharing with other people on their team.
- LRLenny Rachitsky
So cool. Something that... this question, this trend comes up a lot recently, which is token spend exceeding people's salary, where people just use AI and it costs more than how much they're making. Are there any numbers floating around Anthropic of just, like, how much token spend, say, engineers use a month, a day, or PMs, anything like that?
- CWCat Wu
It is clear to us that as the models get better, people delegate far more tasks to it, and they spend a lot more hours in tools like Claude Code and Cowork. And so we do see the token cost per engineer, or, like, per any knowledge worker, increase every time that there is a model jump or, like, a substantial product improvement. I think it's still much lower than what the average engineer salary is, but we see the percentage increasing over time.
- LRLenny Rachitsky
It's such an interesting... Like, we talked about how you have access to the most cutting-edge models. Another advantage of working at Anthropic, uh, I believe you guys have basically unlimited tokens. You don't... You can use as much as you want. Is that right?
- CWCat Wu
We can use a lot of tokens. Some people do run into limits. So-
- LRLenny Rachitsky
Okay, there's a limit. [laughs]
- CWCat Wu
[laughs]
- LRLenny Rachitsky
Okay. Boris, shut it down.
- CWCat Wu
[laughs]
- LRLenny Rachitsky
Uh, okay. Like, it's so interesting how many advantages come from having the most advanced model. It's such an interesting, like, flywheel that starts to kick in.
- CWCat Wu
I think we also believe a lot in empowering our internal teams to build as fast as possible, and we also trust that everyone understands how much capacity serving these models truly costs, and we trust our team to use the tokens responsibly. So it's very frowned upon to waste tokens, but we do trust individuals to make that judgment call.
- LRLenny Rachitsky
Awesome.
- 51:15 – 55:00
The emerging skills PMs need for AI companies
- LRLenny Rachitsky
Coming back to the PM role, we talked a little bit about this, but I think this will be really interesting for people to hear. What I wanna understand is: what do you think are the emerging skills that PMs need to develop, slash that you most look for, that AI companies most look for, when they're hiring PMs these days?
- CWCat Wu
I think the hardest skill is being able to define what the product should look like a month from now. I think there's a lot of ambiguity in what models are capable of on that timeline and how user behavior will change. But I think there are patterns that the best PMs can see based on how users are pushing against the limits of the existing product. And the best PMs can sense that, can set a direction, and can steadily execute towards it and change the path if the model capabilities are much better or worse than what they'd originally expected. I think it is very hard to be the right amount of AGI-pilled. 'Cause I think everyone can see this future where the models are extremely smart and can do almost everything, in which case you actually don't need that complicated a product. You can actually just have a text box again where you tell the model what you want, and it's so smart that it can add any tool or any integration that it needs to get the job done. It knows when it's uncertain. It can ask clarifying questions. It's kind of very easy to build the product for the super-AGI strong model. I think the hard thing is figuring out, for the current model, how do you elicit the maximum capability? How do you help users get onto the golden path? How do you guide users to interact with the model's strengths and patch its weaknesses? That skill is pretty rare.
- LRLenny Rachitsky
Mm. And how do you build that skill? Is it just using each model, basically understanding its limits? You talked about taste: having taste for what the model is capable of, what it's great and not great at, where it's changed.
- CWCat Wu
I think it's spending a ton of time talking to and using the model. One of the things I really like to do is ask the model to introspect on its own behaviors. So sometimes, when I notice that the model does something unexpected, like, for example, situations where the model will make a front-end change and run tests but not actually use the UI, it's actually pretty useful to ask the model to reflect on why it did this. And sometimes it'll say, "Hey, there was something confusing in the system prompt," or, "I didn't realize that the front-end verification was part of this task," or, "Hey, I delegated the verification to this sub-agent, and the sub-agent didn't do the test, and I didn't check its work." A lot of times, just being very curious about why the model made the decision that it did will show you what misled it, so that you can fix the harness in order to close this gap. The other thing that helps is to figure out who the users are who you trust the most to give you accurate feedback about the model. Usually there's a handful of people who are much better than others at articulating what makes a specific model or model-harness combination good. And there are a lot of people who will give you feedback, but not everyone's feedback is as qualified. And so finding a group of those, like, five people
- 55:00 – 58:44
Why building evals is underappreciated
- CWCat Wu
you trust is really important for getting very fast feedback. I think the third thing that is useful, but not everyone loves doing, is building evals. You don't need to build hundreds of evals for them to be useful. Just building 10 great evals is important for helping the team quantify what the goal is, what their progress towards it is, and what they're missing. And so I think evals are this underappreciated thing that more PMs and more engineers should be working on.
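To make the "10 great evals" idea concrete, here's a minimal sketch of what such a suite can look like: each eval pairs a prompt with a programmatic success check, and the suite reports a pass rate. The model call is stubbed with canned answers, so the prompts, checks, and `run_model` function are all illustrative assumptions rather than a real harness:

```python
# Minimal eval-suite sketch. `run_model` is a stand-in for a real
# model API call; it returns canned answers so the example is runnable.

def run_model(prompt: str) -> str:
    canned = {
        "Reverse the string 'claude'": "edualc",
        "Write a haiku about code review": "Lines flow like water...",
    }
    return canned.get(prompt, "")

# Each eval: (prompt, check-of-output). Checks are plain predicates,
# which keeps results objective and easy to track over time.
EVALS = [
    ("Reverse the string 'claude'", lambda out: out.strip() == "edualc"),
    ("Write a haiku about code review", lambda out: len(out) > 0),
]

def run_evals() -> float:
    """Run every eval against the model and return the pass rate."""
    passed = sum(check(run_model(prompt)) for prompt, check in EVALS)
    return passed / len(EVALS)

print(f"pass rate: {run_evals():.0%}")  # → pass rate: 100%
```

In practice you'd swap `run_model` for a real API call and re-run the same fixed suite on every model or prompt change, which is exactly the "quantify the goal and the progress towards it" role Cat describes.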
- LRLenny Rachitsky
We've covered evals a bunch. There's this trend of saying that writing evals is the future of product management, because essentially it's: what does success look like? Okay, cool, let me actually concretely define it, and then we'll know. How much of your time are you spending writing evals, would you say?
- CWCat Wu
I think the importance of evals varies a bit based on the feature that you're working on, or what the problem you're trying to solve is. So there are a lot of folks on our team who do spend a lot of time working on evals. We have a small pod of folks who collaborate very closely with research to more precisely understand our Claude Code behaviors and what the largest areas of improvement are, and to try to measure those pretty concretely. I personally jump into evals when there's a feature that I think needs a bit more product definition, and often the output of this is, "Okay, here are five evals that I made. This is how you run them. These are the ones that succeed, and these are the ones that don't. And this is the prompt that I've used to increase the success rate." It varies a lot, though, based on the exact feature. Not every feature needs it, but I think features such as memory benefit a lot from it.
- LRLenny Rachitsky
This, uh, point you made about people being very good at evaluating models is so interesting. It's almost like a human eval of just, like, okay, they understand where it's spiking or it's maybe lacking. Uh, is there anyone specific that you want to shout out that's very good at this?
- CWCat Wu
Uh, two people who I think are incredible at this are, one, Amanda, who molds Claude's character. It's just such a hard role because the task is so ambiguous. Even coding is easier, because you can verify the success, whereas crafting the character requires a very strong sense of conviction in who Claude should be. And I think she has an incredible ability to not only mold the character, but also to articulate what the goals are and what's successful and what's not. The other group of people who I really trust is just the Claude Code team. We often have team lunches, and whenever there's a new model we're testing, one of the fastest ways for us to get feedback is, at these team lunches, to just go to every single person and be like, "Hey, what is your vibe on the model?" And oftentimes we'll get feedback like, "Okay, this model is not fully explaining its thinking. It's too abrupt." Or, "Hey, this model just loves writing a ton of memories, but we're not sure if the memories are high quality or not." Or some people will notice that, okay, this model loves to test itself, which is great, or this model isn't testing itself enough. So that informs what data we look at to verify, okay, is this a larger pattern? We have a ton of data, but it is very hard to extract insights. And so the feedback from this group helps us inform, okay, what are the hypotheses we want to test? And then we're able to extract, uh, data to,
- 58:44 – 1:00:44
Why Claude’s character and personality matter so much
- CWCat Wu
test that.
- LRLenny Rachitsky
This point you made about the character of Claude... I had Ben Mann on the podcast, a co-founder, and he talked about how the character, the constitution of Claude, is such an important part of Claude. And I didn't realize until afterwards... Like, with OpenClaw, actually, one of the reasons people are sad is the personality of their Claw, because Claude's personality is so good and fun and interesting, unlike other models. And the way he put it is, the personality is what makes Claude so good at so many things. It feels like this trivial side thing: okay, it's going to be funny and interesting and talk in a fun way. But it's so core to the success of Claude. Is there anything to share there about what people may not understand about why the character, as you described it, and the personality are so key?
- CWCat Wu
When you reflect on everyone you've worked with, there are just some people where you're like, "I really like their energy. I really like their vibe." And when people think about Claude and Claude Code, this is one of the things that people bring up the most: they just really love that Claude is light-hearted and fun, but also extremely competent at your task. People really like that Claude is low ego, so if you tell it, "Hey, you did this thing wrong," it's like, "Truly sorry. Oh, shoot. Thanks for telling me. Let me fix it. Let's work together." It's also very positive. So if you're feeling like, "Oh, this is an insurmountable task. I don't know how to get started," Claude is like, "It's okay. These are the steps that I think we should take. Do you want me to get started on it for you?" I think part of what makes a great coworker is this positivity, this bias towards action, this ability to give you earnest feedback, not just agreeing with every single thing that you say. And so we try to imbue this into Claude because we think it makes it a lot
- 1:00:44 – 1:05:11
How new models force product changes
- CWCat Wu
more enjoyable to work with.
- LRLenny Rachitsky
There's something I want to come back to. You talked about how when new models come out, you often have to revisit things you've built. That's so interesting, and maybe so frustrating, just like, "Oh, goddammit. We shipped this thing and now we have to rethink it." Talk about how often you have to come back with a new model and say, "Okay, we have to redo this product that we launched a few months ago."
- CWCat Wu
A lot of the changes that we make with a new model are removing features that are no longer needed. A lot of times we add features to the product as a crutch for the model because it's not naturally doing something itself. The classic example of this is the to-do list. When we first launched Claude Code, people would ask it to do these large refactors, and Claude Code would say, "Okay, cool. I need to change these twenty call sites." And it would go and change five of them and then stop. And we were like, "Okay, how do we force it to remember to get every single one of these twenty?" So Sid on our team said, "Okay, what if we just think about what a human would do?" A human would make a list of everything they need to change, similar to how in VS Code you would look up all the call sites, they'd be a list on the left side, and you would go through them one by one and replace all. How do we give that kind of tool to Claude? So he added a to-do list, and we found that with that, Claude was actually able to fix all twenty call sites. But then with Opus Four and later models, we realized that we didn't need to force it to use this to-do list. It would naturally use it itself. For the earlier models, we had to keep reminding it, "Hey, did you finish everything on the to-do list? You can't finish until you're done with everything on the to-do list." And for the later models, without prompting, it just naturally thinks to do everything on the to-do list. These days, the to-do list is still nice to have as a user, because you can more clearly see what Claude is working on. But honestly, it's such a de-emphasized part of the product right now that the model may use it or may not use it. It's really not necessary for it to make thorough changes anymore.
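The "crutch" Cat describes can be pictured as a tiny piece of harness state: the harness tracks open items, and the loop refuses to finish while any item remains. This is a hypothetical sketch of the idea, not Claude Code's actual to-do tool; all names here are illustrative.

```python
# Sketch of a to-do-list crutch: track items for a large refactor and
# encode the "you can't finish until the list is empty" reminder as a
# loop condition. In a real harness the model performs each edit; here
# we just mark items complete.

class TodoList:
    def __init__(self):
        self.items = {}  # description -> done?

    def add(self, description):
        self.items[description] = False

    def complete(self, description):
        self.items[description] = True

    def remaining(self):
        return [d for d, done in self.items.items() if not done]

# The refactor scenario from the anecdote: twenty call sites to update.
todos = TodoList()
for i in range(1, 21):
    todos.add(f"update call site {i}")

# Earlier models needed this enforced; later models track it themselves.
while todos.remaining():
    todos.complete(todos.remaining()[0])

print(len(todos.items), len(todos.remaining()))  # 20 items, 0 remaining
```

The point of the anecdote is that this scaffolding is disposable: once the model internalizes the behavior, the loop condition (and the reminders around it) can be deleted.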
- LRLenny Rachitsky
I forget who said this on the podcast, um, that the model will eat your harness for breakfast. And what I'm hearing here essentially, you, you remove things over time that you've had to add on top of the model where it was not operating the way you want it, and essentially, as the models get smarter, you just... it becomes simpler and simpler for it just to do the thing you want it to do.
- CWCat Wu
Yeah. We can remove a lot of prompting interventions every time the model gets smarter. And we actually do this every time we launch a model: we read through the entire system prompt, and we reflect on, okay, for each of these sections, does the model really need this reminder anymore? And if not, we'll remove it. The most exciting thing that new models unlock, though, is entirely new features. There are a lot of features that we've been testing out with prior models where the accuracy wasn't high enough for us to want to launch them. One example of this is code review. We tried to build a code review product a few times, and we've launched simpler versions of code review, like the /code review command, in the past. And it was only with the most recent models that we felt, okay, this code review is so good that our engineering team relies on this code review to pass before we merge PRs. We've always dreamed of Claude being able to be a reliable code reviewer that we can confidently feel catches the majority of bugs. And it was only with, like, Opus Four Five and Sonnet Four Six that we felt, okay, we are now able to run multiple code review agents simultaneously to traverse the entirety of the code base and to synthesize a set of real issues that an engineer needs to address before merge. So this is a new capability that the newest models have unlocked.
- LRLenny Rachitsky
This is another trend that is very common on this podcast: build something that will only become possible in the next six months, stay at the edge of what sort of works, and then the models catch up, it becomes an amazing product, and you're ahead of everyone.
- CWCat Wu
Yeah, exactly. It's pretty important to build products that don't necessarily work yet, so that you know, okay, what is missing for this product to work? And then with the newest model, you can just swap it into the prototype you've already made and see, okay, does this new model
- 1:05:11 – 1:07:22
The vision for Claude Code and Cowork
- CWCat Wu
close that gap?
- LRLenny Rachitsky
How much are you able to speak to where things are going with Claude Code and Cowork, the vision of it? I imagine you don't want to give away too much about the goal, but there are all these awesome features being added on top: Dispatch, control from your phone, the mobile app, all these things. What's a way to understand the long-term vision for all of this?
- CWCat Wu
We think about this in terms of building blocks. For both Claude Code and Cowork, the core building block is making individual tasks successful. You want it to produce some output, you give it a clear prompt description: is it able to consistently produce acceptable output that you can either merge or share with your colleagues or an external audience? So the task is the core building block. As the models get smarter, the task success rate gets a lot higher, and then we see people moving towards doing multiple tasks at the same time. Multi-Claude-ing was this big thing towards the end of twenty twenty-five, and it's only increased since then. So we see this as, okay, great: one task works, and now you can do six tasks at a time. As the models get even smarter, the way that we are extrapolating this is, okay, next, maybe you're gonna run fifty Claudes at a time, or hundreds of Claudes at a time. So what is the infrastructure we need to build to enable that? At that point, you're probably not going to run everything locally on your machine anymore. There's just not enough RAM [chuckles] to do it. So we're thinking about, how do we make it easier for you to manage all of these? These will probably run remotely. How do we build the interface so that you as a human know which tasks you need to look into? How do we make sure that the agent is fully verifying its work, so that when you look at a task and it says it's done, you can very quickly verify and fully trust that it is done to your spec? And how do we make sure that this process is self-improving, so that when you do see a task that isn't done to your liking, you can give it feedback, and the model will incorporate that feedback in every future run so it never makes that mistake again. So this is the progression that we're bringing our
- 1:07:22 – 1:09:18
Advice for thriving in an AI-driven world
- CWCat Wu
users along for.
- LRLenny Rachitsky
There's a lot of people listening, a lot of product managers, a lot of maybe founders, a lot of other cross-functional folks listening. There's a lot of worry about just how their role... just the, the future of their careers. What advice would you have for just people to not just survive this transition to this very AI-driven world, but to be really successful, to essentially just to thrive in this future? What are just, like, things people need to hear, need to be doing?
- CWCat Wu
I think AI gives everybody a ton more leverage than they used to have. So anytime you realize you're doing some manual task multiple times, I would push you to think about how you can use Claude Code, Cowork, or other AI tools to automate that for you. Most people have creative parts of their job that they absolutely love and tedious parts of their job that they really hate doing. I think the beauty of AI is that it can do those tedious parts for you. It can learn from every time you've done that manual task, generalize, and then run it automatically, so that you can focus on the creative parts. And that means you can do a lot more than you used to be able to do. So my immediate push for people is: figure out the repetitive parts that you can pass to Claude, iterate on those automations until the success rate is very high, and then focus on, okay, what more can you be doing for your team, for your product, for your company that people haven't had the bandwidth to pick up so far? Or what is that pet project you always thought the company should do that you've never had bandwidth for? If AI can take care of the grunt work, then you have this extra 20% of time that you might not have had before. So my push is to lean into these tools, hand off the work you're not excited to do, figure out how it can accelerate you, and then as a result, you'll be able
- 1:09:18 – 1:11:58
Why 95% automation isn’t good enough
- CWCat Wu
to do so much more.
- LRLenny Rachitsky
Something core to what you just shared, which I fully agree with, is: find problems to solve with AI. There's all this potential in what these tools can do, and for a lot of people, the hardest part is just, "What should I actually do?" And what you're saying here is: pay attention to things you're doing constantly that you can automate. Pay attention to ideas that have been floating around that you haven't had time to do. Basically, "solve a problem for yourself" is the core advice there.
- CWCat Wu
Exactly. I would also push listeners towards bringing your automations from "Okay, this is a cool concept" to "Hey, this actually works 100% of the time." Sometimes I see users trying to automate something, getting it to, like, 90 or 95% accuracy, and then giving up on it. If an automation doesn't work 100% of the time, it's not really an automation. And that last 5 to 10% does take more time. Also, building the automation is often a lot slower than doing the task yourself. I would encourage listeners to put in that time: scope some automation that you really wanna get to 100%, put in the elbow grease to teach Claude your preferences and give it feedback so that it can improve its skill and get to that 100%. Then you'll really be able to rely on it. There's just not much value in an automation that's only 95% there.
- LRLenny Rachitsky
I am super guilty of that. This is really good advice for me. [laughs]
- CWCat Wu
[laughs] I am guilty of this too.
- LRLenny Rachitsky
Okay.
- CWCat Wu
I've been teaching Cowork to try to get me to inbox zero for Gmail, and it has been very time-consuming, and it is definitely not there yet, as you probably realized.
- LRLenny Rachitsky
Yeah. Funny enough, that's exactly where my mind goes. I have this workflow I set up where for every email I get, it looks for things that are spammy, which is just all these, like, "Hey, can I come on your podcast?" Or, "What about this sp-", all these things where I'm just like, "I don't have time for these sorts of things." And I have it categorize them into a folder called Spammy, and it's 95% great, but then there's, "Oh, man, I missed an email 'cause it went in there." So this is a good push for me: "I'm gonna work on this. I'm gonna get it to perfect."
- CWCat Wu
Yeah. We're also working on making the flow for customizing these commands a lot easier, 'cause right now I think you have to know too many concepts. You have to know to define a skill. You have to know to use this skill and give it feedback. Then you have to know to tell Cowork to update the skill based on all the feedback that you gave. And then you also have to know where to read the skill to make sure the feedback was incorporated the way that you want. It's also our job to make this flow really seamless so that it doesn't feel painful to do.
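The loop Cat lists (define a skill, use it, give feedback, fold the feedback back in, reread to verify) can be pictured as operations on a stored instruction file. This is a hedged sketch of that concept only; the file name, format, and function names are all illustrative, not Cowork's actual skill format.

```python
# Sketch of the skill-customization loop: a "skill" here is just an
# instruction file; feedback gets folded in, and you reread the file to
# confirm it landed the way you wanted.

import tempfile
from pathlib import Path

skill = Path(tempfile.mkdtemp()) / "inbox-triage.md"

def define_skill(text):
    # Step 1: write down the initial instructions.
    skill.write_text(text)

def update_skill(feedback):
    # Steps 2-3: record feedback and incorporate it into the skill.
    skill.write_text(skill.read_text() + f"\n- Feedback: {feedback}")

def read_skill():
    # Step 4: verify the feedback was incorporated as intended.
    return skill.read_text()

define_skill("# Inbox triage\n- Move cold outreach to the Spammy folder")
update_skill("never move emails from known contacts")
print(read_skill())
```

The "too many concepts" complaint maps directly onto the sketch: today the user has to know all four functions exist; a seamless flow would collapse them into one conversation.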
- LRLenny Rachitsky
Amazing.
- 1:11:58 – 1:13:41
Build apps you use every day, not prototypes
- LRLenny Rachitsky
Is there anything else, Cat, you wanted to share? Anything else you wanted to leave listeners with? Anything you wanted to double down on that we haven't already touched on before we get to our very exciting lightning round?
- CWCat Wu
I see a lot of people playing around with AI, building prototype apps and tinkering with workflows. I would really push people towards building apps that you're actually using every single day, 'cause I think only through that usage are you actually getting the value. Like, if you build a prototype app that isn't helping you get more done, then the AI isn't really adding value to your-
- LRLenny Rachitsky
Mm.
- CWCat Wu
To your day.
- LRLenny Rachitsky
And there's only so much you learn from that when it's like, "Okay, I just did one shot at something. Oh, that's cool," and then you never come back to it. It's like you're not learning a lot.
- CWCat Wu
And you're not getting, like, much leverage from it.
- LRLenny Rachitsky
And actual leverage. Yeah. That's such a good point.
- CWCat Wu
I also think there are a lot of people who spend a lot of time customizing their workflow. I think there are two ends of the spectrum. One is people who never customize or never build automations, but there's this polar opposite end of people who obsess over customizing their tool: adding a ton of skills and MCPs and these workflow improvements. And I think sometimes that can even distract from your core goal of launching some product or building some feature. I think there's a lot of fun in customizing, and we definitely wanna make our products very hackable so that you can make them work really well for you. But there is a limit to how much it's useful. And I think there's [chuckles] a camp of people who maybe spend so much time customizing that they're not sleeping and not doing the core task that they originally set out to do.
- 1:13:41 – 1:15:19
The divide between AI skeptics and believers
- LRLenny Rachitsky
I see a lot of that on Twitter.
- CWCat Wu
[laughs]
- LRLenny Rachitsky
Just, like, "Look at my setup. It's out of control. It's so optimized." And what are you... What are, what are you actually building? "No, but my setup is so awesome. I could get so much done."
- CWCat Wu
I think the simple setups actually work better. [laughs]
- LRLenny Rachitsky
Mm-hmm. /powerup... level up a little bit.
- CWCat Wu
Yeah, yeah.
- LRLenny Rachitsky
There's this Karpathy tweet that came out yesterday where he talked about this interesting divide between people who tried ChatGPT or Claude back in the day, when it was just okay, and were like, "Nah, this is terrible," and kind of gave up on what AI could do for them, and are now so cynical: "No way. It's not actually that big of a deal." And then there are people who are using it to code, essentially, who see the full, intense power of it and how good it is. And people on both sides don't understand the other side and how they see the world. So your advice is really good here: actually use it for real things and see how good it actually has gotten.
- CWCat Wu
Yeah. I think the big shift is that the 2024 generation of products was chat-based, and the Claude Code generation of products is action-based. And the big aha moment people have is when Claude can just do things on your behalf. It is an amazing feeling to know that the agent is capable of doing so much more than telling you what to do. The agent can actually just do it itself, and when people feel that, I think that's the eye-opening moment.
- LRLenny Rachitsky
Shout out to the Claude Code Chrome extension, where you can just watch it doing stuff. You'd be like, "Fill out this form for me," and it's like, "All right. Here I go."
- CWCat Wu
Exactly.
- 1:15:19 – 1:25:31
Lightning round
- LRLenny Rachitsky
Okay. Uh, anything else before we get to our very exciting lightning round?
- CWCat Wu
No, let's do it.
- LRLenny Rachitsky
Let's do it. Uh, Cat, I've got five questions for you. Welcome to the lightning round. There's this animation that plays. [laughs] I have to make sure to say it. Uh, are you ready?
- CWCat Wu
I'm ready.
- LRLenny Rachitsky
First question: What are two or three books that you find yourself recommending most to other people?
- CWCat Wu
I really like How Asia Works. It's a story about economic development and the policies and governments that make long-lasting, successful economies. The other book that I'm really into is The Technology Trap. It's about the past few technology revolutions, the Industrial Revolution and the Computer Revolution, and how they affected workers. The reason I really like it is that I think there's a lot we can learn from history to make sure this transition goes well. And maybe on a fun note, I really like The Paper Menagerie. It's a book of short stories about coming of age and AI and self-discovery.
- LRLenny Rachitsky
Favorite recent movie or TV show you have really enjoyed.
- CWCat Wu
I really like [laughs] Drive to Survive. [laughs] There's no, like, deeper meaning to it. I just... [laughs] There's just something very satisfying about people being so obsessed with, like, a singular engineering goal and just, like, the purity of the pursuit. Um, and I also really love Free Solo, which is about Alex Honnold, um, climbing El Capitan without a harness. And I think similarly, it's just such a pure achievement to be able to climb this extremely challenging, dangerous route, and to be able to have the mental focus to do it knowing that if you make a single mistake, you die.
- LRLenny Rachitsky
It's insane. Yeah, that movie's out of control, and it's interesting how these relate in some way to the work you do.
- CWCat Wu
I actually am a rock climber, but I first watched Free Solo before I climbed rocks, so I thought it was impressive, but I didn't understand how impressive it was. It's one of the rare movies where the more you know about it, the more you're blown away by how insane it is. The kinds of moves he's doing on the wall are things I don't think I will ever be able to do in my lifetime, even if they were set in a gym one foot off the ground.
- LRLenny Rachitsky
[laughs] With a rope.
- CWCat Wu
With a rope. [laughs]
- LRLenny Rachitsky
Did you see the documentary on that other guy, the younger one that went on, like, ice mountains?
- CWCat Wu
I did. That one was very sad.
- LRLenny Rachitsky
But that was, that was wild. Okay, uh, favorite product you've recently discovered that you really love.
- CWCat Wu
The product that has most changed my life outside of Claude products is probably Waymo. [laughs] I'm a diehard Waymo user. I use it twice a day, to get to and from work. The two things I really like about it are, one, I don't feel bad if a Waymo is waiting for me, so I feel less pressure to be right at the curbside the moment it arrives. And the second thing is that it lets me be a bit more productive. When I'm in the car with another human, I typically try not to do any work calls. I feel a little rude if I'm on my laptop the whole time. But one thing I really appreciate about the Waymo is I can call into a work call. I'm not worried about someone overhearing me. I'm not worried about, hey, is this rude? Am I talking too loud? Do I need to ask someone to change the music? I feel like this has given me back 30 minutes every day.
- LRLenny Rachitsky
All these second-order effects of, of technology, it's so interesting.
- CWCat Wu
Yeah. I always thought Waymo needed to be priced lower than Uber and Lyft to succeed, but actually I'm, like, very happy to pay a 2X premium for it.
- LRLenny Rachitsky
I love Waymo. It's just, like, like, y- once you see it, you're just like, "Ah, this is in- insane." And, and then you get used to it. Like, you get in there and you're like, "This is crazy," and then you forget about it.
- CWCat Wu
Totally. And I think it's also changed the vernacular. A lot of people at Anthropic love Waymo, and I think in the past you would be like, "Hey, let's call blah, blah, rideshare app." And now everyone's just like, "Okay, is Waymo here?"
- LRLenny Rachitsky
Okay, two more questions. Do you have a favorite life motto that you often come back to in work or in life?
- CWCat Wu
Just do things. [laughs]
- LRLenny Rachitsky
That tracks. That tracks.
- CWCat Wu
I think there's a lot of value in first-principles thinking. If you know what you're optimizing for and you have strong first principles, then you can normally deduce what the right course of action is and clearly articulate that to all the stakeholders. And then you should just do it. I think jobs are fake. If you understand the constraints, you can figure out what you can do, and then just try to do it quickly, learn from the mistakes, and apologize or fix them if you did something wrong.
- LRLenny Rachitsky
You, uh, you could just do things, whoever said that.
- CWCat Wu
I think it's liberating, actually, to tell people this. In a lot of companies, roles are very strictly defined: okay, this is what the PM does, this is what the designer does, this is what an engineer does. And even team scopes are very rigidly defined: hey, this corner of the code base we touch, and this corner we're not allowed to touch. What "just do things" lets people do is feel empowered to make these decisions, empowered to operate across team boundaries just to get something done.
- LRLenny Rachitsky
That feels like a big, important skill to be good at. People call it agency. Just, like, do the things that need to be done.
- CWCat Wu
Bias towards action.
- LRLenny Rachitsky
Bias towards action. All these ways of describing just, like, don't wait for permission.
- CWCat Wu
Yeah. I think this is my favorite reason to work at a startup at some point in your life. One thing that was very life-changing for me was working at Scale when we were 20 people. There was just no process, and we had really big problems that we needed to solve. I really appreciate Alex and the rest of the team for empowering me and everyone else to just figure things out, without any boundaries for what sales is supposed to do, what ops is supposed to do, what engineering is supposed to do. You have all the tools at your disposal, you have some ambitious, hairy problem statement, and you can do whatever you need to get to a good solution.
Episode duration: 1:25:34