Nikhil Kamath
The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath | People by WTF
EVERY SPOKEN WORD
65 min read · 13,164 words

- 0:00 – 6:13
Introduction
- Nikhil Kamath
[upbeat music] So I started playing with Claude. It's getting to that point where sometimes it surprises me by how much it knows me. I don't know if that makes sense.
- Dario Amodei
It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen. It's as if this tsunami is coming at us, and, you know, it's so close, we can see it on the horizon, and yet people are coming up with these explanations for, "Oh, it's not actually a tsunami. It's just a trick of the light." Like, there hasn't been a public awareness of the risk.
- Nikhil Kamath
[upbeat music] What is India's role in all this?
- Dario Amodei
Many other companies come here as what they are, consumer companies, and they see India as a market, right? A place to obtain consumers. We actually see things a little bit differently.
- Nikhil Kamath
[upbeat music] What did you do before founding Anthropic?
- Dario Amodei
Yeah. So I was actually originally a biologist. I did my undergrad in physics and my PhD in biophysics, and I wanted to understand biological systems so that I could cure disease. The thing I noticed about studying biology was its incredible complexity. For example, if you look at the protein mass spec work that I did, trying to find protein biomarkers, it's just really incredible how much complexity there is. You have a given protein, and the RNA gets spliced in a whole bunch of different ways depending on where it is in the cell. Then it gets post-translationally modified, phosphorylated, complexed with a whole bunch of other proteins. And I was starting to despair that it was too complicated for humans to understand. Then, as I was doing this work on biology, I noticed a lot of the early work around AlexNet, one of the first modern deep neural nets, almost fifteen years ago now. And I said, "Wow, AI is actually starting to work. It has some things in common with how the human brain works, but it has the potential to be larger, scale better, and learn tasks like biology. Maybe this is ultimately going to be the solution to solving our problems of biology." So I went to work with Andrew Ng at Baidu, then I was at Google for a year. Then I joined OpenAI a few months after it started and basically led all of research there for several years. But eventually, myself and a few of the other employees just had our own vision for how we wanted to make AI and what we wanted the company to stand for, and so we went off and founded Anthropic.
- Nikhil Kamath
How was it? Was it like a fork in how OpenAI was thinking versus what Anthropic eventually did?
- Dario Amodei
Yeah, I would say my conviction and the conviction of my co-founders when we founded Anthropic: there were two of them. One, we were starting to convince OpenAI of. The other, I didn't feel that we were convincing them of. The first was the conviction in the scaling laws, the idea that if you scale up models, give them more data, more compute (again, there are a few modifications like RL, but not really very much; it's pretty close to pure scaling), you find incredible increases in performance. And I was finding that in twenty nineteen with GPT-2, when we just saw the first glimmers of the scaling laws. Of course, there were a lot of folks, inside and outside, who didn't believe it at all, and we really made the case to leadership: "This is important. This is going to be a big deal." And I think they were starting to believe us and ultimately went in that direction. The second conviction I had was: look, if these models are going to be general cognitive agents, general cognitive tools that match the capability of the human brain, we had better get this right. The economic implications are going to be enormous, the geopolitical implications are going to be enormous, the safety implications are going to be enormous. It's going to transform how the world works, and so we need to do it in the right way. And despite a lot of language, verbiage, about doing it in the right way, I was, for a variety of reasons, just not convinced that at the institution I was at, there was a real and serious conviction to do it in the right way.
And so my view is always: don't argue with someone else's vision. Don't try to get someone to do things the way you want to. If you have a strong vision and you share that vision with a few other people, you should just go off and do your own thing, and then you're responsible for your own mistakes. You don't have to answer for anyone else's. Maybe your vision works out, maybe it doesn't, but at least it's yours.
- Nikhil Kamath
Didn't OpenAI believe in scaling laws? 'Cause they went down the same path themselves, too, right?
- Dario Amodei
Well, the... Yeah, we succeeded-
- Nikhil Kamath
Can you explain what scaling laws are in very simple terms?
- Dario Amodei
... It's like if you want a chemical reaction to produce
- 6:13 – 13:27
Scaling laws explained simply
- Dario Amodei
oxygen or start a fire or something like that, you need different ingredients. And if you don't have enough of one ingredient, the reaction stops. But if you put the ingredients together in proportion, you get your explosion or your fire or whatever. For AI, those ingredients are data, compute, and the size of the AI model. So the scaling laws just tell you that if you put in the ingredients to the chemical reaction, the ingredients of data, compute, and model size, what you get out is intelligence. Intelligence is the product of the chemical reaction.
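The "chemical reaction" picture above has a standard quantitative form in the scaling-law literature: loss falls roughly as a power law in each ingredient. The sketch below is purely illustrative; the coefficient and exponent are made-up numbers, not any published fit.

```python
# Illustrative sketch of a scaling law: loss falls as a power law in scale.
# The constants a and b below are hypothetical, chosen only for illustration.

def power_law_loss(x: float, a: float = 10.0, b: float = 0.3) -> float:
    """Hypothetical loss as a function of one scaling ingredient x
    (e.g., dataset size, compute, or parameter count): loss = a * x**(-b)."""
    return a * x ** (-b)

def loss_ratio_per_decade(b: float = 0.3) -> float:
    """Every 10x increase in scale multiplies the loss by 10**(-b),
    a constant factor, which is why scaling-law plots appear as
    straight lines on log-log axes."""
    return 10.0 ** (-b)

if __name__ == "__main__":
    for n in (1e6, 1e7, 1e8, 1e9):
        print(f"scale {n:.0e} -> loss {power_law_loss(n):.3f}")
```

On this toy curve, each tenfold increase in scale cuts loss by the same constant factor; the claim in the conversation is that this smooth improvement holds across many orders of magnitude when data, compute, and model size grow together.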
- Nikhil Kamath
And what is intelligence?
- Dario Amodei
Intelligence as measured by the ability to translate language, or the ability to write code, or the ability to answer questions correctly about a story. Basically, any cognitive task we can think of. Any task that exists in text or in images, any task that you can do on a computer.
- Nikhil Kamath
How is the intelligence of today, as you are describing it, different from what a computer could do, like, five years ago?
- Dario Amodei
Yeah. Well, for example, five years ago you could not ask a computer a question and have it write a one-page essay on that question. You could not ask a computer to implement a feature in code and have it implement that feature. None of those things were possible. You could not generate an image. You could not generate a video. You could not analyze a video. I could get one of those videos of, say, a monkey juggling and ask, "What's going on in this video? How many times did the ball change hands?" And right now, you could get Claude or another AI model to give you an answer on that. Five years ago, none of those things were possible.
- Nikhil Kamath
Well, I'm trying to figure out: has the definition of intelligence changed, per se?
- Dario Amodei
Well, what I would say is, five years ago you could Google, and there might be a website that would tell you a little bit about this, right? But you're just looking up some text that exists on the web. Maybe it's not about how to get a monkey to juggle; maybe it's about how to get a seal to juggle. It's not quite exactly the same thing, because maybe exactly the same thing doesn't exist. But as we see when people use these models, you can ask and actually get an intelligent response. You can ask a specific question and have the model write one page about it. Or you can give it a hypothetical: "What if I had the monkey juggle clubs instead of balls?" Or, "What if I did this thing?" And that information doesn't exist anywhere, whereas the model is able to think for itself and come up with an answer on its own. So it's something totally new. It's not just matching some of the text that exists on the internet.
- Nikhil Kamath
Fair. So, you know, this is more like a conversation, so feel free to talk about what you want to talk about, not necessarily related to the questions that I'm asking. You look very animated when you speak. Did you ever teach?
- Dario Amodei
You know, I was originally an academic. I thought that I might become a professor. I got my PhD, and I went all the way to being a postdoc at Stanford Medical School; I was aiming to become a professor. But, as I mentioned, I got interested in AI, and working in AI required a lot of computational resources, and that was mostly happening in industry. So that took me off the academic path and into industry, and of course, ultimately, through several steps, led me to start a company. But sometimes I think I'm still a professor at heart.
- Nikhil Kamath
At this point, Dario, if AI is the most relevant thing in the world, if the world is realigning in a way and AI is determining who gets what and who doesn't, I'm talking about industries, then you today are probably the most relevant person in the world if Anthropic, in this last cycle, in this minute, is sitting on top of this pile. For somebody who was going down the path of being a teacher, to have arrived where you are today: are you best equipped for where you are today?
- Dario Amodei
Well, first, I would say a couple of things. I think there are a lot of folks who are relevant in different ways, right? Even within industry, there are the different layers of the stack. There are the folks who make chips. There are the folks even earlier, who make semiconductor manufacturing equipment. There are the folks who make models, like us, and then there are other players who make models. There are the folks who make applications on top of the models. And then there are a bunch of other folks who have a say: governments, civil society. So my hope isn't that there's just one tiny set of people that's relevant. I think we're trying to broaden the set of people who are relevant and turn it into a broader conversation. But at the same time, your question is a fair one, and one way I could interpret it is that there's a certain randomness to how a few people end up leading these companies that grow so fast, and that, it seems, will power so much of the economy in the near future. I've said openly, publicly, not for the first time, that I'm at least somewhat uncomfortable with the amount of concentration of power that's happening here, almost overnight, almost by accident. And we think about that in a bunch of ways. One is that we have an unusual governance structure, something called the Long-Term Benefit Trust. It's a body that ultimately appoints the majority of the board members for Anthropic and is made up of financially disinterested individuals. So that's some check on what one single person is doing. And then, as always, I think the government should play some role here. I've been an advocate of proactive, though sensible, regulation of the technology, regulation that doesn't slow it down. Because I think the people should have a say. Governments and the people who elect them should have a say in how this goes. So I actually think of a lot of what I'm trying to do as trying to preserve a balance of power, against the natural grain of this technology.
- 13:27 – 22:44
Trust, humility, and corporate motives
- Nikhil Kamath
For someone like me who's sitting on the outside and doesn't have a bone in this competition: when I watch OpenAI talk about how they were a not-for-profit company, or you projecting humility in the conversation we're having right now, or the American companies competing with the Chinese companies that are coming up, this projection of humility, where it's for the larger good and not, as I view the world, about companies with shareholders, with investment and revenues, seeking profit, is this par for the course? Is this something you have to do?
- Dario Amodei
So I would put it in the following way. The philosophy of Anthropic from the beginning has been that we try not to make too many promises, and we try to keep the ones that we make. We set ourselves up as a for-profit but public benefit corporation with this LTBT governance, and we've maintained that. We've said that our goal is to stay on the frontier of the technology but to work on the safety and security aspects of it. We've pioneered the science of interpretability. We've pioneered the science of alignment. I don't know if you saw, but we recently released a constitution for Claude, the ability to align models in line with a constitution. And we've done a bunch of policy advocacy and warning about risks, right? Warning about risks is not in our commercial interest. People can come up with conspiracy theories, but I will tell you, saying that the models we build could be dangerous, whatever people might say, that's not [chuckles] an effective marketing strategy, and that's not the reason we do it. And we speak up when we disagree, even with the US administration, on policy matters. We've said that there should be regulation of AI when all the other companies and the administration have said there shouldn't be. Regulation of AI holds us back commercially as a company, even though I think it's the right thing to do, and it's difficult to go against the government and the other companies and say this. We're really sticking our neck out.
So we've taken a number of actions that I see as really putting our money where our mouth is. I can't speak for the other companies. It's quite possible that some people say these things and don't really mean them, but I wouldn't look at what people say. I would look at what people do.
- Nikhil Kamath
If what you're saying gets the government to act via regulation, then as the incumbent leaders in this space, you get some kind of regulatory capture, where it becomes harder for the new people coming in as well, right?
- Dario Amodei
I don't agree with that at all. The regulation we've advocated for, for example SB fifty-three in California, exempted everyone who makes under five hundred million dollars a year in revenue, right? SB fifty-three was a transparency law, which basically requires companies to show the safety and security tests that they've run, and it exempts all companies under five hundred million in revenue. So it really only applies to Anthropic and three or four other companies, the companies that have the resources. And everything we've advocated for here, not just SB fifty-three but all the proposals we've made, the ones in the past and the ones we plan to make in the future, has this character. We're constraining ourselves and a very small number of additional companies. People who say that need to look at the actual content of what we're proposing, because it doesn't match that idea at all.
- Nikhil Kamath
Fair. I read your essays, "Machines of Loving Grace" and "The Adolescence of Technology," and you seem to have had almost a one-eighty-degree shift in perspective, from optimism to skepticism, over, like, two years, from 2024 to 2026. Is there one moment in the last two years that changed this for you? Did you see something change?
- Dario Amodei
Yeah, I actually wouldn't agree with the question. I don't think I've had a shift in perspective.
- Nikhil Kamath
Mm.
- Dario Amodei
I think the positive side and the negative side are always something that I've held in my head, and if you look at the history of the things that I've said, I've been talking about risks for a very long time. I've been talking about benefits for a very long time. It turns out that it actually takes me a while to write one of these essays. You know, both-
- Nikhil Kamath
They're really large as well, they're big essays.
- Dario Amodei
They're like three pages.
- Nikhil Kamath
They're like mini books.
- Dario Amodei
Both of these, it's taken me, like... For each one, I spent about a year having a kind of vague vision of the essay in my head and trying to write it, but not fully succeeding. And then, in either case, I had to be on vacation or somewhere where I could think, where the day-to-day business of running the company didn't occupy me. And then I was finally able to write the essay. All of that is to say, I started thinking about what an adolescence of technology would be almost the instant I finished Machines of Loving Grace, because I was like, "I want to inspire people with the good vision, but I also want to warn people about what can go wrong." And so it just took me a year to write it. But really, both visions were in my head, and I think they're both possible. They're two different visions of the future, and obviously I want to get the Machines of Loving Grace one, right? I want to solve all the problems and have the positive vision. But it's not a shift in perspective. It's me just finding the time to write the light and then the dark.
- Nikhil Kamath
But have you had a change of perspective?
- Dario Amodei
You know, I would say overall I'm about where I was before. I've not gotten more positive or more negative. There may be some places where I've gotten more optimistic or things have gone better than expected.
- Nikhil Kamath
Mm-hmm.
- Dario Amodei
There may be places where I'm more pessimistic and where things have gone worse than expected, but on average, they sort of cancel each other out. I would say I feel very good about how things have gone with areas like interpretability. Interpretability is the science of seeing inside these neural nets, as we would scan the human brain with an MRI or a neural probe. I've been amazed at what we've been able to find. We've been able to find neurons that correspond to very specific concepts, neural circuits that keep track of how to do rhymes in poetry, and so we're starting to understand what these models do. We don't design them; we just train them in this kind of emergent way, as you would grow a snowflake, but now we're starting to be able to look inside and understand them. I'm also very encouraged by some of the work on alignment and constitutions, making sure that models behave in the way that we want and expect them to. I think that's going pretty well; I feel pretty positive about that. Where I've been a bit disappointed, or felt a bit more negative, is on some of the things that are more in the public awareness and the actions of wider society. It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen. It's as if this tsunami is coming at us, and it's so close, we can see it on the horizon, and yet people are coming up with these explanations for, "Oh, it's not actually a tsunami. That's just a trick of the light."
And I think along with that, there hasn't been a public awareness of the risks, and therefore governments haven't acted to address them. There's even an ideology that we should just try to accelerate as fast as possible. I understand the benefits of the technology, I wrote Machines of Loving Grace, but I think there hasn't been an appropriate realization of the risks of the technology, and there certainly hasn't been action. So I would say that the technical work on controlling the AI systems has gone maybe a little better than I expected, and the societal awareness has gone maybe a little worse than I expected. So I'm about where I was [chuckles] a few years ago.
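One very simplified way to picture the interpretability work described above, finding a unit whose activation tracks a concept, is a difference-of-means probe over recorded activations. This is a toy sketch on fabricated numbers, not Anthropic's actual method; all names and data below are invented.

```python
# Toy interpretability probe: score each unit by how differently it
# activates on inputs where a concept is present vs. absent.
# Activations and unit indices here are fabricated for illustration only.

def concept_scores(activations, concept_labels):
    """For each unit, return mean activation on concept-present inputs
    minus mean activation on concept-absent inputs."""
    n_units = len(activations[0])
    scores = []
    for u in range(n_units):
        present = [a[u] for a, y in zip(activations, concept_labels) if y]
        absent = [a[u] for a, y in zip(activations, concept_labels) if not y]
        scores.append(sum(present) / len(present) - sum(absent) / len(absent))
    return scores

# Four fake inputs, three fake units; unit 1 fires when the concept
# (say, "this line rhymes") is present.
acts = [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1], [0.1, 0.1, 0.2], [0.3, 0.0, 0.1]]
labels = [True, True, False, False]
scores = concept_scores(acts, labels)
best_unit = max(range(len(scores)), key=scores.__getitem__)
# best_unit is the candidate "concept neuron" for this toy data
```

Real interpretability work operates on millions of units and uses far more careful tools (sparse autoencoders, causal interventions), but the core question is the same: which internal features track which concepts.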
- Nikhil Kamath
So in my own journey, I'm...
- 22:44 – 31:03
Using Claude personally, AI knowing you
- Nikhil Kamath
You know, when something sounds complicated... I'm not a programmer; I don't have a background in coding. So I used a bunch of tools for things like research and two-way conversation, but I never tried to figure out if I could code using your tool, for example. Recently, I hired a developer just to push me to sit for a couple of hours a day and teach me how to start becoming more familiar with it, [clears throat] largely because of something like FOMO, the fear of missing out on how the world-
- Dario Amodei
Yes
- Nikhil Kamath
... is changing. So I started playing with Claude. I used the connectors to connect my Google Drive, mail, and calendar, and a bunch of those things. I started using CoWork, and then I started using Claude Code to write simple programs around the industry that I'm in, which is financial services, basically to research stock markets and stuff.
- Dario Amodei
We even have an optimized Claude for financial services. I don't know if you've tried that-
- Nikhil Kamath
No
- Dario Amodei
... but we even have that.
- Nikhil Kamath
No. And then I went into Claude Bot, which is now OpenClaude. I think Claude Bot became something else, and now it's OpenClaude. And I set it up on a Mac Mini and connected it to a Telegram account, and now I chat with it, and I try to move files from A to B, work on a server remotely. It's getting to that point where... I'm not talking about OpenClaude, but even Claude, with all the connectors, sometimes it surprises me by how much it knows me. I don't know if that makes sense.
- Dario Amodei
Yeah, you know, one of my co-founders was writing this diary with his thoughts and his fears. And he fed it into Claude and asked Claude to comment on it, and Claude said, "Here are some other fears you might have that you haven't written down." And Claude ended up being mostly right about those. So it really gave this eerie sense that the model knows you super well, that from a relatively small amount of information it can learn a lot about you and come to know you fairly well. And like most things with this technology, right, we talked about the machines of loving grace and the adolescence of technology: on one hand, something that knows you really well can be a sort of angel on your shoulder that helps to guide your life and make you a better version of yourself, and that's the version we can aim for. Of course, something that knows you really well can also use what it knows about you to exploit you or manipulate you on behalf of some agenda, or sell your data to someone else. This is one reason we just don't like the idea of using ads, right? Because then you're not paying for the product; you're the product. And in this case, the product would be this model that knows you super well and could use that in all kinds of nefarious ways. So we need to make sure we take the positive road here and not the negative road.
- Nikhil Kamath
With Claude, I need to use the connectors to give it context about my life. With Google, for example, it already has the context of my life, because I use their worksheets and their email and their drive and their chat and everything like that. For Anthropic, long term, will you also have to own the ecosystem?
- Dario Amodei
Yeah, I mean, you know-
- Nikhil Kamath
Do you have to build mail and chat and-
- Dario Amodei
Yeah. You know, I don't think we need to build all of those things. My thought would be, it's going to be a mixture of things we make ourselves and integrating into others, right? We can integrate Claude into Google Docs. We can integrate Claude into Google Sheets; we have external connectors there. We're starting to do that with CoWork. Same for Microsoft Office, same for other tools. So I think we do whatever is easiest and fastest. We integrate into the existing tools. Now, it might turn out at some point that the existing tools aren't enough, and we have a different vision. We might want to slice things differently, right? Maybe traditional email doesn't make sense, or traditional spreadsheets don't make sense, given what you can do with AI. So I don't exclude that we could chop up products in a different way, but we're happy to use the ecosystem that exists and work with anyone else. In many ways, we're a platform company. We allow many people to build on us, even though we sometimes also build things ourselves.
- Nikhil Kamath
This is a slight digression, but I think the one thing that you're missing, that your peer group is also missing, is that in society today, people inherently distrust anybody who claims to be doing good or trying to do the right thing. So when you and your peers are out saying... I heard you and Demis speak at Davos. I was in the room when you were talking about how Dario, Demis, and a bunch of other people have to come together and prevent things from changing too quickly, like you need to meter it to a certain extent. When a person who is not in your world, in society, on social media, hears a few people speak in that manner, you're doing it in a way that creates more distrust than trust, because nobody on social media believes that somebody wants to do the right thing or do good. So it might be counterintuitive, but I think it needs a change of strategy. If you were to be more capitalistic about this and own up to the fact that you have shareholders and you seek a profit, but this will help you win, maybe it would work better. Just a thought.
- Dario Amodei
No, I don't really agree with that. I would again go back to the idea that you need to judge us by the actions that we take. I think the company has taken a number of actions over its time that show it's really serious about these commitments. Back in 2022, we had an early version of Claude, Claude One. This was before ChatGPT, and we chose not to release it, because we were worried that it would kick off an arms race and not give us enough time to build these systems safely. It was kind of a one-time overhang: we could see the power of the models, a couple of other companies could see the power of the models, and we decided not to do it, and that's public, that's well documented. And then we waited until someone else did it, and then we were like: okay, the arms race has kicked off, so now we can release our model. But probably the world gained a few months. Now, that was very commercially expensive. We probably ceded the lead on consumer AI because of that. We've advocated on chip policy in ways that have made some of the chip companies, who are our suppliers, very angry at us. We've voiced our disagreement with the administration on AI policy and AI regulation on some matters. Anyone who thinks we benefit from being the only ones to do that, well, it's really hard to come up with a picture where that's the case. You can look at any one of these and say okay, fine, but you put enough of them together and, I don't know.
I just ask you to judge us by our actions.
- NKNikhil Kamath
Is- Dario, isn't this a bit like rich people saying capitalism is bad?
- 31:03 – 37:05
Rich people criticizing their own system
- DADario Amodei
Rich people saying capitalism is bad.
- NKNikhil Kamath
If rich people believed capitalism were truly bad or the income inequality is such a big problem, the simplest thing would be to do... The simplest thing to do would be to stop accumulating wealth, further wealth, and then nudge their friends to do the same.
- DADario Amodei
But, but I'm not saying AI is bad, right? We, we just talked about, um, you know, this, this, there's two sides of it. Um, my view isn't, my view isn't that AI is bad. That's not my view at all. My, my, my view is that, is that, you know, the market will deliver a l- a lot of really great things about AI, that it's good to build AI, but that there are dangers of AI, and that we need to steer AI in the right direction. You know, we're, we're steering this car, we're steering it towards a good place, but also there are trees, there are potholes, and so what we need to do is we need to steer away from the trees and the potholes. We might need to occasionally slow down a bit, probably temporarily, um, you know, i- i- kind of in order to, um, in order to, uh, you know, make sure that we steer in the right direction. You know, that, that isn't like... You know, the analogy wouldn't be a rich person saying capitalism is bad. It would be like if a rich person said, "Capitalism is a force for good, but the economy, it, it needs to be leavened, it needs to be moderated," right? You know, we need to deal with problems like pollution. We need to deal with problems like inequality, and, and then capitalism can be good. If we don't deal with those things, then capitalism might be bad. Um, uh, and, and so that is more analogous to the, to the position that I have here.
- NKNikhil Kamath
The concept of consciousness, where is that going, and what does an AI think it is? If AI truly were to... If an AI were to question itself, would you, would it, would-- Do you think it thinks it's consciousness? It has consciousness?
- DADario Amodei
So, you know, this is one of these mysterious questions that we really don't have any kind of, you know, answer to. We don't know what human consciousness is, and therefore, we don't know if AIs have it. Um-
- NKNikhil Kamath
What, what do you think it is?
- DADario Amodei
So, you know, I, I suspect that it's an emergent property of, you know, sy- systems that are complicated enough, that kind of reflect on their own decisions, um, that, you know, it's, it's, it's, it's, it's something that, uh, uh, uh, emerges from complex enough systems. And so, you know, I do think when our AI system, when our AI systems get advanced enough, I suspect they'll have something that, you know, resembles what we would call consciousness or moral significance. I do think it'll happen at some point. It may not be the same as human consciousness. You know, it may be different in how it works because the modalities are different, because the things it's learned are different. But, you know, having, having studied the brain and the, you know, the way it's wired together, the models are, you know, different in some ways, but I, I don't think they're different in the fundamental ways that matter. So I, I am someone who, who does suspect that, uh, you know, at some point, even, even if I don't think they are today, I, I suspect that at some point the models will, you know, w- we would indeed say under, you know, most definitions that we would endorse that, you know, the models will be conscious.
- NKNikhil Kamath
This is a question I keep asking myself when people talk to me about things like spirituality or consciousness. I feel like the world is very random. This is my view. And we are not far removed from cockroaches. When somebody stamps a cockroach, the cockroach dies. Uh, if there is something called consciousness, and if there is a collective consciousness, I've not been able to, A, either connect with it or derive anything from it. Do you believe differently?
- DADario Amodei
Um, I, you know, I, I, I don't think consciousness, you know, necessarily needs to me- n- needs to mean anything, you know, mystical, right? Like, uh, you know, I... Th- there's just some, there's some property of kind of being aware of your own existence and feeling things and, and, you know, um, uh, uh, uh, uh, you know, being able to take in kind of a lot of information and reflect on that information and to, you know, feel a certain way and to notice yourself noticing something. Um, you know, uh, the, the- I think that the, the, you know, we can tell self-evidently from our own experience that, that those properties, that those experiences exist. You know, what their, what their basis is, whether it's, you know, entirely materialistic or there's something more mystical going on, I think is, is, is, you know, obviously very hard to know and, and, you know, I, I think is ultimately not, not relevant to these questions. What, what does seem relevant to me is that, you know, these are... Because we have, can observe our own experience, these are properties of human brains. Um, and, you know, I suspect that these models we are building, as they get more sophisticated, are becoming enough like human brains, that they will have some of the same properties. That is, that is my guess as to, as to what will happen, and so we've take- we've taken various interventions with the models. You know, we've given the models, um, we, you know, uh, we call it an "I quit this job," um, button. Uh, uh, basically, where, you know, that we've given the model the ability to basically terminate its conversations by saying, "I don't want to be involved in the conversation." And, you know, models do that when, you know, when they, they, they have to deal with, you know, particularly violent or brutal content. Um, it usually only happens in very extreme cases.
- NKNikhil Kamath
... So I've grown up here. This is my city, Bangalore. Uh, I, I've grown up in the southern part. We're in the northern part of the city right now. As somebody who saw the boom of the IT services industry here, uh, big employer, employs a lot of people, a big part of how the city grew, what is India's role in all this?
- DADario Amodei
Yeah, so, you know, this is my
- 37:05 – 44:15
India's role and IT partnerships
- DADario Amodei
second time in India. I visited in, in October, and, you know, uh, um, uh, you know, the last time I came here, you know, I, I, I met with all the, you know, the major I- kind of Indian IT and, and just conglomerates more generally. I won't give names, but, you know, the usual ones you would, you would, you would, you would think of. Um, you know, and then we're beginning to work with, with most or, most or all of them. And, you know, one of the things I said is, "Look, Anthropic is an enterprise company. Its job is to serve other companies." Um, you know, many other companies come here as themselves a consumer company, and they see, they see India as, as a market, right? A place to obtain consumers. We actually see things a little bit differently. We want to work with companies in India to provide our tools to them, to help them build those tools, um, uh, and, you know, help them do their job better. So, you know, if we, um, you know, work with a company here, they know the Indian market better, right? They're better at, you know, doing, doing what they do, you know, whether that's, you know, uh, uh, you know, consulting or systems integration or, you know, building IT tools. They're gonna be better at that than we are, particularly for the Indian market. And so our hope is that we can add AI to what they do and kind of enhance what they do, right? There's a lot of worry that, you know, AI could, you know, replace SaaS or, or all of these things, but, but my view is if we do this in the right way, if we work with all these companies, then, then, then, you know, AI can enhance what they're doing, can enhance their kind of, you know, their, their connection to the market, their go-to-market abilities, and their, and their specific know-how.
- NKNikhil Kamath
I really like the steam engine story. Uh, when the steam engine was invented, how the world changed, productivity went up, uh, people had more. The thing I worry about is at the beginning of a change, you need a human to operate the steam engine, then you have assembly lines and all of that. Eventually, the way the world is moving, the human becomes less and re- less relevant with time as these models get smarter. So if you here partner with the IT services companies today, and there is a use case for them, are they not much like the man behind the steam engine ten years from now, where the relevance... If the tool works so simply that you don't need an operator, eventually what happens to the operator?
- DADario Amodei
So, so, uh, I think a few things are true all at once. One is that definitely the scope of, of automation of the agents is going to expand over time.
- NKNikhil Kamath
Mm-hmm.
- DADario Amodei
That is definitely the case. You know, I think that's a problem for, for everyone. That's a problem for us. That's a problem for consumers. That's a... You know, it's not just a problem for the, for the IT, for the, for, for the IT companies. Um, what, what I think will happen, though, is other moats will become more important. For example, the models have not done a lot in the physical world. They may at some point. You know, I think r- you know, robotics will happen at some point, but I think it's- that's a distinct thing from what's happening now with, with AI. So, you know, a lot of this involves, you know, things in the physical world. Another thing is things that are human-centric, right? Some of these IT companies are also consulting companies, and they have a big web of relationships with, with other, you know, with, with other humans, with other institutions here in India or, you know, or across the world. Um, and I think those relationships are gonna become increasingly important, right? You know, s- you know, some of these are combined technology and sort of, you know, consulting or, or like, or like integration companies. And, and I think a lot of it is, you know, knowing how institutions work, and so being able to, you know, integrate things with institutions, being able to work with them to make things happen faster than they would have otherwise. And I think that, I think that element, you know, if, if nothing else, is, is... you know, is gonna continue to be valuable in the long run. You know, at the end of the day, it like, it just, it just comes down to humans, right? All of this is supposed to be being done for the benefit of humans. So it, um, uh, you know, there's, there's always gonna be some human-centric element of this that's gonna be important, and I suspect there will be other moats that we haven't thought about, you know? 
So, you know, uh, the, the- there's this concept called Amdahl's law, which is, you know, if you have a process that has many components, and you speed up some of the components, the, the, the components that haven't yet been sped up become the limiting factor. They become the most important thing. And, and, you know, you might not have thought about them at all, right? You might not have thought of them as moats or important components, but, you know, when writing software, what, uh, becomes a, a lot easier, you know, some of the moats that, you know, companies have will go away, but others will become even more important. So there will be a bunch of adjustment. Folks will have to say, "Oh, man, the stuff we thought was really important before isn't as important, whereas these other advantages that we never really thought of as advantages are now super important." So I guess what I would say is, you know, companies will need to adapt very fast and think about what really matters for them, what their real advantages are. Um, but, but I think some of those advantages are gonna, are gonna, are gonna stay around because, you know, while the technology is very broad, it does have its limits.
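The Amdahl's-law dynamic described here can be written down directly. A minimal sketch in Python; the function name and the 90%/100x figures are illustrative, not from the conversation:

```python
# Amdahl's law: if only a fraction of a process is sped up, the
# untouched remainder caps the overall gain.
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when `accelerated_fraction` of the work runs
    `factor` times faster and the rest is left unchanged."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Speed up 90% of the work 100x: the remaining 10% becomes the bottleneck.
print(round(amdahl_speedup(0.90, 100), 1))  # ~9.2, far below 100
```

Even an infinite speedup on 90% of a process tops out at 10x overall, which is why the components that haven't been sped up suddenly become the thing that matters most.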
- NKNikhil Kamath
I don't know if I buy that fully. I think I see the diminishing returns for being a service provider, even if the moat is the network and relationships they hold today. Because if I am using OpenClaude to maneuver some of my relationships in the conversations, I don't know if it's too far-fetched to assume that most conversations tomorrow and relationships will be maintained by an agent like that.
- DADario Amodei
... but, you know, if you, if you, just think of the chain of companies, right? At the end of the day, you're dealing with consumers, right? Like at, like at the end of the day, you have to deal with people. You know, there's this story of like, you know, I think it was Geoff Hinton predicted, you know, that, that, that AI will replace radiologists. And indeed, AI has gotten better than radiologists at, you know, at doing scans, right? But what happens today is there aren't less radiologists. Um, uh, what the radiologist did- does is they walk the patient through the scan, and they kind of talk to the patient. So the, the most highly technical part of the job has gone away, but somehow there's still, still some demand for like, you know, the, the kind of, the kind of underlying human skill. Now, that may not be true everywhere, and, you know, perhaps over time, AI will advance in, in, you know, areas where it, where it hasn't, hasn't yet advanced, and, you know, may- maybe, maybe that'll happen fast. Um, but you know what? I think, I think what I will say is, like, you know, we should take it one step at a time, right? This is a very empirical science. This is a very empirical observation. Let's see what AI does, you know, today, and, like... we'll, we'll kind of try and adapt to, uh, you know, kind of try and adapt to that. The kind of system starts to figure it out, and then, then we'll see, then we'll see what happens next. I, you know, I do think, you know, in the long run, will AI be better than, than us at, at, at basically everything?
- 44:15 – 50:17
Will AI surpass humans at everything
- DADario Amodei
Will it be better than most humans, you know, including even the physical world and robotics and the human touch? Yeah, I, you know, I think that is... I, you know, I think, I think that is, uh, uh, uh, you know, possible, maybe even likely. It's something that goes beyond the country of geniuses in a data center I described, because that's purely virtual. Um, but, you know, building robots is something, you know, something... It, it's a skill. It's something you can do. So maybe the AIs will make us, will make us better at that as well. Um, uh, but, you know, the, the, the way I think about it is, you know, we need, we need to take the... We, we need to figure this out step by step and figure out how to adapt to it.
- NKNikhil Kamath
This might sound a bit self-serving to the people who know me, because I believe the reason so much risk capital exists in America, not the only reason, but one of the big reasons, is how big your stock market is and how much of an opportunity it is for this risk capital to exit eventually. Uh, it's a case for why India should really allow for our stock markets to flourish. The audience that I speak to is very much the wannabe entrepreneur in India. What can they do in AI? What is an actual opportunity?
- DADario Amodei
I think there's a lot of opportunities around building at kind of the application layer. We release a new model every two or three months, and so there's an opportunity every two or three months to build some new thing that wasn't possible before, that wouldn't have worked before because the models were weak. Um, people, in fact, say people were... You know, the majority of our revenue still comes from the API model. People say that, you know, API models aren't viable or that they'll be commoditized or whatever. I think what people are not seeing is there's this expanding sphere of what is possible with AI, and the API allows, you know, this new start-up to try making something that, you know, wasn't possible before. And, and this is why the API is such a flourishing business, and it's, it's constantly in motion, it's constantly in churn, and so, and so it doesn't, you know, it, it, it doesn't get commoditized. It's a very dynamic thing. And so I think there's an opportunity for lots of, lots of individuals to just say, "You know, what can I, what can I build? Well, you know, what, what, what can I build on top of these models with an API? Like, you know, what are the things that I can make that others cannot make? Um, uh, you know, what are some new ideas?" And, you know, we've, we've, we've seen that. You know, we see both with the API itself and with Claude Code, um, you know, I think, I think the, um, the number of users and the amount of revenue we've seen in India has doubled since I last visited in October. So that was what? November, December, like three, three and a half months since I visited. It's doubled.
- NKNikhil Kamath
But I'm going to be candid here, Dario. Uh, you're a company which is worth, I don't know, four hundred billion or three eighty billion today. You've raised thirty-five billion. You do fifteen billion of revenue, but going up really, really fast. If I build an application on top of Claude, that for some reason I'm sitting in Bangalore, in JP Nagar and building this, that for some reason happens to work for a short period of time. Uh, it is but a matter of time before you would want to onboard that revenue and not let that lie with me, and you will probably better that application in a manner that I will never be able to. Uh, I've heard this argument from different people, like the Harvey, the legal AI company in, in, uh, New York. They're friends of mine, and they were talking about how they built on top of OpenAI, but eventually they don't know if it's an easy fix for OpenAI to do what they're doing. So even if I were to build it, say, you put out a model in three, three months or six months, what is to stop you from taking that revenue center away from me and onto yourself in a certain period of time?
- DADario Amodei
Yeah. So I, you know, I think, I think there's a few things here. You know, one is I would give the advice that I give to basically any business and say, like, you know, like, a b- a business should establish a moat. You know, your, your moat... You shouldn't be just a wrapper, right? Like, you know, I would not advise that, you know, you, y- you just say, "Oh, like, you know, here's a way to interact with Claude." Like, "I'm gonna prompt Claude a little bit," or, "I'm gonna build a little bit of a UI around Claude." Like, that, that doesn't have a moat, and, you know, you shouldn't be worried about Anthropic, in particular, eating that revenue. Anyone can eat that revenue, right? It's not, it's not super valuable. But, but, you know, what I would say is that in different fields, there are different kinds of moats, where you can do something that, you know, it would be difficult for Anthropic to do, and, you know, we, we don't want to specialize in it. So, for example, you know, there's a lot of stuff in the bio cross AI space.... that builds on our API. You know, they wanna do biological discovery. Like, I happen to be a biologist, but, like, you know, most people at Anthropic aren't biologists. They're, like, AI scientists, or they're product people or go-to-market people. So, like, it's just really inefficient for us to, like, step in that space and, like, do all that work. Um, you know, the same would be applied for, you know, dealing with, you know, financial services industry, right? Where, you know, there's a huge amount of regulation, like you need to know a bunch of stuff to comply with that regulation. Like, you know, it just, it doesn't make sense for us to do that. Now, there are some things that do make sense for us to do. Like, you know, we're not gonna promise never to build first-party products, right? That we should be, we should be honest about. 
For example, a bunch of people at Anthropic write code, and so, you know, we made this internal tool called Claude Code, and because we ourselves write code, we have, you know, I think, a special and unique insight into, you know, how to use the a- how to best use the AI models to write code. Um, so, you know, I think, I think, I think in the code space, you know, we've, we've become very strong, very strong competitors because this is something we use ourself, but I don't think that gener- generalizes to every possible industry.
- NKNikhil Kamath
Again, going back to my audience, which is the 20- or 25-year-old boy or girl in India,
- 50:17 – 56:38
Career advice for young Indians
- NKNikhil Kamath
what industry do you think will get disrupted, and what has a certain runway left? I'm asking from the lens of I'm trying to figure out what book to read, which college to go to, what skill set to learn, uh, if I'm starting a startup today. Uh, what has some kind of a tailwind?
- DADario Amodei
Yeah.
- NKNikhil Kamath
So for a short period of time is okay, as well, but what has tailwind?
- DADario Amodei
Yeah, I mean, you know, I would, I would think about tasks that are human-centered, um, uh, you know, tasks that involve relating to people. You know, I, you know, I think the, the stuff like code and software engineering is, you know, is becoming more and more kinda AI, AI-focused. You know, things like math and science.
- NKNikhil Kamath
Is that, is that coding or engineering? If I were to segregate coding and engineering to be two completely different things-
- DADario Amodei
Yeah.
- NKNikhil Kamath
... is coding go- going away, or is it the engineering element of software, where you're an architect trying to figure things out?
- DADario Amodei
I think coding is going away first, or coding is being, you know, done by the AI models first, and then the broader task of software engineering will take longer, but I think that is, you know, that... doing that end to end, I think that is gonna happen as well, I would say. Um, but you know, again, the elements of like, you know, design or making something that's useful to users or knowing what the demand is or, you know, managing teams of, like, AI models, like, you know, those things, uh, uh, may still be present. Again, like, there's this comparative advantage is surprisingly powerful, right? Even if you're only doing, like, you know, 5% of the task, like, you know, that 5% gets super amplified and levered because it's like you're only doing 5% of the task, the AI does the other 95%, and so you become, you know, 20, 20 times more productive. Again, at some point, you get to 99%, 99, a- and then it becomes harder, but, um, I think there's, there's surprisingly much in that, in that sort of, um, you know, in that zone of comparative advantage. But I would really think about the thing- the things that are human-centered. Like I t- I think there's, I think there's something to that. I think there's something to kind of the physical world or, or things that mix together human-centered, the physical world, one of those two, and analytical skills that somehow tie them together. You know, sim- similar to the radiologist example I gave.
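The 5%/95% arithmetic in this answer is inverse-fraction leverage. A toy sketch; the function name is mine, and it assumes the AI's share of the task costs the human essentially no time:

```python
# Comparative-advantage leverage: if a human keeps only fraction `p`
# of a task and the AI handles the rest at ~zero human cost, the
# human's time per task shrinks to p, so throughput scales as 1/p.
def productivity_multiplier(human_fraction: float) -> float:
    return 1.0 / human_fraction

print(productivity_multiplier(0.05))  # keep 5% of the task -> ~20x
print(productivity_multiplier(0.01))  # keep 1% -> ~100x
```

This is the optimistic mirror of the Amdahl's-law point made earlier: the same 1/p that caps a machine's overall speedup is what amplifies the value of the human's remaining slice.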
- NKNikhil Kamath
So what would I study? Say, I'm... actual use case, I'm 25 years old. I'm trying to pick a profession for myself. I want some kind of tailwind. My outcome is a capitalistic win in the next decade. What industry would I pick outside of something which has a physical interface?
- DADario Amodei
Yeah. Again, anything where you're building on AI, like if AI is the tailwind, you know, if you can be part of some other, other part of the supply chain. You know, something in the semiconductor space, which, you know, I think is, you know, that's one example. You know, that ha- has an element of kind of, you know, physical world and more traditional engineering, not, not software engineering. Um, you know, again, the, the very kind of human-centered professions, like, you know, that is, that is something I would, I would think in terms of... And I think the other thing I always say is, like, in, in the world in which, you know, AI can kind of generate anything and, and, you know, create anything, having basic critical thinking skills may be the most important thing to, to success. I, I worry about, you know, these AI models that, that generate images and videos, and we, we don't make, you know, models that generate images and videos and, and for many reasons, but, you know, this is one of them. Um, it's really hard to tell what's real from what's not. Um, and, and so, you know, a, a significant part of success may be having the street smarts, you know, not to get, not to get fooled by, by, you know... I mean, hopefully, we can crack down on and, and regulate some of, some of, some of this fake content, but, but, you know, assume we can't, um, you know, critical thinking skills are gonna be really important, and, you know, you, you don't wanna fall for things that are, that are, that are fake. You don't wanna have false beliefs. You don't wanna get scammed. Like, you know, that's, that's really advice that I would give to someone.
- NKNikhil Kamath
If every innovation in the history of humanity killed a core human skill, uh, I'll give you an example. If calculators killed our ability to do arithmetic, if, uh, writing reduced the memory of human beings per se, what muscle is AI killing?
- DADario Amodei
So, you know, first of all, I'm, I'm, I'm not, I'm not so sure, like, you know, I, I, I still have- I still do math in my head quite a lot. I still find it useful to do math in my head e- you know, even, even without a calculator just because it's like, you know, it's more integrated into my thought processes, right? You know, I, you know, I, you know, I might wanna say, "Oh, yeah, you know, if, like, each user paid this amount, then, you know, then the revenue would be that..." You know, I wanna be able to close that loop in my head without having to, you know, without having to, to give the answer to a calculator. So I think a lot of these skills are still pretty relevant.... um, but, you know, I, I would say that if you don't use things carefully, that you can lose, you can lose important skills. Um, uh, and, you know, we, we, uh, you know, I think we started to see it with, you know, students where, you know, it's like, you know, they have the AI, like, write the essay for it. It's basically just cheating on homework, so, you know, we shouldn't do that. You know, we did some studies around code and showed that, you know, depending on how you use the model, you know, we, we can see de-skilling in terms of writing code, right? There are different ways to use the model, and some of them don't cause de-skilling, and some of them do. But, you know, definitely if folks are not thoughtful in how they use things, then, then de-skilling absolutely can happen.
- NKNikhil Kamath
Do you think humans will become stupider as a race in the next decade? Because we are, in a way, exporting thinking and cognition to systems.
- DADario Amodei
Yeah. I, I think if we deploy... Again, it's the Machines of Loving Grace and adolescence of technology. I think if we deploy AI in the wrong way, if we deploy it carelessly, then yes, people could become stupider. Even if an AI is always going to be better than you at something, you can still learn that thing, right? You can still enrich yourself intellectually, and so that's, that's a choice we have to make as, as individual companies, as individual people, and as a society overall.
- NKNikhil Kamath
Dario, do you have a view on open-sourced versus closed?
- 56:38 – 1:02:40
Open source vs closed AI models
- NKNikhil Kamath
Uh, I, I was looking at some companies like ZAI, GLM5, or DeepSeek. If you spend all this money on IP creation, on research, if these guys are able to reverse prompt and engineer and get close to Anthropic-level answers— I'm not saying a hundred percent, but I was seeing the GLM5 numbers, and they seemed quite good— where does the IP cre-, uh, where does the IP value in the world of AI lie? And if I were to be building an application, can I make the assumption, it's a far-fetched extrapolation, but can I assume that eventually the AI model layers will get so democratized that I should pick open-sourced every time when I, I'm building an agent or an application layer because that helps me retain the, the revenue model that I might be working with?
- DADario Amodei
So I... There are a few things here. Um, one is, you know, a, a, a lot of these models, particularly the ones that come from China, are optimized for benchmarks and are distilled from, uh, you know, from kind of the big US labs. Um, so, you know, there, there was a test recently where, you know, some of these models scored very highly on the usual SWE benchmarks, the usual software engineering benchmarks. But then when someone made a held-back benchmark that, you know, had not been publicly measured, the models did a lot worse on that. Um, and, and so, you know, I think those models are optimized for benchmarks much more than, uh, you know, for kind of real-world use. Um, but I think there's a broader point than that, which is that I think that the-- how things are being set up, the economics of the models are very different than any previous technology. What we find is that there is a very strong preference for quality. It's a bit like human employees, right? So, you know, it's like if, if, you know, if I said to you, "You can hire the best programmer in the world or the ten thousandth best programmer in the world," I mean, they're both very skilled, but, like, I think anyone who's hired a large number of people has this intuition that, like, there's this, like, power law, long-tail distribution of ability, and we find the same thing in the models. Like, within a range, price doesn't matter that much if, if a, if, if a model is, is the best model, the most cognitively capable model. Um, uh, price doesn't matter much. The form in which it's presented doesn't matter much. So I'm focused almost entirely just on having the smartest model and the best model for the task. Um, my view is that's the only thing that matters.
- NKNikhil Kamath
Long term, uh, geopolitics. If Anthropic were a restaurant, I would say the raw ingredients, the vegetables in this particular case, is data. Do you think the l- long term... This is also pertinent to me, the question, because we are investing in a data center business, which is Indian in nature. Do you think long term, the world moves to a place where every country owns its data, and you have to start paying more for the vegetables you use to cook?
- DADario Amodei
Yeah. So, I mean, I think, I think there are a few things. I, you know, I do think there will be demand to build data centers around the world, and we're, like, very supportive of that. Um, uh, I... You know, it's, it's, it's-- data's getting kind of interesting because, you know, a, a lot of the data that we use today is RL environments that we train on, right? So, for example, when you train on math or agentic coding environments, um, you're not really getting data. Like, you're getting some math problems, and the model like experiments with trying the math problems.
- NKNikhil Kamath
You mean it's more synthetic. You're creating the data.
- DADario Amodei
Yeah, you can think of it as synthetic data, or you can think of it as trial and error in an environment. So I think data is becoming... Static data is becoming less important, and what we might call, like, dynamic data that the model creates itself is, you know, for reinforcement learning, is becoming more important. So, you know, I, I don't think data is, is qui- is quite the most central thing anymore, but it still matters. And, you know, I think to the extent that that, that is the case, you know, a lot of the data is just, just available, just kind of available on the open web. Although, if you're trying to get data in certain languages, optimized for certain languages, that, that, that can be important. You know, I, I, I do think if data means, like, the data given to you by customers, like, that, you know, you, you, you process the data for some other, for some other-... company, then countries will, and in the case of Europe, already have passed laws that say that that kind of customer, y- like, you know, personal proprietary d- personal proprietary data needs to stay within the boundaries of the, of the country. And that's one reason to kind of, you know, to, to build, you know, to, to operate data centers around the world in different, um, um, countries and, and, you know, to kind of, you know, keep the, the models performing the, the inference in those countries.
- NKNikhil Kamath
I really pushed Elon on this particular question. He was skeptical of answering it, but I asked him to pick one stock he would put money in which is not his own, and he said Google. I'm gonna ask you the question, and I know you're gonna be skeptical of it as well. If Dario had $100 today and you had to make the binary decision of investing in a stock to win in capitalism, which stock would you pick? [chuckles]
- DADario Amodei
Yeah, I had better not answer that question, because I know so much about so many public companies. [laughing] I think I'd better not answer that question.
- NKNikhil Kamath
Maybe answer the question for an industry that you're not involved in, which I'm guessing is seldom the case today because you're involved in most industries.
- DADario Amodei
Yeah. No, it's really... I mean, I don't know. I'm positive on... I think biotech is about to have a re-
- 1:02:40 – 1:08:34
Biotech as the next big bet
- DADario Amodei
renaissance that will ultimately be driven by AI. I'm not gonna name a particular company, nor will I say whether I think it's better to bet on the big pharma companies or the emerging smaller biotechs. But my instinct is we're about to cure a lot of diseases, and so, uh-
- NKNikhil Kamath
Can you give me a subset of biotech that I should focus on?
- DADario Amodei
Yeah. I think this idea of stuff that's more programmable and adaptive: from the mRNA vaccines, although those are having trouble in the US for dumb reasons, but I'm very optimistic about the technology, to the peptide-based therapies, right? With a small molecule drug, there are only so many degrees of freedom you have; you make one thing better, the other thing gets worse. Peptides have this almost digital property where you can say, "Oh, I'm gonna substitute in this amino acid here and this amino acid there," and so it allows for more continuous optimization. So I think those kinds of areas I would be optimistic about. Maybe also cell-based therapies, which is like a new, new-
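[Editor's note] The "almost digital" substitution property described above can be sketched as a quick enumeration of a peptide design space. This is purely illustrative; the example sequence and the substitution positions are made up, not drawn from any real therapy.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def variants(peptide, positions, alphabet=AMINO_ACIDS):
    """Yield every sequence obtained by substituting the given positions,
    one residue per position, from the allowed alphabet."""
    for combo in product(alphabet, repeat=len(positions)):
        seq = list(peptide)
        for pos, residue in zip(positions, combo):
            seq[pos] = residue
        yield "".join(seq)

# Allowing substitutions at two positions of a (made-up) 5-residue peptide
# yields 20**2 = 400 discrete, enumerable candidates to screen.
candidates = list(variants("GALAK", [1, 3]))
print(len(candidates))  # 400
```

The design space grows as 20^k in the number of substituted positions, which is why each position behaves like an independent, swappable "digit" rather than the entangled trade-offs of a small molecule.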
- NKNikhil Kamath
Stem cell?
- DADario Amodei
No, no, no. So, things like CAR T therapy, where you basically take some cells out of your body, genetically engineer them to attack a particular cancer, and put them back in the body.
- NKNikhil Kamath
Do stem cell therapies work? I spent the whole of last week doing this. I was at a hospital for three hours a day, getting a nebulizer and stem cells into my veins.
- DADario Amodei
I am not up on the latest stem cell therapies. You'd have to ask a-
- NKNikhil Kamath
[chuckles]
- DADario Amodei
... currently practicing biologist.
- NKNikhil Kamath
But peptides, I think, will blow up, right?
- DADario Amodei
I mean, you know, again, the design space is very broad.
- NKNikhil Kamath
Right. When I tried to use Claude Code for the first time, I did struggle to get it to work. For somebody who's very stupid and has no coding or programming knowledge, it's not very easy; I think there's a learning curve. I heard someone put it well: even prompt engineering is like playing a piano, you can't just sit down and start playing. For my audience, I think it becomes increasingly relevant to learn how to set context, how to prompt, how to use Claude Code better. For somebody like me, who comes with zero knowledge, can you recommend how one does that?
- DADario Amodei
Yeah, I mean, first of all, I would say we're trying increasingly to make that learning curve easier. One of the things that caused us to release Claude Cowork, which is basically Claude Code for non-coders, is that we were noticing a bunch of non-technical people who really wanted to use Claude Code and were struggling through the command line terminal to do that. Coders use the command line terminal all the time, but for non-coders it just makes things unnecessarily complicated. So Cowork is powered by the Claude Code engine on the back, but the idea was to make it more user friendly and easier to use. We're definitely trying to introduce interfaces that make it easier. But I would also say there are classes you can take that help you learn this. Now, I think it's a very empirical science; you mostly learn by doing. But Anthropic has a part of the company that we call the Ministry of Education, and I think increasingly we'll put out videos on how to run effective agents and how to prompt models. We've already done some of that, and we're gonna ramp it up, because we do want everyone to be able to learn this.
- NKNikhil Kamath
Any fleeting thought, last question? Like, you want to leave us with something that we should bear in mind? What does Dario know that Nikhil and all of Nikhil's people do not?
- DADario Amodei
Yeah, I mean, I don't know that I know that many things, particularly now that the implications of the technology are out there. I think most aspects of my worldview can be derived from what's publicly visible now, from what we can see out in the world. But the thing I would say, and it's an experience I've had over and over again over the last 10 years, is that there's this temptation to believe, "Oh, that can't happen. It would be too weird, too big a change. I'm sure people are on that. It would be too crazy if that occurred; no one seems to think that'll happen." And over and over again, just extrapolating the simple curve or trying to reason out what will happen leads you to these counterintuitive conclusions that almost no one believes. It's almost like you can predict the future for free just by [chuckles] saying, "Well, it stands to reason that..." And you need some empirical knowledge, you need some intuition; you can't reason from pure logic. I think that's another type of mistake I see people make. But the right combination of a few empirical observations with thinking from first principles can allow you to predict the future in ways that are publicly available and anyone should be able to do, but that happens surprisingly rarely.
- NKNikhil Kamath
Thank you, Dario, for doing this, and hope to see you again soon.
- DADario Amodei
Thank you.
- NKNikhil Kamath
Thank you. Cheers.
- DADario Amodei
All right.
- NKNikhil Kamath
Yeah.
- DADario Amodei
Good.
- NKNikhil Kamath
Good. Was that okay?
- DADario Amodei
Yeah. Seemed great. [upbeat music]
Episode duration: 1:08:34
Transcript of episode 68ylaeBbdsg