
Dwarkesh Patel and Noah Smith on AGI and the Economy

In this episode, Erik Torenberg is joined in the studio by @DwarkeshPatel and Noah Smith to explore one of the biggest questions in tech: what exactly is artificial general intelligence (AGI), and how close are we to achieving it?

They break down:
- Competing definitions of AGI: economic vs. cognitive vs. “godlike”
- Why reasoning alone isn’t enough, and what capabilities models still lack
- The debate over substitution vs. complementarity between AI and human labor
- What an AI-saturated economy might look like, from growth projections to UBI, sovereign wealth funds, and galaxy-colonizing robots
- How AGI could reshape global power, geopolitics, and the future of work

Along the way, they tackle failed predictions, surprising AI limitations, and the philosophical and economic consequences of building machines that think—and perhaps one day, act—like us.

Timecodes:
0:00 Intro
0:33 Defining AGI and General Intelligence
2:38 Human and AI Capabilities Compared
7:00 AI Replacing Jobs and Shifting Employment
15:00 Economic Growth Trajectories After AGI
17:17 Consumer Demand in an AI-Driven Economy
31:14 Redistribution, UBI, and the Future of Income
31:58 Human Roles and the Evolving Meaning of Work
41:21 Technology, Society, and the Human Future
45:43 AGI Timelines and Forecasting Horizons
54:04 The Challenge of Predicting AI's Path
57:37 Nationalization and the Global AI Race
1:07:10 Brand and Network Effects in AI Dominance
1:09:31 Final Thoughts and Preparation for What’s Next

Resources:
Find Dwarkesh on X: https://x.com/dwarkesh_sp
Find Dwarkesh on YouTube: https://www.youtube.com/c/DwarkeshPatel
Subscribe to Dwarkesh’s Substack: https://www.dwarkesh.com/
Find Noah on X: https://x.com/noahpinion
Subscribe to Noah’s Substack: https://www.noahpinion.blog/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.

Erik Torenberg (host) · Dwarkesh Patel (guest)
Aug 4, 2025 · 1h 10m

EVERY SPOKEN WORD

  1. 0:00 - 0:33

    Intro

    1. ET

      Are you dubious of the trope that, you know, labor provides meaning, and if people don't have a clear, uh, sense for labor, then it will be very difficult for them to obtain alternative sources of meaning?

    2. SP

      Humans have just adapted to so much.

    3. DP

      Agricultural revolution, Industrial Revolution, the growth of states. Once in a while, like a communist or fascist or good regime will come around or something. Like, the idea that being free and having millions of dollars is the thing that finally gets us, I'm just suspicious of.

    4. ET

      Dwarkesh,

  2. 0:33 - 2:38

    Defining AGI and General Intelligence

    1. ET

      Noah, welcome. Our first podcast ever together as a trio.

    2. SP

      Yes. Excited.

    3. DP

      I'm very excited.

    4. ET

      So Dwarkesh, it's almost... You, you came out of the Scaling Era.

    5. DP

      Uh-huh.

    6. ET

      It's almost like you're a future historian. You're sort of telling the history as it's, as it's being, as it's being written, and so it's, it's only appropriate to ask you, what is your definition of, of AGI?

    7. DP

      Mm.

    8. ET

      And, and how has that evolved over time?

    9. DP

      Um-

    10. ET

      Except for, like, superintelligence, you know. Brings it down for us.

    11. DP

      I feel like I'm like five decades too young to be a historian.

    12. ET

      [laughs]

    13. DP

      You gotta be, like, in your 80s or something-

    14. ET

      Yeah. [laughs]

    15. DP

      ... before you could, um-

    16. ET

      Exactly

    17. DP

      ... uh, um-

    18. ET

      But we're living in history right now.

    19. DP

      So the ultimate definition is can do almost any job, say, like, 98% of jobs at least, as well, fast, uh, cheaply as a human. I think the more, um... The definition that's often useful for near-term debates is can automate 95% of white-collar work because there's a clear path to get to that, whereas robotics, you know, there's, like, a, a long tail of things you gotta do in the physical world, and robotics is slower. So automate fi- uh, white-collar work.

    20. SP

      Hmm. So it's, that's interesting 'cause it's an economic definition. It's not sort of a definition about, like, how it thinks, how it reasons-

    21. DP

      Mm-hmm

    22. SP

      ... et cetera. It's about, like, what it can do.

    23. DP

      Yeah. I mean, we've, we've been surprised what capabilities have come first in AI. Like, they can reason already. Um, and h- why they seem to lack the economic value we would've assumed would correspond to that level of capability. Um, this thing can reason, but it's making OpenAI $10 billion a year, and McDonald's and Kohl's make more than $10 billion a year, right? Um, so clearly there's m- m- more things relevant to automating entire jobs than we previously assumed. So then it's just useful to, like... W- who knows what all those things are, but once they can automate it, then it's AGI.

    24. ET

      And so when Ilya or Meta is using the word superintelligence, what, what, what do they mean? Do they mean the same, the same thing or something totally different?

    25. DP

      Um, I'm not sure what they mean. Like, there's a spectrum between God and just something that thinks like a human but much faster. Um, yeah, m- uh, uh, do you have some sense what the, you think they mean?

    26. SP

      God.

    27. DP

      [laughs]

    28. SP

      I think probably they, they

  3. 2:38 - 7:00

    Human and AI Capabilities Compared

    1. SP

      mean something they would worship-

    2. DP

      Yeah

    3. SP

      ... as a god.

    4. DP

      Yeah.

    5. ET

      And so when Tyler says we've achieved AGI and, and you differ from him, wh- wh- where is the tangible difference there?

    6. DP

      Um, I'm just noticing that if there, there was a human who was working for me, they could do things for me that these models cannot do, right? And I'm not talking about something super advanced. I'm just saying, "I'm, I have transcripts from my podcast. I want you to rewrite them the way a human would, and then I'll give you feedback about what you messed up, and I want you to integrate that feedback as you get better over time, you learn my preferences, you learn my content." And they actually don't, they can't, like, learn over the course of six months how to become a better editor for me or how to become a better transcriber for me. And since they c- a human I, I hire would be able to do this. They can't, so therefore it's not AGI.

    7. SP

      Noah, I have a question. I am a natural general intelligence. You are a natural general-

    8. DP

      Mm-hmm

    9. SP

      ... intelligence, but we cannot easily do each other's jobs even though our jobs are fairly similar.

    10. DP

      Right.

    11. SP

      Um, put me in the Dwarkesh Podcast and I could not interview people nearly so well. If you had to write Substack, you know, articles, like, several times a week-

    12. DP

      Yes

    13. SP

      ... on economics, you might not do as well. So then, um, but we are general intelligences, and we're not exactly substitutable, so why should we use substitutability as the criterion for AGI?

    14. DP

      I mean, what, what else is it that we want them to do? I think w- with humans, we have more of a sense of, like, there's some other human who theoretically could do what you would do. A model is... An individual copy of a model might be, say, fine-tuned to do a particular job, and it would be fair to say then why expect this particular fine-tune to be able to do any job in the economy? But then there's a question of, well, there's many different models in the world, and each model might have many different fine-tunes or many different, um, instances. Any one of them should be able to do a particular white-collar job for it to count as AGI. I'm not... It's not that, like, every, any AGI should be able to do every single job, that, like, some artificial intelligence should be able to do this job for, like, this model to count as AGI.

    15. SP

      I see. Okay. But, so let's take another similar example. Let's take Star Trek.

    16. DP

      Yeah.

    17. SP

      Okay, you got Spock.

    18. DP

      Very Spock. [laughs]

    19. SP

      He's very logical. He can do stuff that the, that, you know-

    20. DP

      Yeah

    21. SP

      ... Kirk and whoever can't do. But then those guys can do stuff that Spock can't do.

    22. DP

      Right.

    23. SP

      Like get in touch with their emotions, intuition, stuff like that. They're both general intelligences, but they're alien to each other. So, you know, AI feels alien to me.

    24. DP

      Mm-hmm.

    25. SP

      It, sometimes it, it talks just like us. It, it was built off of our thoughts obviously. But then, um, you know, sometimes it talks just like us, and sometimes it's just, like, very alien. And so, but, but should we ever expect that to change such that it's n- it's no longer an alien intelligence?

    26. DP

      I think it'll continue to be alien, but I think eventually we will gain capabilities which are necessary to unlock, um, the trillions of dollars of economic value that are implied by automating human labor, which these models are clearly not generating right now. Um, so you could say, well, like, if I just put, if, if we substituted jobs right now, immediately there'd be a huge productivity dip. But over time, we would learn to do s- you know, start doing them better. I mean, maybe a better example is just that, like, you hire people to do things for you. I, I don't know if you actually hire people, but I, I assume-

    27. SP

      Occasionally, yeah.

    28. DP

      Okay. Like, why are, why are you still having to do that rather than hiring an AI? And I have, like, many rules where it's like an AI might be generating hundreds of dollars of value for me a month, but, like, humans are generating thousands of dollars or tens of thousands of dollars of value for me a month. Um, like, why is that the case? And I think it's just, like, the capab- AI's are lacking these capabilities. Humans have these capabilities.

    29. ET

      And is the main thing missing in your view sort of continual learning?

    30. DP

      The reason humans are so valuable is not just their raw intellect. It's not mainly their raw intellect, although that's important. It's their ability to build up context. It's to interrogate their own failures and pick up small efficiencies and improvements as they practice a task. Um, whereas with an AI model, its understanding of your problem, your business, will be expunged by the end of a session. Um, and then you're just, you're starting off at the baseline of the model. Like, with a human, you gotta train them over many months to make them useful employees.

  4. 7:00 - 15:00

    AI Replacing Jobs and Shifting Employment

    1. DP

      away.

    2. SP

      Okay, so here's my question about replacing jobs. Uh, you know, it seems to me that it's partly about demand.

    3. DP

      Mm-hmm.

    4. SP

      So for example, suppose that AI has already replaced my job-

    5. DP

      Yeah

    6. SP

      ... or, or can replace my job, so that suppose that anyone who goes onto, you know, fires up ChatGPT or whatever they want-

    7. DP

      Yeah

    8. SP

      ... or whatever model and says, "Search the web, find the most interesting topics that people are talking about, economics, and write me an insightful post telling me some cool new thing I should think about that."

    9. DP

      Yeah.

    10. SP

      And they just do that every day, and then they get a better blog than Noahpinion.

    11. DP

      Yeah.

    12. SP

      I don't know if that's happened yet. I, I mean, I've tried that and I don't like it as much. But, but, but suppose that most people would like it as much, and so my job's been automated and people just don't realize it, or people have this sort of, like, idea in their mind of like, "Well, is it really a human?" and blah, blah, blah, and then j- uh, as generational turnover happens, young people won't care about reading a human. They'll care about-

    13. DP

      Right

    14. SP

      ... reading an AI. But in terms of functional capabilities, it's already there. But in terms of, of demand, it's not there. Uh, how much of that could there be?

    15. DP

      I expect there would be much less of that than people assume. If you just look at the example of Waymo versus Uber, I think previously you could have had this thing about people will hesitate to take automated rides. And in fact, in the cities where it's been deployed, people, like, love this product despite the fact that you gotta wait 20 minutes because the demand is so high. Um, and it's still, like, it got some glitches to iron out. But just, like, the seamlessness of using machines to do things for you, the fact that it can be, like, personalized to you, it can happen immediately, uh, I, I, I mean, it was like, okay, if it's like one thing people would be like, "Okay, well, doctors and lawyers will set up guilds, and so you won't be able to consult." Um, I think there might be guilds in who can call themselves a doctor or a lawyer, but I just think if, like, if genuinely ChatGPT can give me as good medical advice as a real doctor, the experience of just talking to a chatbot rather than spending three hours in a waiting room is so much better that I, I think a lot of sectors of the economy look like this, where we're like, we're assuming people will care about having a human, but in fact they will not. If, if you assume that they will genuinely have the capabilities that the human brings to bear.

    16. SP

      So it's interesting, you know, AI is, is better for diagnosis on a lot of things than, than humans.

    17. DP

      Mm-hmm.

    18. SP

      Right? But then, um, something about having humans to follow up with makes me also want to check with a human-

    19. DP

      Yeah

    20. SP

      ... after I've gotten diagnoses from an AI on something. And so it might, that might vary by job. Like, cars may be one thing, and then, um, but, but maybe it is about capabilities. I, I can't say. I'm just saying, like, I, I'm, I'm, I'm saying everybody seems to think that the, that AI is a perfect substitute for humans, and that's what it should be, and that's what it will be, and everyone seems to think of it in that case. However, every other tool that's ever been made, every other technological tool, was a complement to humans. It could do some things human could, humans could do. Maybe even it, it could do anything humans could do, but at different relative costs, different relative prices.

    21. DP

      Mm-hmm.

    22. SP

      So that you'd have humans do something and the tool do other things, and you'd have this complementarity between the two. And yet when people talk about AI and think about AI, they essentially never seem to think in these terms. They always seem to think in terms of perfect substitutability. And so I'm trying to get to the bottom of, like, why people insist on always thinking in terms of perfect substitutability when every other tool has been, uh, you know, complementary in the end.

    23. DP

      Well, human labor is also, um, complementary to other human labor, right? There's increasing returns to scale. Um, but that doesn't mean that there's, like, uh, you know, Microsoft has to hire some number of software engineers and, like, it will care about the cost of what the software engineers cost. Like, it will go to markets where they're, they can get the highest performance for the relative value those software engineers are bringing in. I think it'll be a similar story with AI labor and human labor. And the AI labor just has the benefit of having extremely low subsistence wages. Like, the marginal cost of keeping an H100 running is much lower than the cost of keeping a human alive for a year.
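
A back-of-envelope sketch of the cost comparison Dwarkesh gestures at here ("the marginal cost of keeping an H100 running is much lower than the cost of keeping a human alive for a year"). All numbers below are illustrative assumptions, not figures from the conversation:

```python
# Rough comparison of the marginal "subsistence" cost of AI labor vs.
# human labor. Every constant here is an assumed, illustrative value.

H100_POWER_KW = 0.7              # assumed draw of one H100, ~700 W
ELECTRICITY_USD_PER_KWH = 0.10   # assumed industrial electricity rate
HOURS_PER_YEAR = 24 * 365

# Marginal cost of keeping one H100 running for a year
# (electricity only; ignores amortized hardware, cooling, networking).
h100_annual_usd = H100_POWER_KW * ELECTRICITY_USD_PER_KWH * HOURS_PER_YEAR

# Assumed all-in annual cost of one white-collar employee.
human_annual_usd = 80_000

print(f"H100 electricity per year: ${h100_annual_usd:,.0f}")
print(f"Human employee per year:   ${human_annual_usd:,.0f}")
print(f"Ratio: ~{human_annual_usd / h100_annual_usd:,.0f}x")
```

Under these assumed numbers the chip's marginal cost is two orders of magnitude below the human's, which is the asymmetry the argument turns on; plugging in different rates changes the ratio but rarely the conclusion.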

    24. SP

      Noah, would you say you're AGI-pilled to, in, in the sense that Dwarkesh described the term? And how do you u- understand, you know, we've talked a little bit about AI's effect on labor. Why don't you share your, wh- why you're perhaps a little bullish that there'll be a, you know, plenty for, for humans to do and that'll be more complementary?

    25. DP

      What is AGI-pilled?

    26. SP

      Well, just believe in d- d- Dwarkesh's, that it'll automate a huge swath of the economy. I mean-

    27. DP

      Or, or labor

    28. SP

      ... I'm, I am very unwilling to say, like, "Here's something technology will never be able to do."

    29. DP

      [laughs]

    30. SP

      I mean, that always seems like a bad bet. Here's two things people have been saying since the beginning of the Industrial Revolution, neither of which has ever remotely come close to being true, even in specific subdomains. Um, the first one is, "Here's a thing technology will never be able to do." And the second one is, "Human labor will be made obsolete." Those, people have been saying those two things, and you can just go, you can read it, you can even, you know, ask AI to go-

  5. 15:00 - 17:17

    Economic Growth Trajectories After AGI

    1. SP

      you know, Sam Altman was reflecting on his podcast with, uh, with Jack Altman the other week. He was saying, "You know, if you told me 10 years ago that we would have, um, you know, PhD level, um, you know, A- AI, I would think the world looks, looks a lot different."

    2. DP

      Right.

    3. SP

      But in fact it, it, it doesn't look that different.

    4. DP

      Yeah.

    5. SP

      And so is there, is there a potential where we, uh, have, you know, much, much more increased capabilities but actually the world does... It's like the, you know, Peter Thiel called the 1973 test or something. It's like we have these phones, but the world just looks-

    6. DP

      Right

    7. SP

      ... looks the same. We just have phones in our pockets.

    8. DP

      Yeah. I, I think if we have, like, chatbots that can answer hard math questions, I don't expect the world to look that different because the fraction of economic value that is generated by math is, like, extremely small. Um, I... But there's, like, other jobs that are much more mundane than quote-unquote PhD intelligence, which these... A chatbot just cannot do, right? A chatbot cannot edit videos for me. Um, and once those are automated, I actually expect a pretty crazy world because, uh, the big bottleneck to growth has been that human population can only increase at this slow clip. And in fact s- you know, one of the reasons that growth has slowed since the '70s is that in developing countries the population has, uh, plateaued. With AI, the popula- like, the capital and the labor are functionally equivalent, right? You can just, like, build more data centers or build more robot factories, and they can do real work or they can build more robot factories, and so you're gonna have this explosive dynamic. And once we get, like, that loop closed, I think it would just be, like, 20% growth plus.
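
The "loop" Dwarkesh describes, where capital and labor become functionally equivalent, can be sketched as a toy growth model: one economy whose effective labor grows only at a population-like rate, and one that reinvests a fraction of each year's output into more effective labor (robot factories building robot factories). The parameters are illustrative only, not a forecast:

```python
# Toy illustration of the capital-equals-labor growth loop: when output
# can be converted back into more workers, growth compounds on the
# reinvestment rate instead of being capped by population growth.

def simulate(years, labor0=100.0, productivity=1.0,
             pop_growth=0.01, reinvest_rate=0.0):
    """Return output after `years`, where a fraction of each year's
    output is converted into additional effective labor."""
    labor = labor0
    for _ in range(years):
        output = productivity * labor
        labor *= (1 + pop_growth)          # humans arrive slowly
        labor += reinvest_rate * output    # AI: output becomes more labor
    return productivity * labor

human_economy = simulate(30)                   # labor grows ~1%/yr
ai_economy = simulate(30, reinvest_rate=0.2)   # 20% of output reinvested

print(f"After 30 years: human-only {human_economy:,.0f}, "
      f"AI loop {ai_economy:,.0f}")
```

With a 20% reinvestment rate the loop compounds at roughly 21% a year, which is where claims like "20% growth plus" come from; the human-only economy merely creeps along at its population growth rate.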

    9. SP

      Do you see that feasible, possible, 20% growth? And T- Tyler I believe said 5%, right? He, he-

    10. DP

      0.5% more than the steady state.

    11. SP

      0.5%.

    12. DP

      Oh, yeah.

    13. SP

      0.5%.

    14. DP

      More.

    15. SP

      And, and, and, and what's the argument? Just that, um... Yeah, what, what is the argument for that?

    16. DP

      For Tyler's argument? Uh, bottlenecks. I think the problem with that argument is that, like, there's always bottlenecks, right? So you could have said before the Industrial Revolution, well, we will never 10X the rate of growth because there will be bottlenecks, and that doesn't tell you what... Like, you empirically have to just, like, look at the fraction of the economy that'll be bottlenecked and, like, what is the fraction that's not and then, like, actually derive the rate of growth. Um, the, the, the, the, the fact that there's bottlenecks doesn't tell you... Like, yeah, okay, there will be, like-

    17. SP

      Is he mostly referring to, uh, regulation? Or, or-

    18. DP

      Yeah, and just that, like, we live in a fallen world and people will have to use the AIs and there... Yeah, things like that.

    19. SP

      Who'll be buying

  6. 17:17 - 31:14

    Consumer Demand in an AI-Driven Economy

    1. SP

      all the stuff? So, so background, in economics GDP is what people are willing to pay for.

    2. DP

      Right.

    3. SP

      Who will be buying the stuff in a world where we get 20% growth?

    4. DP

      Uh, first of all, I don't know. You could have said, um, in 10,000 BC, like, the economy's gonna be a billion times bigger in, um, in 10,000 years. Like, what does it mean to produce a billion times more stuff than we're producing right now? Who is buying all this, like, stuff? And it just, like, you, you can't, like, predict that in advance.

    5. SP

      In 1700s I could tell you exactly who was buying stuff. It was everybody. You know, peasants. Like, I could tell you, like... You know, we, we, there... In fact, people wrote these, these things in the, about, around 1900 about what the world would look like in 100 years. You know-

    6. DP

      Yeah

    7. SP

      ... what, what, what we'll have. They didn't get exactly the right things-

    8. DP

      Right

    9. SP

      ... right that we'll have, but they correctly identified that it would be regular consumers who would be buying all these things, regular people. And so that would, that came true. It was obvious. But here's my point. Here's my point. Suppose that 99% of people do not have a job and are not getting paid an income-

    10. DP

      Yeah

    11. SP

      ... and all the money is going to sort of-Sam Altman, Elon Musk, and like five other guys.

    12. DP

      Right.

    13. SP

      Okay? And, and, and their captive AIs that they own because for some reason our property rights system still exists.

    14. DP

      Right.

    15. SP

      But okay, suppose that that's, that's the future we're contemplating, right? And so 99% of people or more don't have any job, they don't have any income, they're out on the street, and yet you're saying 20% growth a year. That growth is defined by people, consumers paying for things and saying, "Here is-

    16. DP

      And I wouldn't define it just-

    17. SP

      ... here's my money. Take my money."

    18. DP

      ... just as people.

    19. SP

      Okay, so then-

    20. DP

      I would just define it as like the, the ra- I mean, I-

    21. SP

      We will have AI producing agents

    22. DP

      ... assume the AIs are creating each other. Yeah, and, and that was like-

    23. SP

      No, no, that doesn't, that doesn't count in GDP. Only final good- only final consumer spending

    24. DP

      Okay, so we're like launching the Dyson spheres. We're not allowed to count that because like the AIs are doing it. I mean, like I wanna, I wanna know what the solar system will look like. I don't care like what like the semantics of that are, and I think the better way to capture what is physically happening is just like including the AIs when calculating numbers.

    25. SP

      Why will they do that? Why will they do that? Why will they do any of that?

    26. DP

      One argument is simply that if there's any agent, AI or human, who cares about colonizing the galaxy, um, even if like 99% of agents don't care about that, if like one agent cares, they can go do it. Colonizing the galaxy is a lot of growth 'cause the galaxy's really big, right? So it's very easy for me to imagine if like Sam Altman decides to launch the probes, how like, you know, breaking down Mars and sending out the, the, the virus probes like is like generates 20% growth.

    27. SP

      I think what you're getting at here is that AI will have to have property rights. AI, AI agents will have to be able to have-

    28. DP

      But even if a human wants to use AI-

    29. SP

      ... autonomous control of resources

    30. DP

      ... even if, even I, I guess it w- de- it depends on what you mean by autonomous. Today we already have-

  7. 31:14 - 31:58

    Redistribution, UBI, and the Future of Income

    1. SP

      to do about that.

    2. DP

      Yeah, 100%. The hopeful situa- case here is the way our society currently treats retirees and old people who are not generating any economic value anymore, and if you just look at, like, the percent of your paycheck that's going, basically being transferred to old people, it's like, I don't know, 25% or something. Um, and, uh, you're willing to do this because they have a lot of political power. They've used that political power in order to, uh, lock in these advantages. They're not, like, so overwhelming you're like, "I'm gonna go to, like, Costa Rica instead." You're like, "Okay, I had to pay this money, I had to pay this concession, I'll do it." And hopefully humans can occupy that sort of like, um, can, can have, be in a similar position to this massive AI economy-

    3. SP

      Mm-hmm

    4. DP

      ... that old people today have in, um, in today's economy.

  8. 31:58 - 41:21

    Human Roles and the Evolving Meaning of Work

    1. SP

      All right.

    2. ET

      What do humans do? Uh, so, you know, let's say they get some, some money. They, like, they have enough to live. Uh, h- how do they spend their time? Is it art, religion, poetry-

    3. DP

      Podcasting

    4. ET

      ... drugs?

    5. DP

      Podcasting.

    6. SP

      [laughs]

    7. DP

      It's the final job. [laughs]

    8. ET

      Yeah, we're, we're out of the, out of the curve here.

    9. DP

      [laughs] Or it's a-

    10. SP

      We-

    11. DP

      We're the m- last man of history.

    12. ET

      [laughs]

    13. SP

      Wait, so here's an idea. How about sovereign wealth fund?

    14. DP

      Uh-huh.

    15. SP

      Okay, so sovereign wealth fund, we, uh, we tax Sam Altman and Elon Musk.

    16. DP

      Yeah.

    17. SP

      We tax them.

    18. ET

      We're using Sam as a metaphor here. He's a friend of the firm, you know. [laughs]

    19. SP

      Yeah, yeah, yeah. We tax him. We tax Mark and, and Ben. And so then we-

    20. DP

      [laughs]

    21. SP

      ... we use, we use their money to-

    22. DP

      Only the friends of the show will be taxed.

    23. SP

      Right.

    24. DP

      [laughs]

    25. SP

      We, we, we use that money to buy, we use that money to buy, like, um, shares in the things that those people have. So they get their money back [laughs] 'cause we're buying the shares back from them. Okay, so it's okay.

    26. DP

      Yeah.

    27. SP

      And then, and then we hire them.

    28. DP

      Yeah.

    29. SP

      Because then what we do is we hire a number of firms, including a16z, and pay them two and 20 or whatever-

    30. DP

      [laughs]

  9. 41:21 - 45:43

    Technology, Society, and the Human Future

    1. DP

      have access to."

    2. SP

      Of course, I mean, this discussion may be academic because I believe that, you know, you said that we got phones and the world looked the same. I mean, no, it doesn't. Phones have destroyed the human race.

    3. DP

      [chuckles]

    4. SP

      Like, the fertility crash that's happening all around the world-

    5. DP

      Right

    6. SP

      ... nobody has replacement level fertility. Fertility is going far below replacement everywhere, uh, because of technology. And, uh-

    7. ET

      Is that the phone or the pill or-

    8. SP

      It, it, oh, well, no, it's the phone. I mean, well, no, the, the, the pill a- and other things like-

    9. ET

      Education

    10. SP

      ... women's education, whatever-

    11. ET

      [chuckles]

    12. SP

      ... like, lowered fertility, like, quite a bit, but some countries were still at replacement level, some were still around replacement level. But the crash we've seen since everybody got phones is epic and is just unbounded. Like, uh, you know, the human race does not have a desire, a collective desire to perpetuate itself. Um, we can, you know, yes, we're gonna get lonely, but we'll have company through AI and through the internet, uh, social media, you know, until there's just a few of us and we dwindle and dwindle. Um, yeah, I mean, like, technology has already destroyed the human race, and basically UBI is just, like, keeping us around on life support for a little while, while we, while that plays out.

    13. DP

      So I, I, I have a take about, like-I do think so far there's been a lot of negative effects from, you know, widespread TikTok use or whatever that, like, w-we're still pr- you know, like, learning about. Um, I am somewhat optimistic that in the long run there's some optimistic vision here that could work. Um, just because right now the ratio of, um... Like, it's impossible for Steven Spielberg to make every single TikTok, uh, and direct it in a sort of really compelling way that's, like, genuine content and not just video games at the bottom and some, like, you know, music video at the top. Um, in the future, it might genuinely be possible to give every single person their own dedicated Steven Spielberg and create, like, incredibly compelling but long narrative arcs that include other people they know, et cetera.

    14. SP

      Oh, yeah.

    15. DP

      So in the long run, I'm like, maybe this-

    16. SP

      Let's make that happen.

    17. DP

      I don't think TikTok is, like, the best possible medium.

    18. SP

      No.

    19. DP

      Um-

    20. SP

      But I don't think, I also don't think TikTok is unique in destroying the human race. I think that, um, interacting online instead of interacting in person, that's, that's the great-

    21. DP

      How do you make your money, Noah?

    22. SP

      That's the, that's the great filter. [laughs]

    23. DP

      [laughs]

    24. SP

      How do you make your money? Go ahead.

    25. DP

      I agree. [laughs]

    26. SP

      We're all making, we're all part of the problem-

    27. DP

      We're making money destroying our species

    28. SP

      ... but that's, but, but, but-
      You don't think we can isolate it-
      No, it doesn't matter-
      ... to dating apps and sort of...
      No, I'm saying, like, I'm saying, like, as long as you can get your... You know, why did humans perpetuate the human species? It was not because they wanted to see the human species perpetuated. It was because it's like, "Oop, I had sex and there came a baby." And that's, that's done. We've severed that.
      Right. And that's, that's- But-
      That is the end. We did not evolve to want our species to continue.
      Right, but y-you're saying the, the reasons why we're not having babies is because we're, we can make friends on the internet, but is it that dating apps have created just a much more efficient market and, and thus there isn't the same-
      Maybe-
      ... pair, pair bond?
      I don't know. I mean, like, you know, people are having less sex. Uh, you know, if, if Elon gets his way, everybody will just sit there gooning to some sort of, um-
      [laughs]
      ... Grok companion thing.
      The goonpocalypse, uh, seem-
      Yeah. [laughs] Seems upon us.
      But, but, but, like-

    29. DP

      Is this available right now?

    30. SP

      [laughs]

  10. 45:43–54:04

    AGI Timelines and Forecasting Horizons

    1. SP

      You've had some people on the podcast, you had the AI 2027 folks who believe that AGI is perhaps two years away, may- I think they updated to three years away, and then you've also had some folks on who said it's not for 30-something years. Maybe you could steelman both arguments and then, uh, share where you net out.

    2. DP

      Yeah. So two years, if I'm steelmanning them, is that, look, if you just look at the progress over the last few years, it's reasoning. This is, like, Aristotle's thing: the thing that makes humans human is reasoning, and it was just not that hard, right? Like, train on math and code problems, um, and have it, like, think for a second, and you get reasoning. Like, that's crazy. So what is the secret thing that we won't get?

    3. SP

      Right.

    4. DP

      Um-

    5. SP

      Can I ask a stupid question?

    6. DP

      Yeah.

    7. SP

      Why is it that, uh, stuff like o3-type models, why are those called reasoning models but, like, GPT-4o is not called reasoning? What are they doing differently that's reasoning?

    8. DP

      I, one, I think it's, um, like, GPT-3 can technically do a lot of the things GPT-4 can, but GPT-4 just does them way more reliably, and I think this is even more true of reasoning models relative to GPT-4o, where, like, 4o can solve math problems, and in fact, like, modern-day 4o has probably been trained a lot on math and code. But the original GPT-4 just wasn't trained that much on math and code problems, so, like, it didn't have whatever meta-circuits there are for, like, how do you backtrack? How do you be like, "Wait, I'm on the wrong track, I gotta go back, I gotta, like-

    9. SP

      Right

    10. DP

      ... I gotta pursue the solution this way"?

    11. SP

      Algorithmically, I have a, you know, okay idea of-

    12. DP

      Right

    13. SP

      ... what a reasoning model does that the non-reasoning models-

    14. DP

      Right

    15. SP

      ... don't. But in terms of how that maps to a thing that we call reasoning, what is the definition of what it means to reason that these people are using, the operational definition here? Like, 'cause I don't understand that myself.

    16. DP

      I mean, 4o can't get a gold in, uh, IMO.

    17. SP

      Okay, but, but-

    18. DP

      Uh, or whatever

    19. SP

      ... I, I, I can reason and I can't get a gold in IMO, but I can reason.

    20. DP

      Yeah, but, like, I don't think-

    21. SP

      As can the clerk at the checkout

    22. DP

      ... I can't get a gold either, but I don't think I can reason as well as a math Olympiad medalist, at least in the relevant domain. I agree that reasoning is not just about mathematics, but, um, this is true of any word you come up with. Like, the zebra: what about the thing where a zebra and a giraffe have a baby? Is that still a zebra? Like, I agree there's edge cases to everything, but there's a general conceptual category of zebra, um, and I think there's, like, a general conceptual category of reasoning.

    23. SP

      Okay, I'm just wondering what it is. Like, what... No, I'm saying, like, when you have a checkout clerk, right?

    24. DP

      Right.

    25. SP

      That checkout clerk would look at an IMO problem and be like, "What?"

    26. DP

      [laughs]

    27. SP

      But then, like, you have a checkout clerk, and the checkout clerk, you're like, you know, okay, so you put the thing on this shelf, and someone looked for it and didn't find it, so something else must have happened.

    28. DP

      Sure, but I think a reasoning model-

    29. SP

      That's reasoning

    30. DP

      ... I think a reasoning model will be more reliable and be better at solving that kind of problem than 4o.

  11. 54:04–57:37

    The Challenge of Predicting AI's Path

    1. DP

      what we're talking about here.

    2. ET

      Yeah.

    3. SP

      Yeah. That, that leads into another thing that I've thought about, which is how poor our track record for making predictions about the future of AI-

    4. DP

      Yeah

    5. SP

      ... has, has been. Um, the first time you and I hung out, uh, I don't know if you remember this, was with Leopold.

    6. DP

      Yeah. Oh, really?

    7. SP

      Yeah.

    8. DP

      Yeah. I remember this.

    9. SP

      It was, it was at your old house.

    10. DP

      Yes.

    11. SP

      And, um, and Leopold was just pronouncing a whole bunch of pronouncements, uh, from the couch.

    12. DP

      Yeah.

    13. SP

      And, um, he released this big Situational Awareness thing.

    14. DP

      Right.

    15. SP

      And, um, I would say that that wasn't-- How long ago was that? A year and a half?

    16. DP

      Yeah.

    17. SP

      Yeah. I would say that already most of the things he predicted have been invalidated or made irrelevant-

    18. DP

      Really?

    19. SP

      ... uh, in the last year and a half. Like, especially all the stuff about competition with China.

    20. DP

      Mm.

    21. SP

      Uh, you know, like, it turns out filtration was able to get them a whole lot of things that he never predicted. So many of the things, other than just the idea that AI would keep getting better, which he predicted and a lot of people predicted... I feel like a lot of the specific predictions about US capabilities and Chinese capabilities, what would be the bottlenecks, and what would be the things where, like, here's how we can compete-

    22. DP

      Yeah

    23. SP

      ... with China, that's all been proven wrong since.

    24. DP

      Um, I think this is actually an interesting trend in the history of science, where some of the scientists who were the smartest in thinking about the progression of the atom bomb, or the progression of physics, just had these ideas like: the only way we can sustain this is if we have one world government after World War II.

    25. SP

      Right.

    26. DP

      Uh, there's no other way we can deal with this new technology. I do think, relative to the technological predictions, uh, Leo- you know, like, I think the main way in which he's been wrong is that it didn't take, like, breaking into the servers in order to learn how o3 or something works. It was just... Sorry. Um, just publicly being able to use the model, knowing that-

    27. SP

      You can talk to it and learn what it knows.

    28. DP

      Like, just knowing a reasoning model works, and then you can use it and you see, like, oh, what is the latency? Like, how fast is it outputting tokens? That will teach you, like, how big is the model. Like, you learn a lot just from publicly using a model and knowing a thing is possible. Um, he has been right in one big way, which is, like, he identified three key things that would be required to get us from GPT-4 to, like, an AGI kind of thing, which was, um, being able to think, so test-time compute, uh, onboarding, which-

    29. SP

      Did he talk about test-time compute in that document?

    30. DP

      Yeah, yeah.

  12. 57:37–1:07:10

    Nationalization and the Global AI Race

    1. DP

      explosion.

    2. SP

      One of the other Leopold predictions was, was nationalization.

    3. DP

      Yeah.

    4. SP

      Um, is, is that something you could potentially foresee in the next few years?

    5. DP

      I don't think it's, uh, politically plausible, um, especially given this administration. I don't think it's desirable. Um, first, I think it would, like, just drastically slow down AI progress, because, look, this is not 1945 America, and also building an atom bomb is, like, a way easier project, uh, than building AGI.

    6. SP

      But China's quasi-nationalizing most of its in-- I mean, it-- And China doesn't control BYD's day-to-day decisions about what to build, but then if China says, you know, "Do this," BYD does it, as does every Chinese company.

    7. DP

      I mean, that's kind of the relationship American companies have with the US government as well.

    8. SP

      You think so?

    9. DP

      I mean, somewhat. I- also the, the big difference is w-what do we mean by nationalization? There's one thing which is like there's a party cadre who is, uh-

    10. SP

      In your company sitting-

    11. DP

      Exactly. There's another, which is that, um, each province is, like, just pouring a bunch of money into building their own competitor to BYD, uh, in this, you know, potentially wasteful way. That, like, distributed competitive process seems like the opposite of nationalization to me. Like, when people imagine AGI nationalization, I don't think they're saying, like, Montana will have their AGI and Wyoming will have their AGI, and they'll all compete against each other. I think they imagine that all the labs will merge, which is actually the opposite of how China does industrial policy.

    12. SP

      But then you do think that the American government, basically, if it, it says, "Do this," then, like, xAI and, and OpenAI will do it.

    13. DP

      Um, n-no. So actually, I think in that way, obviously, the Chinese system and the US system are different. Like, um, although it has been interesting to see that whenever, um... I don't know. You-- We've, we've noticed the way that different lab leaders have changed their tweets in the, in the aftermath of the election. I mean, also-

    14. SP

      Yeah

    15. DP

      ... I-th-

    16. SP

      More bullish up at source.

    17. DP

      Right.

    18. SP

      Yeah.

    19. DP

      And, uh, didn't Sam have a thing where... I think previously he said that AI will take jobs, how do we deal with this? And then didn't he recently say something at a panel like, "I think President Trump is correct, that AI will, like, you know, create jobs or something"? Where, like, I don't think in the long run he believes this.

    20. SP

      Um, but the reason why humans should be excited even about their jobs being taken is just that they'll be so rich, why would they even need them?

    21. DP

      Yeah. Yeah.

    22. SP

      Much richer than they are now.

    23. DP

      Right. Modulo the redistribution, slash not fucking it over with some, you know, guild-like thing.

    24. SP

      Yeah. The, um-- You mentioned the atomic bomb, and we also mentioned off camera that you don't think the nuke is a good comparison for how it plays out when a lab figures out AGI. What then happens? Is there a huge advantage if one country has it first, or if one lab has it first, do they-

    25. DP

      Mm

    26. SP

      ... they dominate? How does it play out?

    27. DP

      I think it's less like the nuclear bomb, where there's a self-contained technology that is so obviously, um, relevant to specifically this, like, offensive capability. And you can say, well, there's nuclear power as well, but nuclear power is just, like, this very self-contained thing, whereas I think intelligence is much more like the Industrial Revolution, where there's not, like, this one machine that is the Industrial Revolution. Um, it is just this broader process of growth and automation and, um, and so forth. But, that being said-

    28. SP

      So Brad DeLong's right and Robert Gordon is wrong. Robert Gordon said it's just, like, four things. Four big things.

    29. DP

      Oh, really? Interesting.

    30. SP

      And then Brad DeLong is like, "No, it's a process of discovery." So anyway.

  13. 1:07:10–1:09:31

    Brand and Network Effects in AI Dominance

    1. DP

      is, like, what are the network effects here?

    2. SP

      Right.

    3. DP

      Um, and, and what is the density? And, and it seems often-

    4. SP

      Network effects

    5. DP

      ... to be brand, uh-

    6. SP

      Yeah. Yeah. I, I mean, I'm not sure that's a network effect.

    7. DP

      Right. No, no. It isn't. [laughs]

    8. SP

      But, but brand, like, everybody just sort of, you know... OpenAI, ChatGPT is the Kleenex of AI, in that a Kleenex is actually called a tissue, but we call it a Kleenex because there was a company called Kleenex.

    9. DP

      Where are you going with this?

    10. SP

      [laughs]

    11. DP

      Are we back to the Grok doing anything?

    12. SP

      The goonpocalypse. [laughs]

    13. SP

      Oh, no. Well, um, no-

    14. DP

      Sorry

    15. SP

      ... I'm, I'm just saying it's, uh, or, or what's a, what's another example? Xerox.

    16. DP

      Yeah.

    17. SP

      You, you make a, you Xerox this thing.

    18. DP

      Yeah.

    19. SP

      Xerox is just one company that makes a copier.

    20. DP

      Mm.

    21. SP

      Right? Uh, not even the biggest one. But, like, everybody knows what it is to Xerox a thing.

    22. DP

      Right.

    23. SP

      And so, like, ChatGPT gets massive rents from the fact that everyone just says like, "I'll use AI. What's an AI? ChatGPT."

    24. DP

      Right.

    25. SP

      "I'll use it." And so, like, brand is, is the most important thing.

    26. DP

      But, but I think that's mostly due to the fact that-

    27. SP

      So far

    28. DP

      ... this, uh, key capability of learning on the job has not been unlocked. And so-

    29. SP

      Right

    30. DP

      ... for, um, I, I don't know-

  14. 1:09:31–1:10:07

    Final Thoughts and Preparation for What’s Next

    1. DP

      How do you put a price on that?

    2. SP

      Yes. [laughs]

    3. SP

      Yeah.

    4. DP

      Is there anything we... Any last words for the audience, based on our conversation? I don't know. I read your stuff a bunch, so it's great to actually just talk in person.

    5. SP

      Thanks, man. Yeah. Um, I have to come out with an English-language book so I can do the podcast. I've written a Japanese-language book, published in Japan.

    6. DP

      [laughs]

    7. SP

      But, uh-

    8. DP

      That's great

    9. SP

      ... I have to write my English language one-

    10. DP

      Nice. Nice

    11. SP

      ... and then I can do the Dwarkesh Podcast, one of my dreams.

    12. DP

      Amazing. Amazing.

    13. ET

      Noah, Dwarkesh, thank you so much for coming on. It's been great.

    14. DP

      Awesome. Thanks, Erik. [outro music]

Episode duration: 1:10:18


Transcript of episode pjC6C8gfUps
