
AGI is still 30 years away — Ege Erdil & Tamay Besiroglu
Ege Erdil (guest), Tamay Besiroglu (guest), Dwarkesh Patel (host)
Why AGI Might Be Decades Away: Intelligence Isn’t the Bottleneck Anymore
Dwarkesh Patel interviews Ege Erdil and Tamay Besiroglu about why they expect AGI and full remote-work automation closer to the 2040s, not the 2020s, despite rapid recent AI progress.
They argue that intelligence and reasoning alone won’t drive an “intelligence explosion”; instead, economic growth depends on complementary factors like compute, infrastructure, data, institutions, and broad deployment across sectors.
They discuss limits to software‑only singularity stories, emphasizing how hardware, energy, supply chains, and regulation constrain further scaling, and why AI R&D itself is heavily compute- and experiment-bottlenecked.
The conversation also explores explosive economic growth, AI-native firms, central planning, long-run value lock‑in, AI takeover scenarios, and how to think and plan under extreme uncertainty about the future.
Key Takeaways
Intelligence alone won’t cause an automatic “intelligence explosion.”
Erdil and Besiroglu compare “intelligence explosion” to calling the Industrial Revolution a “horsepower explosion”: raw capability increased, but the real transformation came from many complementary changes—new institutions, infrastructure, sectors, and supply chains. ...
Compute and hardware scaling are central bottlenecks for future AI capabilities.
They note AI progress has roughly tracked 9–10 orders of magnitude of compute increase since AlexNet, and estimate perhaps only 3–4 more orders are realistically left before hitting hard constraints like energy, fabs, and capital expenditure. ...
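The scaling claim above is easy to sanity-check with a back-of-envelope calculation. The sketch below is illustrative only: the AlexNet training-compute figure is a rough outside estimate, not a number from the episode, and the order-of-magnitude midpoints simply split the ranges the guests cite.

```python
# Back-of-envelope check of the compute-scaling claim.
# All constants are assumptions for illustration, not quotes from the episode.
ALEXNET_FLOP = 5e17   # rough estimate of AlexNet's 2012 training compute
OOM_SO_FAR = 9.5      # midpoint of the "9-10 orders of magnitude" claim
OOM_LEFT = 3.5        # midpoint of the "3-4 more orders" estimate

frontier_now = ALEXNET_FLOP * 10**OOM_SO_FAR   # implied frontier training run today
ceiling = frontier_now * 10**OOM_LEFT          # implied hard ceiling

print(f"implied frontier today: ~{frontier_now:.0e} FLOP")
print(f"implied hard ceiling:   ~{ceiling:.0e} FLOP")
print(f"remaining multiplier:   ~{10**OOM_LEFT:,.0f}x")
```

On these assumptions, only a few-thousand-fold increase remains before energy, fab capacity, and capital expenditure become binding, which is why the guests treat compute as a central bottleneck rather than a free parameter.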
Current models are impressive reasoners but poor at genuine innovation and agency.
Large reasoning models can beat most humans on math or coding problems yet have not produced even modestly novel mathematical concepts or robust, long-horizon agentic behavior in open-ended environments—suggesting there’s still “a lot left to intelligence” beyond what we see now.
Automating AI R&D is far harder than automating narrow coding tasks.
They argue R&D requires messy long-horizon judgment, agenda-setting, and conceptual innovation, not just solving closed benchmarks. ...
Explosive economic growth is plausible once AI substitutes broadly for human labor.
If AI workers can be trained once, copied arbitrarily, and run on hardware whose cost they can quickly repay (like an H100 matching a human’s lifetime compute), then labor and capital can scale together. ...
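The payback logic in this takeaway can be made concrete with a hypothetical calculation. Every number below is an assumption chosen for illustration (the episode does not give specific prices or wages): a rough H100 purchase price, an assumed operating cost, and an assumed wage for the human labor replaced.

```python
# Hypothetical payback period for an AI worker running on one accelerator.
# All figures are assumptions for illustration, not from the episode.
H100_COST = 30_000           # USD, rough accelerator purchase price (assumed)
POWER_COST_PER_HOUR = 0.50   # USD/hour, electricity + overhead (assumed)
WAGE_EQUIVALENT = 50.0       # USD/hour of human labor replaced (assumed)

net_per_hour = WAGE_EQUIVALENT - POWER_COST_PER_HOUR
payback_hours = H100_COST / net_per_hour

print(f"net value per hour:  ${net_per_hour:.2f}")
print(f"payback period:      ~{payback_hours:.0f} hours (~{payback_hours / 24:.0f} days)")
```

Under these assumptions the hardware pays for itself in under a month of continuous operation, which is the mechanism behind the claim that labor and capital can then scale together.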
AI-native firms will be superhuman mainly through collective structure, not one godlike mind.
They expect the real advantage to come from copyable agents, perfect knowledge transfer, alignment of incentives, and massive scales of coordinated computation—e. ...
Under extreme uncertainty, flexibility and institutional quality matter more than rigid grand plans.
Given how often expert views and empirical frontiers shift, they caution against brittle, centralized strategies (like sweeping pauses or nationalization) based on any one forecast. ...
Notable Quotes
“It’s kind of like calling the Industrial Revolution a horsepower explosion.”
— Tamay Besiroglu
“Intelligence isn’t the bottleneck. Making contact with the real world and getting a lot of data from experiments and from deployment just has this drastic impact.”
— Ege Erdil
“Just think about the sheer scale of knowledge that these models have… it is actually quite remarkable that there’s no innovation that comes out of that.”
— Ege Erdil
“The world today is not bottlenecked by not having enough good reasoning.”
— Tamay Besiroglu
“I would just say it’s much more important to maintain flexibility and ability to adapt than it is to get a specific plan that’s going to be correct.”
— Ege Erdil
Questions Answered in This Episode
If reasoning alone isn’t the bottleneck, what concrete capabilities or infrastructure do we most need to unlock before remote work can be fully automated?
How would we know empirically that we’ve hit the real limits of compute scaling, rather than just current economic or political limits?
What observations over the next five years would most strongly update their AGI timelines earlier—or later?
How could we rigorously test whether algorithmic progress can decouple from hardware scaling in a meaningful way?
If AI-native firms with copyable agents emerge, what governance or ownership structures could keep their power aligned with broadly human values?
Transcript Preview
Just think about the sheer scale of knowledge that these models have. It is actually quite remarkable that there's no, like, innovation that comes out of that. Has a reasoning model ever come up with a math concept that even seems, like, slightly interesting to a human mathematician? I- I've never seen that.
Intelligence isn't the bottleneck. Making contact with the real world and getting a lot of data from experiments and from deployment just has this drastic impact.
There's just, like, this enormous amount of richness and detail in the real world that you just can't, like, reason about.
Right.
Like, you- you- you need to see it.
Today I'm chatting with Tamay Besiroglu and Ege Erdil. They were previously running Epoch AI and are now, uh, launching Mechanize, which is a company dedicated to automating all work. One of the interesting points you made recently, Tamay, is that the whole idea of the intelligence explosion is mistaken or misleading. W- why don't you explain what you were talking about there?
Yeah. I think it's not a very useful concept.
Mm-hmm.
Um, it's kind of like calling the Industrial Revolution a horsepower explosion. Like, sure, during the Industrial Revolution we saw this drastic acceleration in raw physical power, but there are many other things that were maybe equally important in explaining the acceleration of growth and technological change that we saw during the Industrial Revolution.
Uh, w- what is a way to characterize the broader set of things that the horsepower perspective would miss about the Industrial Revolution?
So I- I think in the case of the Industrial Revolution, it was a bunch of these complementary changes to many different sectors in the economy. So you had agriculture, you had transportation, you had law and finance, you had urbanization and moving from rural areas into- into cities. Um, there were just many different innovations that-
Mm.
... kind of, you know, happened simultaneously that gave rise to this, um, change in the- the way of economically organizing our society. It wasn't just that we had, uh, more horsepower. That, I mean that was part of it, but that's not the, kind of central thing to focus on when thinking about the Industrial Revolution. And I think similarly for the development of AI, sure, we'll get, like, a lot of very smart AI systems, but that will be one part among very many-
Hm.
... different moving parts that explain, you know, why we expect to get this transition and this acceleration in growth and technological change.
Yeah. I- I wanna better understand how you think about that broader transformation. Um, before we do, the other really interesting part of your worldview is that you have longer timelines to get to AGI than most of the people in San Francisco who think about AI. Um, when do you expect a drop-in remote worker replacement?