Dwarkesh Podcast: AGI is still 30 years away — Ege Erdil & Tamay Besiroglu
At a glance
WHAT IT’S REALLY ABOUT
Why AGI Might Be Decades Away: Intelligence Isn’t the Bottleneck Anymore
- Dwarkesh Patel interviews Ege Erdil and Tamay Besiroglu about why they expect AGI and full remote-work automation closer to the 2040s, not the 2020s, despite rapid recent AI progress.
- They argue that intelligence and reasoning alone won’t drive an “intelligence explosion”; instead, economic growth depends on complementary factors like compute, infrastructure, data, institutions, and broad deployment across sectors.
- They discuss limits to software‑only singularity stories, emphasizing how hardware, energy, supply chains, and regulation constrain further scaling, and why AI R&D itself is heavily compute- and experiment-bottlenecked.
- The conversation also explores explosive economic growth, AI-native firms, central planning, long-run value lock‑in, AI takeover scenarios, and how to think and plan under extreme uncertainty about the future.
IDEAS WORTH REMEMBERING
5 ideas

Intelligence alone won’t cause an automatic “intelligence explosion.”
Erdil and Besiroglu compare “intelligence explosion” to calling the Industrial Revolution a “horsepower explosion”: raw capability increased, but the real transformation came from many complementary changes—new institutions, infrastructure, sectors, and supply chains. AI progress will similarly depend on far more than just smarter models.
Compute and hardware scaling are central bottlenecks for future AI capabilities.
They note AI progress has roughly tracked 9–10 orders of magnitude of compute increase since AlexNet, and estimate perhaps only 3–4 more orders are realistically left before hitting hard constraints like energy, fabs, and capital expenditure. Many “unlocks” likely require more compute, not just better ideas.
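The scaling-headroom claim above can be made concrete with a back-of-envelope sketch. The ~9–10 orders of magnitude since AlexNet and the ~3–4 remaining are figures from the conversation; the elapsed-years window and the assumption of a constant historical pace are illustrative simplifications, not claims from the episode:

```python
# Back-of-envelope on compute scaling headroom.
# OOM figures are from the episode; the pace extrapolation is illustrative.
ooms_since_alexnet = 9.5          # ~9-10 orders of magnitude since 2012
years_elapsed = 2025 - 2012       # AlexNet (2012) to roughly now (assumption)
ooms_per_year = ooms_since_alexnet / years_elapsed

ooms_remaining = 3.5              # their estimate of ~3-4 OOM of headroom
years_left = ooms_remaining / ooms_per_year

print(f"Historical pace: ~{ooms_per_year:.2f} OOM/year")
print(f"At that pace, ~{years_left:.1f} years of scaling headroom")
```

On these numbers the remaining headroom lasts only around five years at the historical pace, which is why they argue that further “unlocks” will increasingly have to come from somewhere other than raw compute growth.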
Current models are impressive reasoners but poor at genuine innovation and agency.
Large reasoning models can beat most humans on math or coding problems yet have not produced even modestly novel mathematical concepts or robust, long-horizon agentic behavior in open-ended environments—suggesting there’s still “a lot left to intelligence” beyond what we see now.
Automating AI R&D is far harder than automating narrow coding tasks.
They argue R&D requires messy long-horizon judgment, agenda-setting, and conceptual innovation, not just solving closed benchmarks. Empirical work suggests algorithmic progress is tightly coupled to compute and experiments; flooding the field with “AI researchers” doesn’t automatically yield hyperbolic growth.
Explosive economic growth is plausible once AI substitutes broadly for human labor.
If AI workers can be trained once, copied arbitrarily, and run on hardware whose cost they can quickly repay (like an H100 matching a human’s lifetime compute), then labor and capital can scale together. That combination historically supports much faster growth than simply adding capital to a fixed human workforce.
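The payback logic in that argument is simple enough to sketch. All of the dollar figures below are illustrative assumptions of mine, not numbers from the episode; the point is only that when the labor value an AI worker produces exceeds its running cost, the hardware pays for itself quickly, so capital can be recycled into more “workers”:

```python
# Illustrative hardware-payback sketch; every number here is an assumption.
hardware_cost = 30_000       # rough accelerator purchase price, USD (assumption)
annual_run_cost = 5_000      # power + hosting per year, USD (assumption)
wage_equivalent = 60_000     # annual value of the labor replaced, USD (assumption)

net_annual_value = wage_equivalent - annual_run_cost
payback_years = hardware_cost / net_annual_value

print(f"Payback period: ~{payback_years:.2f} years")
```

Under these assumed numbers the hardware repays itself in well under a year, which is the mechanism behind “labor and capital scaling together”: each deployed AI worker quickly funds the next unit of hardware, rather than output being capped by a fixed human workforce.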
WORDS WORTH SAVING
5 quotes

It’s kind of like calling the Industrial Revolution a horsepower explosion.
— Tamay Besiroglu
Intelligence isn’t the bottleneck. Making contact with the real world and getting a lot of data from experiments and from deployment just has this drastic impact.
— Ege Erdil
Just think about the sheer scale of knowledge that these models have… it is actually quite remarkable that there’s no innovation that comes out of that.
— Ege Erdil
The world today is not bottlenecked by not having enough good reasoning.
— Tamay Besiroglu
I would just say it’s much more important to maintain flexibility and ability to adapt than it is to get a specific plan that’s going to be correct.
— Ege Erdil
High-quality AI-generated summary, created from a speaker-labeled transcript.