Dwarkesh Podcast: Leopold Aschenbrenner — 2027 AGI, the China/US superintelligence race, & the return of history
At a glance
WHAT IT’S REALLY ABOUT
Leopold Aschenbrenner forecasts 2027 AGI and geopolitical superintelligence struggle
- Leopold Aschenbrenner argues that AI progress is following a predictable compute-and-algorithm scaling curve that plausibly leads to AGI around 2027–2028, driven by trillion‑dollar, multi‑gigawatt data centers. He believes that once AIs can fully automate top AI researchers, an “intelligence explosion” will rapidly yield superintelligence, compressing decades of technological progress into a few years. This, in turn, makes AI decisive for national power, pushing the U.S. and China into a high‑stakes race involving espionage, industrial mobilization, and potentially government‑run “AGI projects.” He also recounts internal tensions at OpenAI around safety, security, and governance, and explains why he’s launching an investment firm focused on AGI-era situational awareness.
IDEAS WORTH REMEMBERING
5 ideas
AI progress is tightly coupled to massive, sustained growth in compute and capital expenditure.
Leopold extrapolates roughly half an order of magnitude per year in effective training compute, leading from today’s hundreds‑million‑dollar clusters to 10‑gigawatt, hundred‑billion‑dollar clusters by the late 2020s and potentially a $1T, 100‑gigawatt “trillion‑dollar cluster” around 2030.
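The extrapolation above is straightforward compound growth. The sketch below illustrates it under two simplifying assumptions not stated in the episode: a ~$300M baseline cluster cost in 2024, and cluster cost tracking effective training compute one-for-one (in practice, algorithmic efficiency gains mean cost likely grows slower than effective compute).

```python
# Illustrative sketch of the scaling trend Leopold describes:
# effective training compute grows ~0.5 orders of magnitude
# (a factor of 10**0.5, about 3.16x) per year.
# BASE_COST_USD and BASE_YEAR are assumed figures, not from the episode.

BASE_YEAR = 2024
BASE_COST_USD = 300e6   # assumption: "hundreds-of-millions-dollar" cluster today
OOM_PER_YEAR = 0.5      # half an order of magnitude per year

def projected_cost(year: int) -> float:
    """Naive projection: cluster cost scales with effective compute."""
    return BASE_COST_USD * 10 ** (OOM_PER_YEAR * (year - BASE_YEAR))

for year in (2024, 2026, 2028, 2030):
    print(f"{year}: ~${projected_cost(year):,.0f}")
```

At this rate the projection reaches tens of billions by 2028 and hundreds of billions by 2030, in the same ballpark as the hundred-billion-dollar and trillion-dollar clusters discussed, with the exact crossover year depending on the assumed baseline.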
Breaking through the ‘data wall’ will likely require self‑play, synthetic data, and test‑time compute overhang, not just more internet text.
He argues current pre‑training has nearly exhausted high‑quality web data, so future gains depend on models learning to think longer (millions of tokens), correct themselves, and generate their own training signals through RL and self‑improvement.
AGI as ‘drop‑in remote worker’ could arrive before deeply integrated intermediate tools do.
Because integrating today’s tools takes organizational “schlep,” companies may delay adoption until models are powerful and agentic enough to function like autonomous coworkers—planning, executing multi‑week projects, and using computers—creating a sudden “sonic boom” in productivity.
Automating top AI researchers could trigger a rapid intelligence explosion toward superintelligence.
Once systems can fully replace people like Alec Radford or leading DeepMind/OpenAI engineers, you can run tens or hundreds of millions of such AI researchers in parallel, plausibly achieving a decade’s worth of ML progress in a year and quickly surpassing human intelligence across domains.
Geopolitics will likely shift from ‘cool products’ to survival of political systems.
Leopold contends superintelligence will be militarily decisive, potentially yielding Gulf‑War‑style kill ratios or even enabling neutralization of nuclear deterrence; this makes whether liberal democracy or the CCP controls superintelligence central to the 21st‑century world order.
WORDS WORTH SAVING
5 quotes
What will be at stake will not just be cool products, but whether liberal democracy survives, whether the CCP survives, what the world order for the next century will be.
— Leopold Aschenbrenner
2023 was the moment for me where it went from AGI as a theoretical abstract thing to like, I see it, I feel it. I can see the cluster where it’s trained, the rough combination of algorithms, the people, how it’s happening.
— Leopold Aschenbrenner
You really think there’ll be like a private company? And the government would be like, ‘Oh my God, what is going on?’
— Leopold Aschenbrenner
People don’t realize how intense state‑level espionage can be. We have literal superintelligence on our cluster making Stuxnet at the Chinese data centers.
— Leopold Aschenbrenner
Would you do the Manhattan Project in the UAE?
— Leopold Aschenbrenner
High quality AI-generated summary created from speaker-labeled transcript.