Dwarkesh Podcast

Leopold Aschenbrenner — 2027 AGI, China/US super-intelligence race, & the return of history

Chatted with my friend Leopold Aschenbrenner about the trillion-dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project. Read the new essay series from Leopold this episode is based on here: https://situational-awareness.ai/

EPISODE LINKS

* Transcript: https://www.dwarkeshpatel.com/p/leopold-aschenbrenner
* Apple Podcasts: https://podcasts.apple.com/us/podcast/leopold-aschenbrenner-china-us-super-intelligence-race/id1516093381?i=1000657821539
* Spotify: https://open.spotify.com/episode/5NQFPblNw8ewxKolIDpiYN?si=6NaTHAugT2SxZrspW3lziw
* Follow me on Twitter: https://twitter.com/dwarkesh_sp
* Follow Leopold on Twitter: https://x.com/leopoldasch

TIMESTAMPS

00:00:00 The trillion-dollar cluster and unhobbling
00:21:20 AI 2028: The return of history
00:41:15 Espionage & American AI superiority
01:09:09 Geopolitical implications of AI
01:32:12 State-led vs. private-led AI
02:13:12 Becoming Valedictorian of Columbia at 19
02:31:24 What happened at OpenAI
02:46:00 Intelligence explosion
03:26:47 Alignment
03:42:15 On Germany, and understanding foreign perspectives
03:57:53 Dwarkesh's immigration story and path to the podcast
04:03:16 Random questions
04:08:47 Launching an AGI hedge fund
04:20:03 Lessons from WWII
04:29:57 Coda: Frederick the Great

Leopold Aschenbrenner (guest) · Dwarkesh Patel (host)
Jun 4, 2024 · 4h 32m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Leopold Aschenbrenner forecasts 2027 AGI and geopolitical superintelligence struggle

Leopold Aschenbrenner argues that AI progress is following a predictable compute-and-algorithm scaling curve that plausibly leads to AGI around 2027–2028, driven by trillion-dollar, multi-gigawatt data centers. He believes that once AIs can fully automate top AI researchers, an “intelligence explosion” will rapidly yield superintelligence, compressing decades of technological progress into a few years. This, in turn, makes AI decisive for national power, pushing the U.S. and China into a high-stakes race involving espionage, industrial mobilization, and potentially government-run “AGI projects.” He also recounts internal tensions at OpenAI around safety, security, and governance, and explains why he’s launching an investment firm focused on AGI-era situational awareness.

IDEAS WORTH REMEMBERING

5 ideas

AI progress is tightly coupled to massive, sustained growth in compute and capital expenditure.

Leopold extrapolates roughly half an order of magnitude of compute growth per year (with algorithmic progress adding roughly as much again in effective training compute), taking us from today’s clusters costing hundreds of millions of dollars to 10-gigawatt, hundred-billion-dollar clusters by the late 2020s, and potentially to a 100-gigawatt, trillion-dollar cluster around 2030.
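For intuition, here is a minimal back-of-envelope sketch of that compounding in Python. The 2024 baseline (~$300M cluster, ~30 MW) and the assumption that cost and power scale in lockstep with compute are illustrative choices, not figures from the episode:

```python
# Compound roughly half an order of magnitude (~3.16x) of growth per year
# in cluster cost and power draw. All baseline values are assumptions
# chosen for illustration, not numbers quoted in the episode.
BASE_YEAR = 2024
BASE_COST_USD = 3e8    # assumed ~$300M training cluster today
BASE_POWER_GW = 0.03   # assumed ~30 MW power draw today
GROWTH = 10 ** 0.5     # half an OOM per year, ~3.16x

for year in range(BASE_YEAR, BASE_YEAR + 8):
    factor = GROWTH ** (year - BASE_YEAR)
    print(f"{year}: ~${BASE_COST_USD * factor / 1e9:,.1f}B cluster, "
          f"~{BASE_POWER_GW * factor:,.2f} GW")
# By 2030-31 this compounding lands near the $1T, 100 GW scale described above.
```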

Breaking through the ‘data wall’ will likely require self-play, synthetic data, and unlocking the test-time compute overhang, not just more internet text.

He argues current pre‑training has nearly exhausted high‑quality web data, so future gains depend on models learning to think longer (millions of tokens), correct themselves, and generate their own training signals through RL and self‑improvement.

AGI as ‘drop‑in remote worker’ could arrive before deeply integrated intermediate tools do.

Because integrating today’s tools takes organizational “schlep,” companies may delay adoption until models are powerful and agentic enough to function like autonomous coworkers—planning, executing multi‑week projects, and using computers—creating a sudden ‘sonic boom’ in productivity.

Automating top AI researchers could trigger a rapid intelligence explosion toward superintelligence.

Once systems can fully replace people like Alec Radford or leading DeepMind/OpenAI engineers, you can run tens or hundreds of millions of such AI researchers in parallel, plausibly achieving a decade’s worth of ML progress in a year and quickly surpassing human intelligence across domains.
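As a rough illustration of the arithmetic behind that claim, consider the sketch below. Every input (fleet throughput, a human researcher’s “thinking speed” in tokens) is an assumption chosen for the example, not a number from the episode:

```python
# Back-of-envelope: how many "researcher-equivalents" an inference fleet
# could run in parallel. All inputs are illustrative assumptions.
fleet_tokens_per_sec = 1e8   # assumed aggregate inference throughput
human_tokens_per_sec = 10    # assumed serial "thinking speed" of one person

researcher_equivalents = fleet_tokens_per_sec / human_tokens_per_sec
print(f"~{researcher_equivalents:,.0f} parallel researcher-equivalents")
# -> ~10,000,000; running 24/7 rather than 8-hour days multiplies effective
#    headcount further, which is the intuition behind "tens of millions."
```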

Geopolitics will likely shift from ‘cool products’ to survival of political systems.

Leopold contends superintelligence will be militarily decisive, potentially yielding Gulf‑War‑style kill ratios or even enabling neutralization of nuclear deterrence; this makes whether liberal democracy or the CCP controls superintelligence central to the 21st‑century world order.

WORDS WORTH SAVING

5 quotes

What will be at stake will not just be cool products, but whether liberal democracy survives, whether the CCP survives, what the world order for the next century will be.

Leopold Aschenbrenner

2023 was the moment for me where it went from AGI as a theoretical abstract thing to like, I see it, I feel it. I can see the cluster where it’s trained, the rough combination of algorithms, the people, how it's happening.

Leopold Aschenbrenner

You really think there’ll be like a private company? And the government would be like, ‘Oh my God, what is going on?’

Leopold Aschenbrenner

People don’t realize how intense state‑level espionage can be. We have literal superintelligence on our cluster making Stuxnet at the Chinese data centers.

Leopold Aschenbrenner

Would you do the Manhattan Project in the UAE?

Leopold Aschenbrenner

TOPICS

* Scaling laws, compute growth, and trillion-dollar AI clusters
* Timelines to AGI and the intelligence explosion via automated AI researchers
* Data constraints (“data wall”), unhobbling, and long-horizon agentic behavior
* U.S.–China geopolitical competition and state-level espionage around AI
* Nationalization vs. private labs: governance, safety, and security trade-offs
* OpenAI internal politics, superalignment team, and weight/security concerns
* Post-AGI economic, industrial, and financial implications; Leopold’s investment firm
