Leopold Aschenbrenner — 2027 AGI, China/US super-intelligence race, & the return of history

Dwarkesh Podcast · Jun 4, 2024 · 4h 32m

Leopold Aschenbrenner (guest), Dwarkesh Patel (host), Narrator

- Scaling laws, compute growth, and trillion‑dollar AI clusters
- Timelines to AGI and the intelligence explosion via automated AI research
- Data constraints (“data wall”), unhobbling, and long‑horizon agentic behavior
- U.S.–China geopolitical competition and state‑level espionage around AI
- Nationalization vs. private labs: governance, safety, and security trade‑offs
- OpenAI internal politics, superalignment team, and weight/security concerns
- Post‑AGI economic, industrial, and financial implications; Leopold’s investment firm

In this episode of the Dwarkesh Podcast, Dwarkesh Patel interviews Leopold Aschenbrenner about his forecast of AGI by 2027 and the coming geopolitical struggle over superintelligence.

Leopold Aschenbrenner forecasts 2027 AGI and geopolitical superintelligence struggle

Leopold Aschenbrenner argues that AI progress is following a predictable compute-and-algorithm scaling curve that plausibly leads to AGI around 2027–2028, driven by trillion‑dollar, multi‑gigawatt data centers. He believes that once AIs can fully automate top AI researchers, an “intelligence explosion” will rapidly yield superintelligence, compressing decades of technological progress into a few years. This, in turn, makes AI decisive for national power, pushing the U.S. and China into a high‑stakes race involving espionage, industrial mobilization, and potentially government‑run “AGI projects.” He also recounts internal tensions at OpenAI around safety, security, and governance, and explains why he’s launching an investment firm focused on AGI-era situational awareness.

Key Takeaways

AI progress is tightly coupled to massive, sustained growth in compute and capital expenditure.

Leopold extrapolates roughly half an order of magnitude per year in effective training compute, leading from today’s hundreds‑million‑dollar clusters to 10‑gigawatt, hundred‑billion‑dollar clusters by the late 2020s and potentially a $1T, 100‑gigawatt “trillion‑dollar cluster” around 2030.
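The extrapolation above is simple compound growth: half an order of magnitude per year means multiplying by about 3.16× annually. As a rough illustration (not from the episode; the 2022 starting point of a $100M-class cluster is an assumed baseline), the projected cluster cost can be sketched as:

```python
def cluster_cost(year, base_year=2022, base_cost=1e8, oom_per_year=0.5):
    """Projected frontier-cluster cost in dollars, assuming steady growth
    of `oom_per_year` orders of magnitude per year from an assumed
    $100M-class cluster in 2022."""
    return base_cost * 10 ** (oom_per_year * (year - base_year))

for year in (2024, 2026, 2028, 2030):
    print(year, f"${cluster_cost(year):,.0f}")
# 2028 lands near $100B (the "hundred-billion-dollar cluster");
# 2030 lands near $1T (the "trillion-dollar cluster").
```

Four orders of magnitude over eight years (2022–2030) is exactly what carries a hundreds‑of‑millions‑of‑dollars cluster to a trillion‑dollar one.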

Breaking through the ‘data wall’ will likely require self‑play, synthetic data, and test‑time compute overhang, not just more internet text.

He argues current pre‑training has nearly exhausted high‑quality web data, so future gains depend on models learning to think longer (millions of tokens), correct themselves, and generate their own training signals through RL and self‑improvement.

AGI as ‘drop‑in remote worker’ could arrive before deeply integrated intermediate tools do.

Because integrating today’s tools takes organizational “schlep,” companies may delay adoption until models are powerful and agentic enough to function like autonomous coworkers—planning, executing multi‑week projects, and using computers—creating a sudden ‘sonic boom’ in productivity.

Automating top AI researchers could trigger a rapid intelligence explosion toward superintelligence.

Once systems can fully replace people like Alec Radford or leading DeepMind/OpenAI engineers, you can run tens or hundreds of millions of such AI researchers in parallel, plausibly achieving a decade’s worth of ML progress in a year and quickly surpassing human intelligence across domains.

Geopolitics will likely shift from ‘cool products’ to survival of political systems.

Leopold contends superintelligence will be militarily decisive, potentially yielding Gulf‑War‑style kill ratios or even enabling neutralization of nuclear deterrence; this makes whether liberal democracy or the CCP controls superintelligence central to the 21st‑century world order.

Security and secrecy around algorithms and weights are as critical as raw compute for maintaining a lead.

He stresses that current labs are at ‘startup‑level’ security, making state espionage trivial; protecting algorithmic tricks to cross the data wall and AGI‑capable weights could mean the difference between the U. ...

National‑scale coordination or partial nationalization of frontier AI may be hard to avoid.

He doubts private labs will control literal superintelligence and WMD‑enabling capabilities without deep state involvement; he envisions a Manhattan‑Project‑style or Warp‑Speed‑style public–private effort, with democratic checks and an inner coalition of allied democracies leading development.

Notable Quotes

What will be at stake will not just be cool products, but whether liberal democracy survives, whether the CCP survives, what the world order for the next century will be.

Leopold Aschenbrenner

2023 was the moment for me where it went from AGI as a theoretical abstract thing to like, I see it, I feel it. I can see the cluster where it’s trained, the rough combination of algorithms, the people, how it's happening.

Leopold Aschenbrenner

You really think there’ll be like a private company? And the government would be like, ‘Oh my God, what is going on?’

Leopold Aschenbrenner

People don’t realize how intense state‑level espionage can be. We have literal superintelligence on our cluster making Stuxnet at the Chinese data centers.

Leopold Aschenbrenner

Would you do the Manhattan Project in the UAE?

Leopold Aschenbrenner

Questions Answered in This Episode

How robust is Leopold’s AGI‑by‑2027 forecast to failures of key assumptions, like solving the data wall or achieving effective long‑horizon RL?

Leopold Aschenbrenner argues that AI progress is following a predictable compute-and-algorithm scaling curve that plausibly leads to AGI around 2027–2028, driven by trillion‑dollar, multi‑gigawatt data centers. ...

What concrete security, governance, and legal structures would be needed for a U.S.‑led ‘AGI project’ that is both effective and compatible with liberal democracy?

If alignment techniques mainly help powerful actors better control AI systems, how do we prevent them from also enabling more stable dictatorships or mass surveillance regimes?

In practice, how could the U.S. balance climate concerns, permitting bottlenecks, and national security imperatives when deciding where and how fast to build 10–100‑gigawatt AI clusters?

What empirical milestones over the next 2–3 years (model capabilities, economic impacts, political reactions) would most update you toward or away from Leopold’s situational awareness picture?

Transcript Preview

Leopold Aschenbrenner

What will be at stake will not just be cool products, but whether liberal democracy survives, whether the CCP survives, what the world order for the next century will be. The CCP is gonna have an all-out effort to, like, infiltrate American AI labs, billions of dollars, thousands of people. CCP is gonna try to out-build us. People don't realize, like, how intense state-level espionage can be. And we have, like, literal super intelligence on our cluster, making, like, Stuxnet at the Chinese data centers. You really think they'll be, like, a private company? And the government would be like, "Oh my God, what is going on?" I do think it is incredibly important that these clusters are in the United States. I mean, would you do the Manhattan Project in the UAE, right? 2023 was the sort of moment for me where it went from kind of AGI as a sort of theoretical abstract thing, you'd make the models to like, I see it, I feel it. I can see the cluster where it's trained on, like, the rough combination of algorithms, the people, like, how it's happening. And I think, you know, most of the world is not, you know, most of the people who feel it are, like, right here, you know?

Dwarkesh Patel

(laughs)

Leopold Aschenbrenner

Right? Uh...

Dwarkesh Patel

Okay. Today I'm chatting with my friend Leopold Aschenbrenner. He grew up in Germany, graduated valedictorian of Columbia when he was 19, and then he had a very interesting gap year, which we'll talk about, and then he was on the OpenAI Superalignment team. May it rest in peace. And now, he, uh, with some anchor investments from Patrick and John Collison and Daniel Gross and Nat Friedman is launching an investment firm. So Leopold, I know you're off to a s- slow start, but life is long and I wouldn't worry about it too much. You'll make up for it in due time. Um, but thanks for coming on the podcast.

Leopold Aschenbrenner

Thank you. You know, I, um, I first discovered your podcast when your best episode had, you know, like, a couple hundred views.

Dwarkesh Patel

(laughs)

Leopold Aschenbrenner

Uh, and so it's just been, it's been amazing to follow your trajectory, and it's a delight to be on.

Dwarkesh Patel

Yeah, yeah. Well, I, uh, I think, uh, in the Sholto and Trenton episode, I t- mentioned that a lot of the things I've learned about AI, I've learned from talking with them. And the third part of this triumvirate, probably the most significant in terms of the things that I've learned about AI, has been you. We've got all this stuff on the record now.

Leopold Aschenbrenner

(laughs) Great, great.

Dwarkesh Patel

Uh, okay, first thing I have to get on record. Tell me about the trillion-dollar cluster. But, but by the way, I should mention, so the context of this podcast is today there's, you're releasing a series called Situational Awareness. We're gonna get into it. First question about that is, tell me about the trillion-dollar cluster.
