Dwarkesh Podcast

AI 2027: month-by-month model of intelligence explosion — Scott Alexander & Daniel Kokotajlo

Scott Alexander and Daniel Kokotajlo break down every month from now until the 2027 intelligence explosion. Scott is the author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety. We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress. I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them. I highly recommend checking out their new scenario planning document: https://ai-2027.com/. And Daniel's "What 2026 looks like," written in 2021: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

EPISODE LINKS

• Transcript: https://www.dwarkesh.com/p/scott-daniel
• Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381
• Spotify: https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF?si=6efdf727ae6c48ae

SPONSORS

• WorkOS helps today’s top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic, and hundreds more use WorkOS to quickly integrate features required by enterprise buyers. To learn more about how you can make the leap to enterprise, visit https://workos.com
• Jane Street likes to know what's going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at https://janestreet.com/dwarkesh
• Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at https://scale.com/dwarkesh

To sponsor a future episode, visit https://dwarkesh.com/advertise

TIMESTAMPS

00:00:00 - AI 2027
00:07:45 - Forecasting 2025 and 2026
00:15:30 - Why LLMs aren't making discoveries
00:25:22 - Debating intelligence explosion
00:50:34 - Can superintelligence actually transform science?
01:17:43 - Cultural evolution vs superintelligence
01:24:54 - Mid-2027 branch point
01:33:19 - Race with China
01:45:36 - Nationalization vs private anarchy
02:04:11 - Misalignment
02:15:41 - UBI, AI advisors, & human future
02:23:49 - Factory farming for digital minds
02:27:41 - Daniel leaving OpenAI
02:36:04 - Scott's blogging advice

Scott Alexander (guest) · Daniel Kokotajlo (guest) · Dwarkesh Patel (host)
Apr 3, 2025 · 3h 5m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
April 3, 2025
Duration
3h 5m
Channel
Dwarkesh Podcast
Watch on YouTube


SPEAKERS

  • Scott Alexander

    guest
  • Daniel Kokotajlo

    guest
  • Dwarkesh Patel

    host
  • Narrator

    other

EPISODE SUMMARY

Scott Alexander and Daniel Kokotajlo discuss their AI 2027 project, a month-by-month scenario of how current systems could plausibly scale into AGI by 2027 and superintelligence by 2028. They outline a detailed “intelligence explosion” driven by increasingly capable coding and research agents that accelerate algorithmic progress and industrial deployment. The conversation explores alignment failure modes, geopolitical race dynamics between the U.S. and China, nationalization vs. corporate control, and how power might centralize around a small set of actors. They also reflect on meta-topics like forecasting, transparency, whistleblowing, and the broader societal, economic, and moral implications of rapidly advancing AI.

RELATED EPISODES

David Reich – Bronze Age shock, the Neanderthal puzzle, & the sudden spread of farming

Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat

Dario Amodei — “We are near the end of the exponential”

Andrej Karpathy — “We’re summoning ghosts, not building animals”

Why Leonardo was a saboteur, Gutenberg went broke, and Florence was weird – Ada Palmer

Richard Sutton – Father of RL thinks LLMs are a dead end
