Dwarkesh Podcast

AI 2027: month-by-month model of intelligence explosion — Scott Alexander & Daniel Kokotajlo

Scott Alexander and Daniel Kokotajlo break down every month from now until the 2027 intelligence explosion. Scott is the author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety.

We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress. I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them. I highly recommend checking out their new scenario planning document: https://ai-2027.com/. And Daniel's "What 2026 looks like," written in 2021: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

EPISODE LINKS

  * Transcript: https://www.dwarkesh.com/p/scott-daniel
  * Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381
  * Spotify: https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF?si=6efdf727ae6c48ae

SPONSORS

  * WorkOS helps today’s top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic, and hundreds more use WorkOS to quickly integrate the features enterprise buyers require. To learn more about making the leap to enterprise, visit https://workos.com
  * Jane Street likes to know what's going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at https://janestreet.com/dwarkesh
  * Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at https://scale.com/dwarkesh

To sponsor a future episode, visit https://dwarkesh.com/advertise

TIMESTAMPS

  00:00:00 - AI 2027
  00:07:45 - Forecasting 2025 and 2026
  00:15:30 - Why LLMs aren't making discoveries
  00:25:22 - Debating intelligence explosion
  00:50:34 - Can superintelligence actually transform science?
  01:17:43 - Cultural evolution vs superintelligence
  01:24:54 - Mid-2027 branch point
  01:33:19 - Race with China
  01:45:36 - Nationalization vs private anarchy
  02:04:11 - Misalignment
  02:15:41 - UBI, AI advisors, & human future
  02:23:49 - Factory farming for digital minds
  02:27:41 - Daniel leaving OpenAI
  02:36:04 - Scott's blogging advice

Scott Alexander (guest) · Daniel Kokotajlo (guest) · Dwarkesh Patel (host)
Apr 3, 2025 · 3h 5m · Watch on YouTube ↗

CHAPTERS

  1. 7:45 – 15:30

    Forecasting 2025 and 2026

  2. 15:30 – 25:22

    Why LLMs aren't making discoveries

  3. 25:22 – 50:34

    Debating intelligence explosion

  4. 50:34 – 1:17:43

    Can superintelligence actually transform science?

  5. 1:17:43 – 1:24:54

    Cultural evolution vs superintelligence

  6. 1:24:54 – 1:33:19

    Mid-2027 branch point

  7. 1:33:19 – 1:45:36

    Race with China

  8. 1:45:36 – 2:04:11

    Nationalization vs private anarchy

  9. 2:04:11 – 2:15:41

    Misalignment

  10. 2:15:41 – 2:23:49

    UBI, AI advisors, & human future

  11. 2:23:49 – 2:27:41

    Factory farming for digital minds

  12. 2:27:41 – 2:36:04

    Daniel leaving OpenAI

  13. 2:36:04 – 3:05:16

    Scott's blogging advice
