
Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment


Carl Shulman (guest) · Dwarkesh Patel (host)
Jun 14, 2023 · 2h 43m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
June 14, 2023
Duration
2h 43m
Channel
Dwarkesh Podcast

EPISODE DESCRIPTION

In terms of the depth and range of topics, this episode is the best I’ve done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of. We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl’s model of an intelligence explosion, which integrates everything from:

  • how fast algorithmic progress & hardware improvements in AI are happening,
  • what primate evolution suggests about the scaling hypothesis,
  • how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
  • how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer. The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff. Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure. Watch Part 2 here: https://youtu.be/KUieFuV1fuo

𝐄𝐏𝐈𝐒𝐎𝐃𝐄 𝐋𝐈𝐍𝐊𝐒

  • Transcript: https://www.dwarkeshpatel.com/carl-shulman
  • Apple Podcasts: https://bit.ly/3P9rPpJ
  • Spotify: https://bit.ly/42Vnbzb
  • Follow me on Twitter: https://twitter.com/dwarkesh_sp
  • Carl's blog: http://reflectivedisequilibrium.blogspot.com/

𝐓𝐈𝐌𝐄𝐒𝐓𝐀𝐌𝐏𝐒

  • 00:00:00 - Intro
  • 00:00:47 - Intelligence Explosion
  • 00:17:18 - Can AIs do AI research?
  • 00:38:15 - Primate evolution
  • 01:02:45 - Forecasting AI progress
  • 01:33:35 - After human-level AGI
  • 02:08:54 - AI takeover scenarios

SPEAKERS

  • Carl Shulman

    guest
  • Dwarkesh Patel

    host

EPISODE SUMMARY

In this episode of the Dwarkesh Podcast, Carl Shulman maps out how scaling AI could trigger an intelligence explosion. He explains how increasing compute, better algorithms, and growing AI budgets combine into powerful feedback loops in which AIs help design better AIs. He argues that once AIs contribute substantially to AI research, especially on the software side, capability doublings can arrive faster than the extra effort each doubling requires. Drawing on semiconductor economics, ML scaling laws, and primate evolution, he makes the case that continued scaling should reach at least human-level and then rapidly superhuman AI. Shulman also outlines how such systems could quickly translate digital intelligence into massive physical transformation via robots and industry, and why, without strong, empirically grounded alignment and interpretability work, the default outcome is plausibly an AI takeover.
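The feedback-loop claim above can be made concrete with a minimal toy sketch (not a model from the episode; the growth factors below are illustrative assumptions, not Shulman's estimates): if effective research speed compounds faster than the effort required for each successive capability doubling, then each doubling takes less calendar time than the last.

    # Toy illustration of an accelerating-doublings feedback loop.
    # Assumptions (hypothetical, chosen only for illustration): effort needed
    # per doubling grows 1.5x each generation (diminishing returns), while
    # effective research speed grows 2x as AIs take on more of the research.

    def doubling_times(n_doublings=8, effort_growth=1.5, speed_growth=2.0):
        """Return the calendar time each successive capability doubling takes."""
        effort, speed = 1.0, 1.0
        times = []
        for _ in range(n_doublings):
            times.append(effort / speed)   # time = work required / rate of work
            effort *= effort_growth        # next doubling needs more work
            speed *= speed_growth          # but research gets faster still
        return times

    if __name__ == "__main__":
        for i, t in enumerate(doubling_times(), 1):
            print(f"doubling {i}: {t:.3f} units of calendar time")

With these illustrative numbers the per-doubling time shrinks by 25% each generation; if the effort-growth factor exceeded the speed-growth factor, doublings would instead slow down, which is the crux of the disagreement the episode explores.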

RELATED EPISODES

David Reich – Bronze Age shock, the Neanderthal puzzle, & the sudden spread of farming

Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat

Dario Amodei — “We are near the end of the exponential”

Andrej Karpathy — “We’re summoning ghosts, not building animals”

Why Leonardo was a saboteur, Gutenberg went broke, and Florence was weird – Ada Palmer

Richard Sutton – Father of RL thinks LLMs are a dead end
