
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371


Max Tegmark (guest) · Lex Fridman (host)
Apr 13, 2023 · 2h 48m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
April 13, 2023
Duration
2h 48m
Channel
Lex Fridman Podcast

EPISODE DESCRIPTION

Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors:

  • Notion: https://notion.com
  • InsideTracker: https://insidetracker.com/lex to get 20% off
  • Indeed: https://indeed.com/lex to get $75 credit

EPISODE LINKS:

  • Max's Twitter: https://twitter.com/tegmark
  • Max's Website: https://space.mit.edu/home/tegmark
  • Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments
  • Future of Life Institute: https://futureoflife.org

Books and resources mentioned:

  1. Life 3.0 (book): https://amzn.to/3UB9rXB
  2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch
  3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0

PODCAST INFO:

  • Podcast website: https://lexfridman.com/podcast
  • Apple Podcasts: https://apple.co/2lwqZIr
  • Spotify: https://spoti.fi/2nEwCF8
  • RSS: https://lexfridman.com/feed/podcast/
  • Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
  • Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:

  • 0:00 - Introduction
  • 1:56 - Intelligent alien civilizations
  • 14:20 - Life 3.0 and superintelligent AI
  • 25:47 - Open letter to pause Giant AI Experiments
  • 50:54 - Maintaining control
  • 1:19:44 - Regulation
  • 1:30:34 - Job automation
  • 1:39:48 - Elon Musk
  • 2:01:31 - Open source
  • 2:08:01 - How AI may kill all humans
  • 2:18:32 - Consciousness
  • 2:27:54 - Nuclear winter
  • 2:38:21 - Questions for AGI

SOCIAL:

  • Twitter: https://twitter.com/lexfridman
  • LinkedIn: https://www.linkedin.com/in/lexfridman
  • Facebook: https://www.facebook.com/lexfridman
  • Instagram: https://www.instagram.com/lexfridman
  • Medium: https://medium.com/@lexfridman
  • Reddit: https://reddit.com/r/lexfridman
  • Support on Patreon: https://www.patreon.com/lexfridman

SPEAKERS

  • Max Tegmark (guest)
  • Lex Fridman (host)

EPISODE SUMMARY

In this episode, Max Tegmark argues that advances like GPT‑4 suggest AGI and superintelligence may be very near, and that humanity is in a "suicide race" in which commercial and geopolitical pressures (Moloch) push labs to deploy ever more powerful systems faster than we can make them safe. He urges a six‑month pause on giant AI experiments.

RELATED EPISODES

Keoki Jackson: Lockheed Martin | Lex Fridman Podcast #33

Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49

Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics | Lex Fridman Podcast #64

Rohit Prasad: Amazon Alexa and Conversational AI | Lex Fridman Podcast #57

Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43

Christof Koch: Consciousness | Lex Fridman Podcast #2
