Lex Fridman Podcast

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable.

Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get a free product tour
- LMNT: https://drinkLMNT.com/lex to get a free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

TRANSCRIPT: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman's X: https://twitter.com/romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI Safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Roman Yampolskiy (guest) · Lex Fridman (host)
Jun 2, 2024 · 2h 15m

CHAPTERS

  1. 0:00 – 2:20

    Introduction

  2. 2:20 – 8:32

    Existential risk of AGI

  3. 8:32 – 16:44

    Ikigai risk

  4. 16:44 – 20:19

    Suffering risk

  5. 20:19 – 24:51

    Timeline to AGI

  6. 24:51 – 30:14

    AGI Turing test

  7. 30:14 – 43:06

    Yann LeCun and open source AI

  8. 43:06 – 45:33

    AI control

  9. 45:33 – 48:06

    Social engineering

  10. 48:06 – 57:57

    Fearmongering

  11. 57:57 – 1:04:30

    AI deception

  12. 1:04:30 – 1:11:29

    Verification

  13. 1:11:29 – 1:23:42

    Self-improving AI

  14. 1:23:42 – 1:29:59

    Pausing AI development

  15. 1:29:59 – 1:39:43

    AI Safety

  16. 1:39:43 – 1:45:05

    Current AI

  17. 1:45:05 – 1:52:24

    Simulation

  18. 1:52:24 – 1:53:57

    Aliens

  19. 1:53:57 – 2:00:17

    Human mind

  20. 2:00:17 – 2:09:23

    Neuralink

  21. 2:09:23 – 2:13:18

    Hope for the future

  22. 2:13:18 – 2:15:38

    Meaning of life
