Modern Wisdom

Shocking Ways AI Could End The World - Geoffrey Miller


Geoffrey Miller (guest) · Chris Williamson (host)
Jul 6, 2023 · 1h 20m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
July 6, 2023
Duration
1h 20m
Channel
Modern Wisdom

EPISODE DESCRIPTION

Geoffrey Miller is a professor of evolutionary psychology at the University of New Mexico, a researcher and an author.

Artificial Intelligence can process information thousands of times faster than humans. That has opened up massive possibilities, but it has also opened up a huge debate about the safety of creating a machine more intelligent and powerful than we are. Just how legitimate are the concerns about the future of AI?

Expect to learn the key risks that AI poses to humanity, the 3 biggest existential risks that will determine the future of civilisation, whether Large Language Models can actually become conscious simply by being more powerful, whether making an Artificial General Intelligence will be like creating a god or a demon, the influence that AI will have on the future of our lives and much more...

Sponsors:
  • Get 10% discount on all Gymshark’s products at https://bit.ly/sharkwisdom (use code: MW10)
  • Get over 37% discount on all products site-wide from MyProtein at https://bit.ly/proteinwisdom (use code: MODERNWISDOM)
  • Get 15% discount on Craftd London’s jewellery at https://craftd.com/modernwisdom (use code MW15)

Extra Stuff:
  • Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
  • To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

#ai #existentialrisk #chatgpt

Timestamps:
00:00 Intro
02:43 Is AI Viewed as an Existential Risk?
06:14 How the Perceived Risk of AI Has Evolved
15:47 Rapid Advancements in Neural Networks
20:07 The Next Level for ChatGPT
28:25 Helping the Public Understand the AI Arms Race
37:16 Who Will Be the First to Create AGI?
40:53 Is the Alignment Problem Still a Problem?
50:42 The Opposing View to AI Safety-ism
54:55 Most Concerning Current AI Advancements
59:36 How AI Will Influence Content & Social Media
1:04:57 How Accurate Can AI Expert Predictions Be?
1:08:40 Something Much Worse than Existential Risk?
1:11:43 Why AI Won’t Save the World
1:19:29 Where to Find Geoffrey

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify - https://spoti.fi/2LSimPn or Apple Podcasts - https://apple.co/2MNqIgw

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

SPEAKERS

  • Geoffrey Miller

    guest
  • Chris Williamson

    host

EPISODE SUMMARY

In this episode of Modern Wisdom, Chris Williamson speaks with Geoffrey Miller, who warns that AI progress is Russian roulette with humanity. Miller argues that rapidly advancing AI, especially large neural networks and prospective AGI, poses existential (x-risk) and even suffering (s-risk) threats comparable to or worse than nuclear war and engineered pandemics. He explains how not only superintelligent AGI but also near-term, narrower systems (bioweapon designers, deepfake engines, persuasive political AIs, and companion bots) could destabilize societies and erode human flourishing. Miller critiques tech leaders like Sam Altman and Marc Andreessen for underestimating these dangers while racing to build systems that may automate most human jobs, including malign ones like terrorism and propaganda. He advocates grassroots moral stigmatization of reckless AI development, a coordinated slowdown of the “AI arms race”, and a public that treats AI risk as a personal survival issue, while still embracing safe, narrow AI applications.

RELATED EPISODES

21 Harsh Truths About Why You’re Still Lost - Mark Manson

How TikTok Hijacked the Future of Music - Nik Nocturnal

DEBATE: Why Do Gen Z Women Hate Men So Much?

A Blueprint for Mastering Every Conversation - Jefferson Fisher

The Endless Pain Of Emotionally Mature Partners - Mercedes Coffman

The Uber Eats to OnlyFans Pipeline
