Modern Wisdom

Shocking Ways AI Could End The World - Geoffrey Miller

Geoffrey Miller is a professor of evolutionary psychology at the University of New Mexico, a researcher and an author.

Artificial intelligence can process information thousands of times faster than humans. That capability has opened up massive possibilities, but it has also sparked a huge debate about the safety of creating a machine more intelligent and powerful than we are. Just how legitimate are the concerns about the future of AI?

Expect to learn the key risks that AI poses to humanity, the three biggest existential risks that will determine the future of civilisation, whether large language models can actually become conscious simply by being more powerful, whether making an artificial general intelligence will be like creating a god or a demon, the influence that AI will have on the future of our lives and much more...

Sponsors:
Get 10% discount on all Gymshark's products at https://bit.ly/sharkwisdom (use code: MW10)
Get over 37% discount on all products site-wide from MyProtein at https://bit.ly/proteinwisdom (use code: MODERNWISDOM)
Get 15% discount on Craftd London's jewellery at https://craftd.com/modernwisdom (use code: MW15)

Extra Stuff:
Get my free Reading List of 100 books to read before you die: https://chriswillx.com/books/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

#ai #existentialrisk #chatgpt

Timestamps:
00:00 Intro
02:43 Is AI Viewed as an Existential Risk?
06:14 How the Perceived Risk of AI Has Evolved
15:47 Rapid Advancements in Neural Networks
20:07 The Next Level for ChatGPT
28:25 Helping the Public Understand the AI Arms Race
37:16 Who Will Be the First to Create AGI?
40:53 Is the Alignment Problem Still a Problem?
50:42 The Opposing View to AI Safety-ism
54:55 Most Concerning Current AI Advancements
59:36 How AI Will Influence Content & Social Media
1:04:57 How Accurate Can AI Expert Predictions Be?
1:08:40 Something Much Worse than Existential Risk?
1:11:43 Why AI Won't Save the World
1:19:29 Where to Find Geoffrey

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify (https://spoti.fi/2LSimPn) or Apple Podcasts (https://apple.co/2MNqIgw).

Get in touch in the comments below or head to:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

Geoffrey Miller (guest) · Chris Williamson (host)
Jul 5, 2023 · 1h 20m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Geoffrey Miller Warns: AI Progress Is Russian Roulette With Humanity

Geoffrey Miller argues that rapidly advancing AI, especially large neural networks and prospective AGI, poses existential (x-risk) and even suffering (s-risk) threats comparable to, or worse than, nuclear war and engineered pandemics. He explains how not just superintelligent AGI but also near-term, narrower systems (bioweapon designers, deepfake engines, persuasive political AIs, and companion bots) could destabilize societies and erode human flourishing. Miller critiques tech leaders like Sam Altman and Marc Andreessen for underestimating these dangers while racing to build systems that may automate most human jobs, including malign ones like terrorism and propaganda. He advocates grassroots moral stigmatization of reckless AI development, a coordinated slowdown of the "AI arms race," and a public that treats AI risk as a personal survival issue, while still embracing safe, narrow AI applications.

IDEAS WORTH REMEMBERING

5 ideas

Narrow, non-AGI systems can be catastrophically dangerous.

Miller stresses that you don’t need fully general, self-improving superintelligence to cause disaster; specialized AIs for bioweapons design, deepfakes, or financial and military tactics could trigger pandemics, nuclear war, or systemic chaos within a few years.

Speed and scalability give AI a decisive strategic edge over humans.

Future AI systems may operate 100–100,000 times faster than humans, enabling them to out-react us in trading, warfare, and decision loops; once given agency rather than just advisory roles, their speed creates overwhelming incentives to cede control.

AI development is effectively a high-stakes arms race toward a wall.

Framing AI as a "winner-takes-all" race between nations ignores that the likely "prize" could be shared extinction; Miller argues that if the end-state is catastrophic, the rational move for all players is coordinated slowdown, not acceleration.

Current alignment efforts mostly encode elite, Western tech-bro values.

He criticizes real-world “alignment” as aligning systems with the preferences of secular, liberal Bay Area developers rather than the diverse, often religious, global population—and almost never with non-human life or our own embodied biological interests.

The near-term information ecosystem will be radically manipulated by AI.

Miller predicts the 2024 US election will feature hyper-targeted AI-generated propaganda, deepfakes, and customized messaging based on detailed behavioral models, outstripping our "political immune system" and reshaping opinion formation.

WORDS WORTH SAVING

5 quotes

One in six is literally Russian roulette. So if you're playing Russian roulette and the head is not, like, a single individual's head but your whole species, that's what you're doing.

Geoffrey Miller

If it's an arms race into a concrete wall, it's not really a race that you want to engage in.

Geoffrey Miller

They don't really mean alignment with what people believe. They kind of sort of mean we want the AI to be aligned with what a good liberal, secular, humanist, democratic Bay Area tech bro values.

Geoffrey Miller

You don't need to go down the road towards AGI. There are hundreds of cases where you can have quite safe narrow AI applications that deliver huge quality-of-life benefits.

Geoffrey Miller

Are the people who are charging headlong into this, through some combination of greed and hubris and prestige, on our side—or are they just caught up in a delusional project that is at heart reckless and evil?

Geoffrey Miller

Existential and suffering risks from AI (x-risk and s-risk)
Narrow AI dangers vs. superintelligent AGI and singularity scenarios
Acceleration of capabilities in large language models and deep learning
Alignment problem and whose values AI should serve
Geopolitics and the AI arms race between the US, China, and others
Near-term societal impacts: elections, propaganda, social media, friend-bots, and loneliness
Moral stigmatization and grassroots opposition as an AI governance strategy

High quality AI-generated summary created from speaker-labeled transcript.
