Shocking Ways AI Could End The World  - Geoffrey Miller

Modern Wisdom · Jul 6, 2023 · 1h 20m

Geoffrey Miller (guest), Chris Williamson (host)

Existential and suffering risks from AI (x-risk and s-risk)
Narrow AI dangers vs. superintelligent AGI and singularity scenarios
Acceleration of capabilities in large language models and deep learning
The alignment problem and whose values AI should serve
Geopolitics and the AI arms race between the US, China, and others
Near-term societal impacts: elections, propaganda, social media, friend-bots, and loneliness
Moral stigmatization and grassroots opposition as an AI governance strategy

In this episode of Modern Wisdom, host Chris Williamson speaks with evolutionary psychologist Geoffrey Miller about the ways AI could end the world, from near-term misuse of narrow systems to existential risks from AGI.

Geoffrey Miller Warns: AI Progress Is Russian Roulette With Humanity

Geoffrey Miller argues that rapidly advancing AI, especially large neural networks and prospective AGI, poses existential (x-risk) and even suffering (s-risk) threats comparable to or worse than nuclear war and engineered pandemics. He explains how not just superintelligent AGI but also near-term, narrower systems (bioweapon designers, deepfake engines, persuasive political AIs, and companion bots) could destabilize societies and erode human flourishing. Miller critiques tech leaders like Sam Altman and Marc Andreessen for underestimating these dangers while racing to build systems that may automate most human jobs, including malign ones like terrorism and propaganda. He advocates grassroots moral stigmatization of reckless AI development, a coordinated slowdown of the "AI arms race," and a public that treats AI risk as a personal survival issue, while still embracing safe, narrow AI applications.

Key Takeaways

Narrow, non-AGI systems can be catastrophically dangerous.

Miller stresses that you don’t need fully general, self-improving superintelligence to cause disaster; specialized AIs for bioweapons design, deepfakes, or financial and military tactics could trigger pandemics, nuclear war, or systemic chaos within a few years.

Speed and scalability give AI a decisive strategic edge over humans.

Future AI systems may operate 100 to 100,000 times faster than humans, letting them out-react us in trading, warfare, and decision loops; once such systems are given agency rather than purely advisory roles, their speed creates overwhelming incentives for humans to cede control to them.

AI development is effectively a high-stakes arms race toward a wall.

Framing AI as a ‘winner-takes-all’ race between nations ignores that the likely “prize” could be shared extinction; Miller argues that if the end-state is catastrophic, the rational move for all players is coordinated slowdown, not acceleration.

Current alignment efforts mostly encode elite, Western tech-bro values.

He criticizes real-world “alignment” as aligning systems with the preferences of secular, liberal Bay Area developers rather than the diverse, often religious, global population—and almost never with non-human life or our own embodied biological interests.

The near-term information ecosystem will be radically manipulated by AI.

Miller predicts the 2024 US election will feature hyper-targeted AI-generated propaganda, deepfakes, and customized messaging based on detailed behavioral models, outstripping our ‘political immune system’ and reshaping opinion formation.

AI ‘friend-bots’ could undermine relationships, fertility, and the social fabric.

Highly personalized, infinitely patient AI companions may outcompete real partners and friends, leading to isolation and collapsing birth rates, and potentially provoking a moral or political backlash against such technologies.

Grassroots moral stigmatization may be faster than formal regulation.

Since policymakers are slow and easily captured, Miller advocates treating reckless frontier AI work like tobacco or arms trading—stigmatizing companies, investors, employees, and suppliers to withdraw talent, capital, and social license.

Notable Quotes

One in six is literally Russian roulette. So if you're playing Russian roulette and the head is not, like, a single individual's head but your whole species, that's what you're doing.

Geoffrey Miller

If it's an arms race into a concrete wall, it's not really a race that you want to engage in.

Geoffrey Miller

They don't really mean alignment with what people believe. They kind of sort of mean we want the AI to be aligned with what a good liberal, secular, humanist, democratic Bay Area tech bro values.

Geoffrey Miller

You don't need to go down the road towards AGI. There are hundreds of cases where you can have quite safe narrow AI applications that deliver huge quality-of-life benefits.

Geoffrey Miller

Are the people who are charging headlong into this, through some combination of greed and hubris and prestige, on our side—or are they just caught up in a delusional project that is at heart reckless and evil?

Geoffrey Miller

Questions Answered in This Episode

How should societies decide whose values—and which trade-offs—future AI systems are aligned to, given global moral and religious diversity?

What concrete mechanisms could realistically coordinate a global slowdown in frontier AI development, especially among geopolitical rivals?

Where should we draw the line between beneficial narrow AI and systems that are too close to AGI or too dangerous to deploy?

How can individuals and communities build resilience against AI-driven persuasion, deepfakes, and hyper-targeted political manipulation?

Is it ethically acceptable to forgo potentially life-extending medical breakthroughs from AGI if doing so significantly reduces extinction risk for future generations?

Transcript Preview

Geoffrey Miller

So what we're facing is potentially AI systems that are smarter than any human at doing a wide range of things, but also that are potentially 100, 1,000, maybe 100,000 times faster than humans. I think when we confront very fast, powerful, general-purpose AI systems, that's the kind of situation we're gonna be in. We're gonna be out-classed, not just in terms of intelligence, but also reaction speed.

Chris Williamson

Why are you, as an evolutionary psychologist researcher, talking about AI?

Geoffrey Miller

I just love getting into trouble and, uh, making, making trouble on Twitter. (laughs) No, I actually... Look, long story short, when I started, uh, my PhD program at Stanford in... way back in 1987, right? I was studying cognitive psychology and cognitive science, and pretty quickly, uh, we got one of the leading neural networks, uh, researchers at Stanford Psychology, a guy named David Rumelhart, who worked very extensively with Geoffrey Hinton and lots of other people. Um, and so I started in grad school doing a lot of work on neural networks and machine learning and, uh, genetic algorithms and so forth, and then I spent, um, most of my post-doctoral years at University of Sussex also in a cognitive science department doing autonomous robots and, uh, sort of applying genetic algorithms to evolve neural networks. So a long time ago, I was sort of an early adopter of machine learning, and then I got sidetracked into this evolutionary psychology thing, you know, for about 30 years. (laughs) But recently, um, uh, since about 2016, I've, I've become concerned and fascinated by rapid progress in, in AI, particularly deep learning and the large language models, and I sort of fell in with this gaggle of effective altruists, right? This movement who are quite concerned about, uh, existential risks, risks to all of humanity, and a lot of them were very concerned about how AI could play out. So, uh, the last few years, I've been reading a lot about this. Uh, since last summer, I've been publishing a bunch of essays on the Effective Altruism Forum and gotten pretty active on Twitter, uh, the last few months about AI x-risk. So to, to the casual observer, right? They might go, "Well, who is this psychology dude suddenly interested in AI?" Well, honestly, I've been fascinated by AI ever since I was a high schooler reading science fiction, you know, and ever since grad school learning about cognitive science.

Chris Williamson

Okay.

Geoffrey Miller

So that's my long story short, that wasn't actually very short.

Chris Williamson

Is it right to say that most of the researchers in the existential risk world see AI as one of the, if not the premier primary risk that we're facing?

Geoffrey Miller

Yeah, absolutely. There's a great book by Toby Ord, O-R-D, um, who's an Oxford moral philosopher, but he has worked a lot on existential risks. So he did a book called The Precipice, right? And he actually tries to quantify the different extinction risks that we face. Some of them are really, really low probability, but really hard to fight, like if there's a local gamma ray burster, right? Or, uh, a supernova. Very, very low likelihood, very hard to defend against. Other stuff like asteroids, right? Which get a lot of, uh, a- attention. Very, very low likelihood, like there's probably less than a one-in-a-million chance we're gonna get a dangerous asteroid in the next century. If it comes, that could be bad, but, you know, there's stuff we can do about it. Whereas, uh, Toby Ord estimates, estimates that the risk of extinction, human extinction through AI in this century is about one in six, and I think that's sort of in line with a lot of the estimates that many experts give. The other, the other big risks are basically nuclear war, which is still an issue, right? After, you know, 70 years of, uh, thermonuclear weapons being around. So nuclear war, uh, possibly genetically engineered bioweapons could be really bad. It would be like COVID on steroids that could wipe out a lot of people. But, uh, the other, the other one is AI. So those seem to be the big three, AI, nukes, and super germs.
