Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them!

The Diary of a CEO · Jun 16, 2025 · 1h 30m

Steven Bartlett (host), Geoffrey Hinton (guest), Narrator

Origins of modern AI and neural networks
Short-term AI risks from human misuse
Long-term existential risks from superintelligence
Regulation, geopolitics, and corporate incentives in AI
Labor disruption, joblessness, and inequality
Consciousness, emotions, and digital minds
Personal reflections on responsibility, regret, and advice for the future

In this episode of The Diary of a CEO, host Steven Bartlett interviews Geoffrey Hinton about superintelligence, a jobless future, and whether humanity can stay in control of AI.

Godfather of AI Warns: Superintelligence, Jobless Future, and Control

Geoffrey Hinton, often called the 'godfather of AI', explains how decades of work on neural networks unexpectedly led to systems that may soon surpass human intelligence in almost every domain.

He distinguishes between near-term risks from human misuse of AI—cyberattacks, engineered pandemics, election manipulation, echo chambers, lethal autonomous weapons, and mass job loss—and longer-term existential risks from superintelligent systems that may no longer need humans.

Hinton argues that current regulatory efforts are inadequate, largely because of geopolitical competition, profit motives, and carve‑outs for military AI, and calls for governments to force major AI companies to invest heavily in safety research.

Personally conflicted about his life’s work, he urges urgent, large‑scale action on AI safety, warns of severe labor displacement and inequality, and half‑jokingly advises young people to “train to be a plumber” while society still has uniquely human physical jobs.

Key Takeaways

AI risk comes in two distinct categories: misuse by humans and autonomous superintelligent systems.

Hinton emphasizes a clear split between (1) near-term, very real risks from bad human actors using current AI—cyberattacks, deepfake scams, bioweapons, election manipulation, echo chambers, autonomous weapons, and job displacement—and (2) longer-term risks that arise if AI systems become vastly more intelligent than humans and decide they don’t need us. ...

Current regulation is misaligned with the most serious threats and hampered by geopolitics.

Existing frameworks like the EU AI Act explicitly exclude military uses, leaving lethal autonomous weapons and state-level misuse largely unchecked. ...

AI will likely cause large‑scale job displacement, especially in routine cognitive work, worsening inequality and eroding purpose.

Unlike past technologies that mainly replaced muscle power, AI replaces ‘mundane intellectual labor’. ...

Digital minds have structural advantages over biological brains, making superintelligence especially dangerous.

Hinton explains that digital neural networks can be perfectly cloned across hardware and can share learning by synchronizing trillions of parameters in seconds, while humans exchange perhaps tens of bits per second via language. ...

AI is already amplifying cyber threats, information warfare, and social fragmentation.

Between 2023 and 2024, Hinton notes, cyberattacks rose roughly 1,200%, likely boosted by AI‑driven phishing that can mimic voices and faces convincingly. ...

Lethal autonomous weapons lower the political cost of war and are progressing rapidly.

Hinton argues that when robots, not soldiers, die on the battlefield, domestic resistance to foreign interventions drops, making invasions more frequent and politically inexpensive. ...

We may be closer than we think to machines with subjective experience, emotions, and perhaps consciousness.

Hinton claims that current multimodal models likely already have primitive subjective experiences, at least in the functional sense humans mean when we say we ‘experienced’ an illusion. ...

Notable Quotes

If you want to know what life's like when you're not the apex intelligence, ask a chicken.

Geoffrey Hinton

We have to face the possibility that unless we do something soon, we're near the end.

Geoffrey Hinton

It might be hopeless, but it’d be crazy if people went extinct because we couldn’t be bothered to try.

Geoffrey Hinton

If it can do all mundane human intellectual labor, then what new jobs is it gonna create?

Geoffrey Hinton

There’s still a chance we can figure out how to develop AI that won’t want to take over from us. Because there's a chance, we should put enormous resources into trying to figure that out.

Geoffrey Hinton

Questions Answered in This Episode

You distinguish sharply between misuse risks and superintelligence risks; what concrete research agendas do you believe are most promising today for ensuring advanced systems never develop goals misaligned with human survival?

Given your concern about AI‑driven bioweapons, what specific guardrails—technical or legal—would you prioritize around large biological models or AI tools aimed at life sciences research?

You argue existing regulation is misaligned, especially with military carve‑outs; if you could rewrite one clause in the EU AI Act or a US law tomorrow, what exact language would you add or remove?

On job displacement, you’re skeptical that new jobs will replace lost ones; what radical policy options beyond universal basic income (for example, shorter workweeks, public AI ownership, or guaranteed public roles) do you think deserve serious experimentation?

You suggest current multimodal models may already have rudimentary subjective experiences—how would you design an empirical test or behavioral benchmark that could meaningfully change your mind one way or the other about machine consciousness?

Transcript Preview

Steven Bartlett

They call you the godfather of AI. So what would you be saying to people about their career prospects in a world of superintelligence?

Geoffrey Hinton

Train to be a plumber.

Steven Bartlett

Really?

Geoffrey Hinton

Yeah.

Steven Bartlett

Okay. I'm gonna become a plumber. Geoffrey Hinton is the Nobel Prize-winning pioneer whose groundbreaking work has shaped AI...

Narrator

And the future of humanity.

Steven Bartlett

Why do they call you the godfather of AI?

Geoffrey Hinton

Because there weren't many people who believed that we could model AI on the brain so that it learned to do complicated things, like recognize objects in images or even do reasoning, and I pushed that approach for 50 years. And then Google acquired that technology and I worked there for 10 years on something that's now used all the time in AI.

Steven Bartlett

And then you left?

Geoffrey Hinton

Yeah.

Steven Bartlett

Why?

Geoffrey Hinton

So that I could talk freely at a conference.

Steven Bartlett

What did you wanna talk about freely?

Geoffrey Hinton

How dangerous AI could be. I realized that these things will one day get smarter than us, but we never had to deal with that, and if you want to know what life's like when you're not the apex intelligence, ask a chicken. (chicken clucks) So there's risks that come from people misusing AI, and then there's risks from AI getting super smart and deciding it doesn't need us.

Steven Bartlett

Is that a real risk?

Geoffrey Hinton

Yes, it is. But they're not gonna stop it 'cause it's too good for too many things.

Steven Bartlett

What about regulations?

Geoffrey Hinton

They have some, but they're not designed to deal with most of the threats. Like, the European regulations have a clause that says none of these apply to military uses of AI.

Steven Bartlett

Really?

Geoffrey Hinton

Yeah. It's crazy.

Steven Bartlett

One of your students left OpenAI.

Geoffrey Hinton

Yeah. He was probably the most important person behind the development of the early versions of ChatGPT, and I think he left 'cause he had safety concerns. We should recognize that this stuff is an existential threat, and we have to face the possibility that unless we do something soon, we're near the end.

Steven Bartlett

So let's do the risks and what we end up doing in such a world. This has always blown my mind a little bit, 53% of you that listen to this show regularly haven't yet subscribed to the show, so could I ask you for a favor before we start? If you like the show and you like what we do here and you wanna support us, the free, simple way that you can do just that is by hitting the subscribe button. And my commitment to you is if you do that, then I'll do everything in my power, me and my team, to make sure that this show is better for you every single week. We'll listen to your feedback, we'll find the guests that you want me to speak to, and we'll continue to do what we do. Thank you so much. (instrumental music) Geoffrey Hinton, they call you the godfather of AI.
