WARNING: ChatGPT Could Be The Start Of The End! Sam Harris

The Diary of a CEO · Aug 7, 2023 · 1h 50m

Sam Harris (guest), Steven Bartlett (host), Narrator

Topics covered:

Existential and near‑term risks of artificial general intelligence (AGI)
Narrow AI, misinformation, and the coming collapse of online trust
Loss of the ‘air‑gapped lab’ moment and unchecked deployment incentives
AI, labor disruption, and the case for universal basic income
Social media’s psychological and societal toxicity (especially Twitter/X)
Honesty, lying, and how truth‑telling reshapes relationships and happiness
Secular spirituality, psychedelics, death, and meaning in a post‑religious world

In this episode of The Diary of a CEO, host Steven Bartlett speaks with neuroscientist and philosopher Sam Harris about the existential and near‑term risks of artificial intelligence, the fragility of shared truth, and what both mean for democracy.

Sam Harris Warns: Superhuman AI, Broken Truth, And Democracy’s Fragility Ahead

Sam Harris argues that superhuman artificial intelligence is inevitable, substrate‑independent, and inherently dangerous for the "dumber" party in the relationship, making AI alignment the central existential risk of our time. He believes we have already skipped the critical decision point about connecting powerful AI to the internet and embedding it into vital systems, driven by overwhelming economic incentives.

In the nearer term, Harris is at least as worried about narrow AI: large language models and generative tools that can mass‑produce convincing misinformation, deepfakes, and fake scientific literature, potentially forcing us into "epistemological bankruptcy" where ordinary people can no longer tell what's real. This, layered onto social media dynamics, threatens elections, institutional trust, and the basic possibility of shared reality.

He and host Steven Bartlett also examine social media’s corrosive impact, personal honesty as a lever for well‑being, the need for new economic models such as UBI in an AI‑driven labor disruption, and why the humanities and consciously cultivated spirituality may matter even more in an AI‑saturated future. Harris ultimately says he would pause AI development to solve alignment, yet acknowledges the immense upside if we get it right.

Key Takeaways

Superhuman AGI is both technically plausible and structurally dangerous for humans.

Harris argues you only need two modest assumptions to expect superhuman AGI: intelligence is substrate‑independent (it need not be ‘made of meat’), and we will keep improving AI unless a catastrophe stops us. ...

We’ve already lost the “safe lab” moment; powerful AI is in the wild by default.

AI safety thinkers long assumed there would be a cautious, air‑gapped phase where a powerful AGI is built in isolation, then humanity decides whether it should touch critical systems. ...

Near‑term narrow AI threatens epistemic collapse through industrial‑scale fakery.

Harris predicts that within a few years, most online 'information'—articles, papers, videos—could be AI‑generated and often intentionally deceptive. ...

Democracy and institutional trust are at acute risk in the next election cycles.

Harris worries that the United States may be unable to run a 2024 presidential election that a majority accepts as valid, even without AI; with AI‑driven misinformation and deepfakes, the challenge grows. ...

Economic and ethical norms must change when AI can do most cognitive work.

Unlike past technologies that displaced some jobs but created better ones, Harris believes advanced AI will, in the limit, simply cancel the need for human labor across ever more domains, starting with high‑status cognitive jobs (programmers, radiologists, executives) before trades. ...

Social media, especially Twitter/X, is a powerful chaos amplifier—even for experts.

Harris describes almost every major negative professional event in his last decade as traceable to Twitter: reputational crises, misinterpretations, and constant reactivity. ...

Radical honesty—refusing to lie, even with ‘white lies’—can transform wellbeing and trust.

Harris traces a major improvement in his own life to a decision, influenced by a Stanford seminar, to almost never lie. ...

Notable Quotes

There is something inherently dangerous for the dumber party in that relationship.

Sam Harris

We thought we would have that moment in the lab where we’d decide whether to let it out of the box. We’re way past that.

Sam Harris

Most of what’s online that purports to be information could soon be fake.

Sam Harris

I worry we’re just gonna have to declare bankruptcy with respect to the internet.

Sam Harris

Almost all of the truly bad things that have happened to me in the last decade were a result of my engagement with Twitter.

Sam Harris

Questions Answered in This Episode

You argue that being the ‘dumber party’ in a relationship is inherently dangerous—what concrete alignment mechanisms, beyond current technical proposals, would actually give you enough confidence to reverse your instinct to pause AI?

Sam Harris argues that superhuman artificial intelligence is inevitable, substrate‑independent, and inherently dangerous for the "dumber" party in the relationship, making AI alignment the central existential risk of our time. ...

In your near‑term misinformation scenario where most online content is fake, what specific combination of technical tools (e.g., cryptographic watermarking, browser‑level verification) and institutional reforms do you think could realistically preserve usable truth for ordinary citizens?

In the nearer term, Harris is at least as worried about narrow AI: large language models and generative tools that can mass‑produce convincing misinformation, deepfakes, and fake scientific literature, potentially forcing us into "epistemological bankruptcy" where ordinary people can no longer tell what's real. ...

You’ve said we might need universal basic income in a world of AI‑driven abundance—how would you design such a system to avoid political backlash, perverse incentives, and capture by the very elites who benefit most from AI?

He and host Steven Bartlett also examine social media’s corrosive impact, personal honesty as a lever for well‑being, the need for new economic models such as UBI in an AI‑driven labor disruption, and why the humanities and consciously cultivated spirituality may matter even more in an AI‑saturated future. ...

Your own experience with Twitter led you to walk away entirely; do you think there is any viable design for a large‑scale social platform that could harness human attention for good, or is the only rational individual strategy to exit these systems?

You treat near‑total honesty as a kind of life hack for wellbeing and ethics—how would you advise someone to begin practicing this in a world where social and professional norms often tacitly require politeness, omission, or strategic ambiguity?

Transcript Preview

Sam Harris

Artificial intelligence is superhuman. It is smarter than you are, and there's something inherently dangerous for the dumber party in that relationship. You just can't put the genie back in the bottle. Sam Harris. Neuroscientist. Philosopher. Author. Podcaster.

Steven Bartlett

He goes into intellectual territory where few others dare tread. Six years ago, you did a TED Talk.

Sam Harris

The gains we make in artificial intelligence could ultimately destroy us.

Steven Bartlett

If your objective is to make humanity happy and there was a button placed in front of you and it would end artificial intelligence, what would you do?

Sam Harris

Well, I would definitely pause it. The idea that we've lost the moment to decide whether to hook our most powerful AI to everything is just oh, s-... It's already connected to the internet, got millions of people using it, and the idea that these things will stay aligned with us because we have built them, yet we gave them the capacity to rewrite their code, there's just no reason to believe that. And I worry about the near term problem of what humans do with increasingly powerful AI, how it amplifies misinformation. Most of what's online could soon be faked. Can we hold a presidential election 18 months from now that we recognize as valid, right? Like is it safe? And it just gets scarier and scarier. I worry we're just gonna have to declare bankruptcy with respect to the internet.

Steven Bartlett

If your intuition is correct, are you optimistic about our chances of survival? Before this episode starts, I have a small favor to ask from you. Two months ago, 74% of people that watch this channel didn't subscribe. We're now down to 69%. My goal is 50%. So if you've ever liked any of the videos we've posted, if you like this channel, can you do me a quick favor and hit the subscribe button? It helps this channel more than you know. And the bigger the channel gets, as you've seen, the bigger the guests get. Thank you and enjoy this episode. Sam, six years ago, you did a TED Talk. Um, I watched that TED Talk a few times over the last week, and the TED Talk was called Can We Build AI Without Losing Control Over It.

Sam Harris

Mm-hmm.

Steven Bartlett

In that TED Talk, you really discussed the idea whether, um, AI, when it gets to a certain point of sentience and intelligence will, will wreak havoc on humanity.

Sam Harris

Mm-hmm.

Steven Bartlett

Six years later, where do you stand on, on it today? Do you think, are you optimistic about our chances of survive, survival?

Sam Harris

Yeah, I mean, uh, I can't say I'm optimistic. I'm, I am worried about t- two species of problem here that are r- r- related. I mean, there's, there's sort of the near term problem of just what humans do with increasingly powerful AI and, um, how it amplifies the, the problem of misinformation and disinformation and make, and just makes it harder and harder to make sense of reality together. Um, and then there's just the, the longer term concern about, well, you know, what's called alignment with, with artificial general intelligence, where we build AI that is, is truly general and, you know, by definition superhuman in its competence and power. And then the question is have we built it in such a way that is aligned in a, in a durable way with, with our interests? And, um, I mean, there's some people who just don't see this problem.
