The Diary of a CEO

WARNING: ChatGPT Could Be The Start Of The End! Sam Harris

In this new episode Steven sits down with philosopher, neuroscientist, podcast host and author Sam Harris.

Timestamps:
00:00 Intro
02:02 6 years later, where do you stand on AI?
16:36 Is this not the most pressing problem?
33:16 Why I deleted Twitter
45:43 Narrow AI
58:26 The meaning of AGI
01:02:00 In the age of AI, how do we create purpose?
01:10:06 Who will AI replace?
01:14:41 Should we be doing universal basic income?
01:21:40 Would you stop AI if you could?
01:27:31 How do we change our minds to be happier?
01:34:28 Why not lying and telling the truth will make you happier
01:41:28 The last guest's question

Follow Sam:
Instagram: https://bit.ly/3DHwOHy
YouTube: https://bit.ly/3DE8RAy

You can purchase Sam's book, 'Waking Up', here: https://bit.ly/3Qp51D7
Sam has kindly given DOAC listeners a 30-day free trial of his app, Waking Up. Here is the link: https://bit.ly/3QxIrrZ

My new book! 'The 33 Laws Of Business & Life' pre-order link: https://smarturl.it/DOACbook
Join this channel to get access to perks: https://bit.ly/3Dpmgx5

Follow me:
Instagram: http://bit.ly/3nIkGAZ
Twitter: http://bit.ly/3ztHuHm
LinkedIn: https://bit.ly/41Fl95Q
Telegram: http://bit.ly/3nJYxST

Sponsors:
Huel: https://g2ul0.app.link/G4RjcdKNKsb
Whoop: http://bit.ly/3MbapaY

Sam Harris (guest) · Steven Bartlett (host)
Aug 7, 2023 · 1h 50m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Sam Harris Warns: Superhuman AI, Broken Truth, And Democracy’s Fragility Ahead

  1. Sam Harris argues that superhuman artificial intelligence is inevitable, substrate‑independent, and inherently dangerous for the "dumber" party in the relationship, making AI alignment the central existential risk of our time. He believes we have already skipped past the critical decision point about connecting powerful AI to the internet and embedding it into vital systems, driven by overwhelming economic incentives.
  2. In the nearer term, Harris is at least as worried about narrow AI: large language models and generative tools that can mass‑produce convincing misinformation, deepfakes, and fake scientific literature, potentially forcing us into "epistemological bankruptcy" where ordinary people can no longer tell what's real. This, layered onto social media dynamics, threatens elections, institutional trust, and the basic possibility of shared reality.
  3. He and host Steven Bartlett also examine social media’s corrosive impact, personal honesty as a lever for well‑being, the need for new economic models such as UBI in an AI‑driven labor disruption, and why the humanities and consciously cultivated spirituality may matter even more in an AI‑saturated future. Harris ultimately says he would pause AI development to solve alignment, yet acknowledges the immense upside if we get it right.

IDEAS WORTH REMEMBERING

5 ideas

Superhuman AGI is both technically plausible and structurally dangerous for humans.

Harris argues you only need two modest assumptions to expect superhuman AGI: intelligence is substrate‑independent (it need not be ‘made of meat’), and we will keep improving AI unless a catastrophe stops us. Because intelligence differentials create fundamental opacity—where the dumber party cannot understand the smarter party’s motives or plans—superhuman AI is inherently risky. Our experience as the most intelligent species relative to animals is his core analogy: dogs benefit from us but are blind to our real goals and would not understand if we exterminated them for a higher‑order reason (e.g., stopping a pandemic).

We’ve already lost the “safe lab” moment; powerful AI is in the wild by default.

AI safety thinkers long assumed there would be a cautious, air‑gapped phase where a powerful AGI is built in isolation, then humanity decides whether it should touch critical systems. Instead, models like GPT‑4 arrived directly as internet‑connected, API‑exposed tools embedded into businesses, hospitals, aviation, finance, and more. The alignment conversation is happening after deployment rather than before, and Harris finds it "genuinely surprising" and alarming that there was no global pause or coordinated decision about plugging AI into everything.

Near‑term narrow AI threatens epistemic collapse through industrial‑scale fakery.

Harris predicts that within a few years, most online 'information'—articles, papers, videos—could be AI‑generated and often intentionally deceptive. He imagines a single teenager generating thousands of fake cancer‑vaccine studies, or highly polished denialist documentaries (e.g., claiming the Holocaust never happened) complete with archival‑looking footage and realistic voices. When deepfake video, audio, and text converge, the cost of producing persuasive misinformation approaches zero while human fact‑checking cannot scale, pushing society toward "epistemological bankruptcy," where we must assume sensational material is fake until proven otherwise.

Democracy and institutional trust are at acute risk in the next election cycles.

Harris worries that the United States may be unable to run a 2024 presidential election that a majority accepts as valid, even without AI; AI‑driven misinformation and deepfakes only compound the challenge. Social media already silos citizens into incompatible information universes where debunking doesn't propagate. Add AI‑boosted conspiracy ecosystems, synthetic science, and manipulated video of political leaders, and coherent democratic decision‑making and institutional legitimacy could break down, opening the door to bad actors and accidental escalations (e.g., misinterpreted 'videos' in nuclear command‑and‑control contexts).

Economic and ethical norms must change when AI can do most cognitive work.

Unlike past technologies that displaced some jobs but created better ones, Harris believes advanced AI will, in the limit, simply cancel the need for human labor across ever more domains, starting with high‑status cognitive jobs (programmers, radiologists, executives) before trades. In a success scenario where AI produces enormous abundance and solves engineering problems, tying basic survival to wage labor becomes both unnecessary and unjust. He sees universal basic income or similar mechanisms as essential to decouple survival from employment and to prevent extreme inequality and eventual revolt.

WORDS WORTH SAVING

5 quotes

There is something inherently dangerous for the dumber party in that relationship.

Sam Harris

We thought we would have that moment in the lab where we’d decide whether to let it out of the box. We’re way past that.

Sam Harris

Most of what’s online that purports to be information could soon be fake.

Sam Harris

I worry we’re just gonna have to declare bankruptcy with respect to the internet.

Sam Harris

Almost all of the truly bad things that have happened to me in the last decade were a result of my engagement with Twitter.

Sam Harris

Existential and near‑term risks of artificial general intelligence (AGI)
Narrow AI, misinformation, and the coming collapse of online trust
Loss of the 'air‑gapped lab' moment and unchecked deployment incentives
AI, labor disruption, and the case for universal basic income
Social media's psychological and societal toxicity (especially Twitter/X)
Honesty, lying, and how truth‑telling reshapes relationships and happiness
Secular spirituality, psychedelics, death, and meaning in a post‑religious world

High quality AI-generated summary created from speaker-labeled transcript.
