The Diary of a CEO

WARNING: ChatGPT Could Be The Start Of The End! Sam Harris

In this new episode Steven sits down with philosopher, neuroscientist, podcast host and author Sam Harris.

Timestamps:
00:00 Intro
02:02 6 years later, where do you stand on AI?
16:36 Is this not the most pressing problem?
33:16 Why I deleted twitter
45:43 Narrow AI
58:26 The meaning of AGI
01:02:00 In the age of AI how do we create purpose?
01:10:06 Who will AI replace?
01:14:41 Should we be doing universal basic income?
01:21:40 Would you stop AI if you could?
01:27:31 How do we change our minds to be happier?
01:34:28 Why not lying & telling the truth will make you happier
01:41:28 The last guest's question

Follow Sam:
Instagram: https://bit.ly/3DHwOHy
YouTube: https://bit.ly/3DE8RAy

You can purchase Sam's book, 'Waking Up', here: https://bit.ly/3Qp51D7

Sam has kindly given DOAC listeners a 30-day free trial of his app, Waking Up. Here is the link: https://bit.ly/3QxIrrZ

My new book! 'The 33 Laws Of Business & Life' pre-order link: https://smarturl.it/DOACbook

Join this channel to get access to perks: https://bit.ly/3Dpmgx5

Follow me:
Instagram: http://bit.ly/3nIkGAZ
Twitter: http://bit.ly/3ztHuHm
Linkedin: https://bit.ly/41Fl95Q
Telegram: http://bit.ly/3nJYxST

Sponsors:
Huel: https://g2ul0.app.link/G4RjcdKNKsb
Whoop: http://bit.ly/3MbapaY

Sam Harris (guest) · Steven Bartlett (host)
Aug 7, 2023 · 1h 50m · Watch on YouTube ↗

CHAPTERS

  1. 4:00 – 7:10

    Framing the AI Question: From TED Talk to Existential Risk

    Bartlett reintroduces Sam Harris via his earlier TED Talk on AI control and asks whether Harris is now optimistic about humanity’s survival. Harris distinguishes between near‑term harms from increasingly powerful narrow AI and longer‑term existential risks from artificial general intelligence, arguing that superhuman systems are both inevitable and intrinsically dangerous for humans as the less intelligent party.

  2. 7:10 – 21:30

    Why Superhuman Intelligence Is Inevitable—and Inherently Risky

    Harris lays out the minimal assumptions required to predict superhuman AI and explores why intelligence mismatches are uniquely hazardous. Drawing analogies to our relationship with dogs and to evolution’s ‘code’ for humans, he argues that building self‑modifying AI does not guarantee durable alignment with human goals.

  3. 21:30 – 42:00

    Alien Civilisation Thought Experiment and the Ethics–Intelligence Gamble

    Harris uses Stuart Russell’s analogy of receiving a message from a superior alien civilisation arriving in 50 years to illustrate how we under‑react to AGI’s approach. He critiques the belief that greater intelligence automatically entails greater ethics, arguing there are many possible superhuman minds unaligned with human wellbeing.

  4. 42:00 – 59:30

    We Skipped the Safety Phase: AI Already Plugged Into Everything

    Contrary to safety community expectations, advanced AI was not developed in isolation but deployed straight into the open internet and critical infrastructures. Harris and Bartlett discuss how commercial incentives overrode caution, and why the idea of ‘turning it off’ (e.g., shutting down the internet) is practically and economically implausible.

  5. 59:30 – 1:18:30

    Narrow AI, Deepfakes, and the Coming Epistemic Bankruptcy

    Harris shifts to narrow AI, arguing that current and near‑future systems pose an immediate threat to our ability to know what’s true. He paints vivid scenarios of AI‑generated scientific fraud and deepfake documentaries, warning that the internet may become effectively unusable for reliable information and that elections and public health decisions will be destabilised.

  6. 1:18:30 – 1:59:00

    Social Media as Chaos Engine: Harris’s Breakup with Twitter

    The conversation narrows to social media, especially Twitter/X, as a precursor of AI‑supercharged dysfunction. Both Harris and Bartlett describe feeds saturated with death, outrage, and polarising culture‑war content. Harris explains how Twitter distorted his priorities, damaged relationships, and fueled professional crises, and why deleting his account dramatically improved his life.

  7. 1:59:00 – 2:21:00

    AI, Work, and the Case for Universal Basic Income

    Harris explores how successful AI will upend labor markets by outperforming humans in many high‑skill domains. He argues that standard ‘technological disruption’ analogies fail here: once AI can handle most cognitive tasks, there may be no new human jobs to absorb the displaced workers, forcing a re‑think of economic and moral assumptions about work, status, and survival.

  8. 2:21:00 – 2:34:00

    If You Could Pause AI: The Existential Button Question

    Bartlett pushes Harris into a hypothetical: if a button could permanently halt AI development, would he press it? Harris distinguishes between an indefinite pause to solve alignment and a permanent freeze, ultimately saying his fear of existential risk is strong enough that he would pause AI, even at the cost of delayed benefits like cures for cancer.

  9. 2:34:00 – 2:48:00

    Rebuilding Trust, Institutions, and Shared Reality After COVID

Harris reflects on the COVID‑19 pandemic as a failed dress rehearsal for future crises. Instead of uniting societies, it amplified tribalism and mistrust of institutions like the CDC and FDA. He argues that rebuilding trustworthy institutions and a shared information environment is crucial, but will be harder in a world of AI‑driven information silos and podcast‑centric media ecosystems.

  10. 2:48:00 – 3:14:00

    Defining AGI and Why Humanities Might Matter More Than Ever

    They clarify what is meant by artificial general intelligence and discuss how recent systems like AlphaZero illustrate qualitatively new kinds of machine learning. Harris then pivots to what humans should pursue in an AI world, arguing that the humanities and human‑to‑human meaning‑making may become central as engineering and scientific tasks are automated.

  11. 3:14:00 – 3:46:00

    Meaning, Purpose, and Living with AI: Work, Leisure, and Happiness

    Bartlett raises Elon Musk’s discomfort about advising his children on purpose in an AI‑dominated future. Harris responds that humans can adapt to decoupling survival from work and that many fears assume we’re incapable of enjoying ‘vacation’ without economic pressure. He invokes meditation and altered states as evidence that people can find deep fulfilment without conventional labor.

  12. 3:46:00 – 4:26:00

    Radical Honesty: How Not Lying Rewires Relationships and Self‑Knowledge

    Shifting to ethics and personal growth, Bartlett asks how individuals can change their minds and improve their lives. Harris presents his commitment to near‑total honesty—eschewing even white lies—as the single highest‑leverage change he made, explaining how it alters social dynamics, reveals self‑deceptions, and protects against the kinds of scandals that ruined prominent figures.

  13. 4:26:00 – 4:44:00

    Death, Psychedelics, and a Secular Vision of a Good Ending

    In the closing segments, Harris answers a question about where and how he would like to die, using it to gesture toward a broader vision of wise dying. He suggests that psychedelics and contemplative practice could underpin future rites of passage akin to the ancient Eleusinian mysteries, and describes a peaceful death surrounded by loved ones, looking at the sky.

  14. 4:44:00

    From Religion to Secular Spirituality and the Waking Up App

    Bartlett credits Harris with helping him transition from Christianity to a secular yet meaningful worldview that includes spirituality and reduced fear of death. Harris briefly explains his Waking Up app as a better delivery system than books for meditation and contemplative insight, and both reflect on the importance of nuanced, honest dialogue in an increasingly polarised world.
