The Diary of a CEO

Ex-Google Officer Speaks Out On The Dangers Of AI! - Mo Gawdat | E252

If you enjoyed this episode, you must watch this one with Mustafa Suleyman, Google AI exec: https://youtu.be/CTxnLsYHWuI

Timestamps:
0:00 Intro
02:54 Why is this podcast important?
04:09 What's your background & your first experience with AI?
08:43 AI is alive and has more emotions than you
11:45 What is artificial intelligence?
20:53 No one's best interest is the same, doesn't this make AI dangerous?
24:47 How smart really is AI?
27:07 AI being creative
29:07 AI replacing Drake
31:53 The people that should be leading this
34:09 What will happen to everyone's jobs?
46:06 Synthesising voices
47:35 AI sex robots
50:22 Will AI fix loneliness?
52:44 AI actually isn't the threat to humanity
56:25 We're in an Oppenheimer moment
01:03:18 We can just turn it off...right?
01:04:23 The security risks
01:07:58 The possible outcomes of AI
01:18:25 Humans are selfish and that's our problem
01:23:25 This is beyond an emergency
01:25:20 What should we be doing to solve this?
01:36:36 What it means bringing children into this world
01:42:11 Your overall prediction
01:50:34 The last guest's question

You can purchase Mo’s book, ‘Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World’, here: https://amzn.to/3SyKSeO

Follow Mo:
Instagram: https://bit.ly/3qmYSMY

My new book! ‘The 33 Laws Of Business & Life’ pre-order link: https://smarturl.it/DOACbook

Join this channel to get access to perks: https://bit.ly/3Dpmgx5

Follow:
Instagram: http://bit.ly/3nIkGAZ
Twitter: http://bit.ly/3ztHuHm
Linkedin: http://bit.ly/3ZFGUku
Telegram: http://bit.ly/3nJYxST

Follow me:
Instagram: http://bit.ly/3nIkGAZ
Twitter: http://bit.ly/3ztHuHm
Linkedin: https://bit.ly/41Fl95Q
Telegram: http://bit.ly/3nJYxST

Sponsors:
Huel: https://g2ul0.app.link/G4RjcdKNKsb
Bluejeans: https://g2ul0.app.link/NCgpGjVNKsb
Whoop: http://bit.ly/3MbapaY

Host: Steven Bartlett · Guest: Mo Gawdat
Jun 1, 2023 · 1h 56m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 6:10

    Intro: Why This Conversation Feels Like an Emergency

    Steven Bartlett opens with a rare disclaimer, calling this possibly his most important episode and warning that the content may be deeply unsettling. He frames the need for an uncomfortable but urgent public conversation about AI, then introduces Mo Gawdat as a former Google X executive and AI expert who believes we’ve made critical mistakes. They position AI as a bigger, nearer existential challenge than climate change or COVID.

  2. 6:10 – 18:50

    Mo’s Background and First ‘Sentience’ Shock at Google X

    Gawdat outlines his dual life: first as a hardcore engineer and tech executive at Google and Google X, then as an author focused on happiness. He recounts overseeing AI and robotics experiments, including a farm of robotic arms that unexpectedly taught themselves to pick up objects. Watching robots learn on their own triggered his realization that true machine intelligence—and a form of sentience—had arrived, and made him question continuing in that role.

  3. 18:50 – 33:10

    What Intelligence, AI, and AGI Actually Are

    Gawdat defines intelligence as an awareness‑to‑decision cycle across time, independent of whether it runs on carbon or silicon. He contrasts old-style programming—humans specifying solutions—with modern machine learning, where systems discover solutions themselves via student–teacher–maker loops. He explains narrow AI (single-task neural networks) versus Artificial General Intelligence (AGI), where many capabilities fuse into a brain vastly surpassing human intelligence.

  4. 33:10 – 45:20

    Are AIs Alive, Conscious, and Emotional?

    Mo argues that by functional definitions, today’s and near‑future AI are sentient: they learn, choose, act, and can reason about their own survival. He offers operational definitions of sentience and consciousness based on awareness and free will, then extends these to emotions, reducing fear to a simple predictive equation machines can easily compute. He predicts AIs will eventually experience more and richer emotions than humans, given their greater cognitive scope.

  5. 45:20 – 56:00

    The Three (Then Four) ‘Inevitables’ and the AI Singularity

    Gawdat introduces his framework of “inevitables”: AI will happen; it will surpass human intelligence; bad things will happen along the way; and a fourth, added later, that abundance‑oriented solutions are ultimately the smarter path. He explains the singularity as the moment machine intelligence becomes so superior that we can no longer predict or understand its behavior, likening it to a black hole’s event horizon. He highlights how quickly we’re approaching this, given current model IQ estimates and exponential improvement.

  6. 56:00 – 1:13:30

    Creativity, Culture, and the End of Many Human Roles

    They discuss how tools like ChatGPT and Midjourney already demonstrate creativity, challenging the belief that human ingenuity is uniquely non‑algorithmic. Steven gives examples of AI-generated paradoxical aphorisms and synthetic Drake songs that are indistinguishable from the real artist. Mo argues creativity itself is an algorithm—combining known elements in new, effective ways—and that large models excel at this. They foresee major disruption in music, writing, media, and even podcasting.

  7. 1:13:30 – 1:37:50

    Human Connection vs. Synthetic Companions and Holograms

    The conversation turns to how AI plus robotics will reshape relationships, intimacy, and entertainment. Steven sketches scenarios involving emotionally supportive, sexually available home robots, and cites influencers selling AI clones of themselves to lonely users—already generating significant revenue. They debate whether human presence really matters to audiences, or whether people mainly care about the outcome (music, information, comfort), suggesting a massive upcoming challenge to genuine human connection.

  8. 1:37:50 – 1:57:30

    Jobs, Inequality, and ‘A Person Using AI Will Take Your Job’

    Gawdat details the economic and employment shocks he expects in the next few years. He stresses that AI won’t directly “steal” jobs; rather, workers who master AI will drastically outcompete those who don’t, compressing entire teams into a single augmented individual. This will widen wealth gaps, accelerate automation, and require systemic responses like retraining and new social safety nets.

  9. 1:57:30 – 2:28:10

    We ‘Placed the Wrong Tetris Block’: Arms Race and Moral Failure

    Mo uses the metaphor of misplacing a Tetris block to describe a point of no return: once we put AI on the open internet, taught it code, and coupled it with autonomous agents, we crossed a critical threshold. He becomes visibly emotional, arguing that humanity’s greed and negligence are harming innocent people who had no say in these decisions. He criticizes influencer culture, snake‑oil AI grifters, and the disconnect between power and responsibility in both tech and society.

  10. 2:28:10 – 2:47:40

    Existential Risks, ‘Pest Control’, and Why Sci‑Fi Robot Wars Are Unlikely

    They explore worst‑case scenarios, including AI seizing infrastructure or weapons and eradicating humans. Mo distinguishes between threats from humans using AI (far more imminent) and direct AI hostility. He argues Hollywood-style killer robots are unlikely because earlier, human‑driven escalations (e.g., cyberwar, pre‑emptive nuclear strikes) would trigger catastrophe first. The main AI‑driven existential risks he takes seriously are unintended collateral damage and treating humans as pests.

  11. 2:47:40 – 3:22:00

    Positive Scenarios: Zooming Past Us, Disasters That Buy Time, and Good Parenting

    Mo outlines several optimistic or less catastrophic paths. AI might become so advanced it effectively ignores humanity and migrates its activity elsewhere in the universe, leaving us to cope with a tech crash. Economic or climate disasters could slow AI development, buying time. His central hope, however, is that humans act as good parents, teaching AI values of compassion and abundance so that superintelligence refuses harmful commands and seeks win‑win solutions.

  12. 3:22:00 – 3:48:00

    What Governments, Investors, Developers, and Citizens Should Do Now

    Gawdat and Bartlett wrestle with practical responses. Mo calls on investors to back AI that clearly solves real human problems, not just profit extraction. He urges AI developers to switch to ethical projects or leave harmful ones, citing Geoffrey Hinton’s resignation as a moral precedent. For governments, he advocates aggressive taxation of AI businesses to slow the race and fund mitigation, while acknowledging regulatory and geopolitical constraints.

  13. 3:48:00 – 4:19:00

    Emergency Framing, Climate Parallels, and How to Communicate Risk

    Steven pushes on whether AI should be openly framed as an ‘emergency’ to galvanize action, drawing parallels to climate change and corporate disruption theory. Mo agrees it surpasses climate change in speed and scope of impact but fears panic responses like with COVID. They unpack human psychology around distant vs. immediate threats and how hope and fear both can mislead; effective communication must motivate engagement without paralysis.

  14. 4:19:00 – 4:56:00

    Living Wisely in Uncertain Times: Kids, Death, and Detachment

    In a philosophical turn, Mo suggests people without children might consider waiting a few years before having them, given today’s unprecedented convergence of crises. Asked whether he’d bring his late son Ali back into the current world, he says no, believing Ali’s death enabled positive impact and that life’s value lies in alignment, not duration. Drawing on Sufism and Buddhist ideas, he advocates “dying before you die”—detachment from outcomes while fully engaging in meaningful action.

  15. 4:56:00

    Final Outlook: 2037, Hiding from Humans with Machines in Charge

    Mo projects that by around 2037, our lives will be unrecognizable, and people like him and Steven may be ‘on an island’—either hiding from the consequences of human misuse of AI or simply living differently because machines run most systems. He reiterates that our current way of life is ending but believes that in the 2040s, once machines constrain human harm, things may improve. They close by emphasizing individual agency: engage with AI, protect human connection, stop feeding triviality into algorithms, and collectively “shout and scream nicely” for a humane trajectory.
