
Yuval Noah Harari: They Are Lying About AI! The Trump Kamala Election Will Tear The Country Apart!
Yuval Noah Harari (guest), Steven Bartlett (host), Narrator
In this episode of The Diary of a CEO, host Steven Bartlett speaks with historian Yuval Noah Harari about AI, algorithms, and the future of democracy.
Yuval Harari Warns: Algorithms Are Quietly Dismantling Democracy Worldwide
Yuval Noah Harari argues that AI should be seen as "alien intelligence" because it makes decisions and generates ideas in ways fundamentally different from human minds, and is already seeping into every bureaucratic and political system. He traces AI within a 5,000‑year history of information technologies—from clay tablets and writing to social media—to show that information systems literally create and reshape democracy, ownership, trust, and even what we consider reality.
Harari’s core concern is not sentient killer robots but misaligned algorithms whose profit-driven goals exploit human psychological weaknesses—fear, hatred, and disgust—undermining democratic conversation, trust in institutions, and social cohesion. Social media recommendation engines and future AI systems are, in his view, the new "kingmakers" deciding what we see, think about, and fear, while remaining largely unregulated and unaccountable.
He warns that coming elections, especially in the US, could be existential for democracy if leaders use institutional control and algorithmic power to neutralize self‑correcting mechanisms and entrench themselves, as seen in places like Venezuela. The only viable response, he argues, is collective human cooperation to regulate algorithms, rebuild trustworthy institutions to verify information, and resist being divided into hostile tribes by the very systems we created.
Key Takeaways
Treat AI as "alien intelligence," not just a tool, because it independently generates strategies and ideas humans never conceived.
Harari argues that AI is the first technology in history that can both make decisions and create new ideas on its own. ...
The real danger is misaligned algorithmic goals, not rogue AI rebellion.
Using Nick Bostrom’s paperclip thought experiment and the very real case of social media, Harari explains the "alignment problem": systems do exactly what we ask, but in ways that undermine our deeper interests. ...
Algorithms, not individual posters, should be the primary targets of regulation and accountability.
Harari draws a clear line: humans have free speech; bots and algorithms do not. ...
In an age of deepfakes, trust must shift from technology to institutions that verify content.
Once video and audio can be faked as easily as text, Harari says we must stop trusting the medium itself and instead trust institutions that vouch for authenticity—just as we already do with printed words. ...
Most information is now "junk" engineered for emotional impact, so individuals need an information diet, not endless exposure.
Harari compares information to food: scarcity once made "more is better" a healthy heuristic; in an age of hyper-abundance and industrial processing, most of what we consume is harmful junk. ...
AI will transform work by automating information-only tasks, but jobs combining social and motor skills will be more resilient.
Roles that are "information in, information out" (coding, routine diagnostics, many forms of white‑collar analysis) are easiest to automate. ...
Human survival depends on resisting division and rebuilding cooperation before algorithms fully mediate our politics and identities.
Harari frames the central political risk as "divide and rule" at planetary scale: recommendation systems deepen tribal animosities, turning political rivals into existential enemies and making elections feel like wars. ...
Notable Quotes
“AI is becoming less and less artificial and more and more alien.”
— Yuval Noah Harari
“Democracy is a conversation. Once you believe people who don't think like you are your enemies, democracy collapses.”
— Yuval Noah Harari
“Only humans have free speech. Bots don't have free speech.”
— Yuval Noah Harari
“The humans are still more powerful than the AIs. The problem is that we are divided against each other, and the algorithms are using our weaknesses against us.”
— Yuval Noah Harari
“If something ultimately destroys us, it will be our own delusions, not the AIs.”
— Yuval Noah Harari
Questions Answered in This Episode
You argue that only institutions can restore trust in an age of deepfakes—concretely, what new types of institutions (beyond legacy media) do you think we urgently need to build in the next five years?
If algorithms are already acting as unelected "kingmakers," what specific regulatory model would you support to hold platform owners personally liable for recommendation-driven violence or democratic breakdown, without crushing smaller innovators?
You liken AI’s current stage to amoebas in evolution—given how fast digital evolution moves, what would be the earliest, most realistic signs that we’re entering the "T. rex AI" phase, and how should policymakers respond at that moment?
In your view, where is the ethical line between beneficial artificial intimacy (e.g., mental health support bots) and dangerous large-scale emotional manipulation by governments or corporations, and who should have the power to police that line?
You suggest that if something destroys us it will be our own delusions, not AI—how should education systems be redesigned, practically, to train citizens to recognize and resist divisive narratives weaponized by algorithms?
Transcript Preview
The humans are still more powerful than the AIs. The problem is that we are divided against each other, and the algorithms are using our weaknesses against us. And this is very dangerous, because once you believe that people who don't think like you are your enemies, democracy collapses, and then the election becomes like a war. So if something ultimately destroys us, it will be our own delusions, not the AIs.
We have a big election in the United States.
Yes, and democracy in the States is quite fragile. But the big problem is, what if- (dramatic music)
Surely that will never happen.
(dramatic music) Yuval Noah Harari, the author of some of the most influential non-fiction books in the world today...
And is now at the forefront of exploring the world-shaping power of AI, and how it is beyond anything humanity has ever faced before. ... biggest social networks in the world, they're effectively gonna go for free speech. What is your take on that?
The issue is not the humans, the issue is the algorithms. So let me unpack this. In the 2010s, there was a big battle between algorithms for human attention. Now, the algorithms discovered, when you look at history, the easiest way to grab human attention is to press the fear button, the hate button, the greed button. The problem is that there was a misalignment between the goal that was defined to the algorithm and the interests of human society. But this is how it becomes really disconcerting, because if so much damage was done by giving the wrong goal to a primitive social media algorithm, what would be the results with AI in 20 or 30 years?
So what's the solution?
We've been in this situation many times before in history, and the answer is always the same, which is... (dramatic music)
Are you optimistic?
I try to be a realist.
(dramatic music) This is a sentence I never thought I'd say in my life. Um, we've just hit seven million subscribers on YouTube, and I wanna say a huge thank you to all of you that show up here every Monday and Thursday to watch our conversations. Um, from the bottom of my heart, but also on behalf of my team, who you don't always get to meet, there's almost 50 people now behind the Diary of a CEO that worked to put this together. So from all of us, thank you so much. Um, we did a raffle last month, and we gave away prizes for people that subscribed to the show up until seven million subscribers. And you guys loved that raffle so much that we're gonna continue it. So every single month, we're giving away money can't buy prizes, including meetings with me, invites to our events, and £1,000 gift vouchers to anyone that subscribes to the Diary of a CEO. There's now more than seven million of you. So if you make the decision to subscribe today, you can be one of those lucky people. Thank you from the bottom of my heart. Let's get to the conversation. 10 years ago, you made a video that was titled Why Humans Run the World. It's a very well-known TED Talk that you did. After reading your new book, Nexus, I wanted to ask you a slightly modified question-