The Diary of a CEO

Yuval Noah Harari: Algorithms Are Quietly Killing Democracy

Harari argues AI is an alien intelligence reshaping democracy itself: profit-driven algorithms exploit fear and disgust, hollowing out public trust.

Yuval Noah Harari (guest) · Steven Bartlett (host)
Sep 5, 2024 · 1h 54m

EVERY SPOKEN WORD

  1. 0:00–2:31

    Intro

    1. YH

      The humans are still more powerful than the AIs. The problem is that we are divided against each other, and the algorithms are using our weaknesses against us. And this is very dangerous, because once you believe that people who don't think like you are your enemies, democracy collapses, and then the election becomes like a war. So if something ultimately destroys us, it will be our own delusions, not the AIs.

    2. SB

      We have a big election in the United States.

    3. YH

      Yes, and democracy in the States is quite fragile. But the big problem is, what if- (dramatic music)

    4. SB

      Surely that will never happen.

    5. YH

      (dramatic music) Yuval Noah Harari, the author of some of the most influential non-fiction books in the world today...

    6. SB

      And is now at the forefront of exploring the world-shaping power of AI. And how it is beyond anything humanity has ever faced before. ... biggest social networks in the world, they're effectively gonna go for free speech. What is your take on that?

    7. YH

      The issue is not the humans, the issue is the algorithms. So let me unpack this. In the 2010s, there was a big battle between algorithms for human attention. Now, the algorithms discovered, when you look at history, the easiest way to grab human attention is to press the fear button, the hate button, the greed button. The problem is that there was a misalignment between the goal that was defined to the algorithm and the interests of human society. But this is how it becomes really disconcerting, because if so much damage was done by giving the wrong goal to a primitive social media algorithm, what would be the results with AI in 20 or 30 years?

    8. SB

      So what's the solution?

    9. YH

      We've been in this situation many times before in history, and the answer is always the same, which is... (dramatic music)

    10. SB

      Are you optimistic?

    11. YH

      I try to be a realist.

    12. SB

      (dramatic music) This is a sentence I never thought I'd say in my life. Um, we've just hit seven million subscribers on YouTube, and I wanna say a huge thank you to all of you that show up here every Monday and Thursday to watch our conversations. Um, from the bottom of my heart, but also on behalf of my team, who you don't always get to meet, there's almost 50 people now behind the Diary of a CEO that worked to put this together. So from all of us, thank you so much. Um, we did a raffle last month, and we gave away prizes for people that subscribed to the show up until seven million subscribers. And you guys loved that raffle so much that we're gonna continue it. So every single month, we're giving away money can't buy prizes, including meetings with me, invites to our events, and £1,000 gift vouchers to anyone that subscribes to the Diary of a CEO. There's now more than seven million of you. So if you make the decision to subscribe today, you can be one of those lucky people. Thank you from the bottom of my heart. Let's get to the conversation.

  2. 2:31–6:48

    Will Humans Continue To Rule The World?

    1. SB

      10 years ago, you made a video that was titled Why Humans Run the World. It's a very well-known TED Talk that you did. After reading your new book, Nexus, I wanted to ask you a slightly modified question-

    2. YH

      Mm-hmm.

    3. SB

      ... which is, do you still believe that 10 years from now-

    4. YH

      Oh.

    5. SB

      ... humans will fundamentally be running the world?

    6. YH

      (inhales deeply) I'm not sure. It depends on the decisions we all take in the coming years, but there is a chance that the answer is no, that in 10 years, um, algorithms and AIs will be running the world. I'm not having in mind some kind of Hollywoodian science fiction scenario of one big computer kind of conquering the world. It's more like a bureaucracy of AIs, that we will have millions of AI bureaucrats everywhere. Um, you know, in the banks, in the government, in businesses, in universities, making more and more decisions about our lives that are everyday decisions, whether to give us a loan, whether to, uh, accept us to a job. Uh, and we will find it more and more difficult to understand the logic, the rationale, why the algorithm refused to give us a loan, why the algorithm accepted somebody else, uh, uh, to- for the job. And, um, you know, you could still have democracies with people voting for this president or this prime minister. But, um, if most of the decisions are made by AIs and humans, including the politicians, have difficulty understanding the reason why the AIs are making a particular decision, then power will gradually shift from humanity, uh, to these new alien intelligences.

    7. SB

      Alien intelligences?

    8. YH

      Yeah, I- I prefer to think about AI... And I know that the acronym is artificial intelligence, but I think it's more accurate to think about it as an alien intelligence, not in the sense of coming from outer space, in the sense that it makes decision in a fundamentally different way than, than human minds. Artificial means or- or- or have the sense that we design it, we control it. Some- something artificial is made by humans. With each passing year, AI is becoming less and less artificial and more and more alien. Uh, yes, we still design the kind of baby AIs, but then they learn and they change and they start making, uh, unexpected decisions and they start coming up with new ideas, which are, again, alien to the human way of- of doing things. You know, there is this famous example with the game of Go, that in 2016, AlphaGo defeated the world champion, Lee Sedol. But the amazing thing about it was, was the way it- it did it, because humans have been playing Go for 2,500 years.

    9. SB

      A board game.

    10. YH

      A board game, a- a- a strategy game developed in ancient China and considered one of the basic arts that any cultivated, civilized person in East- in East Asia had to know. And tens of millions of Chinese and Koreans and Japanese played Go for centuries. Entire philosophies developed around the game, of how to play it. It was considered a good preparation for politics and, and for life. And people thought that they explored the entire, uh, uh, realm, the entire geography landscape of Go.... and then AlphaGo came along and showed us that actually for 2,500 years, people were exploring just a very small bit, a very small part of the landscape of Go. There are completely different strategies of how to play the game that not a single human being came up with in more than 2,000 years of playing it. And AlphaGo came up with it in just a few days. Uh, so this is alien intelligence. And, you know, if it's just a game, then ... but, but the same thing is likely to happen in finance, in medicine, in religion, f- for better or for worse.

  3. 6:48–10:34

    Why AI Is The Biggest Game-Changer In History

    2. SB

      You wrote this book, Nexus. Nexus. How do you pronounce it?

    3. YH

      Uh, Nexus.

    4. SB

      Nexus.

    5. YH

      (laughs) I'm not an expert on pronunciation, so ...

    6. SB

      You could've written many a book. Um, you're someone that's, I think, broadly curious about the nature of life, but also the nature of history. For you to write a book that is so detailed and comprehensive, there must have been a pretty strong reason-

    7. YH

      Hmm.

    8. SB

      ... why this book had to come from you now. And why is that?

    9. YH

      Because I think we need historical perspective on, on the AI revolution. I mean, there are many books about, about AI. This is, Nexus is not a book about the ... about AI. It's a book about the long-term history of information networks. I think that to understand what is really new and important about AI, we need a perspective of thousands of years, to go back and look at previous information revolutions, like the invention of writing and the printing press and the radio, and only then you really start to understand what is happening around us right now. Uh, one thing you understand, for instance, is that AI is really different. People compare it to previous revolutions, but it, it's different because it's the first technology ever, in human history, that is able to make decisions independently and to create new ideas independently. A printing press could print my book, but it could not write it. It could just copy my ideas. An atom bomb could destroy a city, but it could ... it can't decide by itself which city to bomb or why to bomb it. And AI can do that. And, you know, there is a lot of hype right now around AI, so people get confused, because they now try to sell us, to sell us everything as AI. Like, you want to sell this table to somebody? Oh, it's, it's an, it's an AI table.

    10. SB

      (laughs)

    11. YH

      And this water, this is AI water. So people ... What is AI? Everything is AI. No, not everything. Um, there is a lot of automation out there which is not AI. If you think about a coffee machine that makes coffee for you-

    12. SB

      Mm-hmm.

    13. YH

      ... it does things automatically, but it's not an AI. It's pre-programmed by humans to do certain things and it can never learn or change by itself. A coffee machine becomes an AI if you come to the coffee machine in the morning and the machine tells, "Hey, based on what I know about you, I guess that you would like an espresso." It learns something about you and it, it r- ... and, and, and it, it makes an independent decision. It doesn't wait for you to ask for the espresso. And it's, it's really AI if it tells you, "And, and I just came up with a new drink. It's called Boffi and I think you would like it." That's really AI, when it comes up with completely new ideas that we did not program into it and that we did not anticipate. And this is a game changer in history. It, it's bigger than the printing press. It's bigger than the atom bomb.

    14. SB

      You said we need to, uh, have a historical perspective and it ... Do you consider yourself to be a, a historian?

    15. YH

      Yes, that, that's my ... My profession is a historian.

    16. SB

      Mm-hmm.

    17. YH

      That, that's kinda ... This is my training. I was originally a specialist in medieval military history.

    18. SB

      Mm-hmm.

    19. YH

      I wrote about the, uh, Crusades and the Hundred Years' War and the strategy and lo- logistics of the English armies that invaded France in the 14th century (laughs) . This was my, my, my first articles. Um, and this is the kind of, of perspective or of knowledge that I also bring to try and understand what's happening now with, with

  4. 10:34–16:53

    Is AI Just Information Or Something More?

    1. YH

      AI.

    2. SB

      Because most people's understanding of what AI is comes from them playing around with a large language model like ChatGPT or Gemini or Grok or something.

    3. YH

      Mm.

    4. SB

      That's, like, their understanding of it. You can ask it a question and it gives you an answer. That's really what people think of AI as-

    5. YH

      Mm-hmm.

    6. SB

      ... and so it's easy to be a bit complacent with it-

    7. YH

      Mm.

    8. SB

      ... or to see this technological shift as being trivial. But when you start-

    9. YH

      (laughs)

    10. SB

      ... talking about information and the, like, disruption of the flow of information and information networks, and when you bring it back through history and, and you give us this perspective on the fact that information effectively glues this all together-

    11. YH

      Mm-hmm.

    12. SB

      ... then it starts to become, for me, I think about it completely differently.

    13. YH

      I mean, there, there are two ways I think about it. I mean, one way is that when you realize that, as you said, that information is, is the basis for everything, when you start to shake the bases, everything can collapse or, or change, or, or something new could come up. For instance, uh, um, democracies are made possible only by information technology. Democracy, in essence, is a conversation, a group of people conversing, talking, trying to make decisions together. Dictatorship is that somebody dictates everything. One person dictates everything. That's dictatorship. Democracy's a conversation. Now, in the Stone Age, hunter-gatherers living in small bands, they were mostly democratic. Whenever the band needed to decide anything, they could just talk with each other and, and decide. So, the only examples we have from the ancient world for democracies are small city states, like Athens or Republican Rome. These are the two most famous examples, not the only ones, but the most famous. And even the ancients, even philosophers like Plato and Aristotle, they knew once you go beyond the level of a city state, democracy is impossible. We do not know of a single example from the pre-modern world of a large scale democracy, millions of people spread over a large territory conducting their political affairs democratically. Why? Not because of this or that dictator that took power, because democracy was simply impossible. You cannot have a conversation between millions of people when you don't have the right technology. The mo- large-scale democracy becomes possible only in the late modern era when a couple of information technologies appear, first the newspaper, then telegraph, and radio, and, and television, and they make large scale democracy possible. So democracy, it's not like you have democracy and on the side you have these information technologies.
No, the basis of democracy is information technology, so if you have some kind of earthquake in the information technology, like the rise of social media or the rise of AI, this is bound to shake democracy, which is now what we see around the world, is that we have the most sophisticated information technology in history, and people can't talk with each other. The democratic conversation is breaking down. And every country has its own explanation, like you talk to Americans, "What's happening there between Democrats and Republicans? Why can't they agree on even the most basic facts?" And they give you all these explanations about the unique conditions of American history and society, but you see the same thing in Brazil, you see the same thing in, in France, in the Philippines. So it can't be the unique conditions of this or that country. It's the underlying technological revolution. And the other thing that history kind of, uh, uh, um, that I, I bring from history is how even relatively small technological changes, or seemingly small changes can have far-reaching consequences. Like you think about the invention of writing. Originally it was basically people playing with mud. I mean, writing was invented for the first... it was invented many times in many places, but the most, the, the first time in ancient Mesopotamia people take clay tablets, which is basically pieces of mud, and they take a stick and they use the stick to make marks in the, in the clay, in the clay, in the, in the mud, and this is the invention of writing. And this had profound effect. To give just one example, um, you think about ownership. What does it mean to own something, like I own a house, I own a field? So previously before writing, to own a field, if you live in a small Mesopotamian village, like 7,000 years ago, you own a field, this is a community affair. 
It means that your neighbors agree that this field is yours and they don't pick fruits there and they don't graze their sheep there because they agree it's yours. It's a community agreement. Then comes writing, and you have written documents, and ownership changes its meaning. Now to own a f- a, a field or a house means that there is some piece of dry mud somewhere in the archive of the king with marks on it that says that you own that field. So suddenly, ownership is not a matter of community agreement between the neighbors, it's a matter of which document sits in the archive of the king.

    14. SB

      Mm-hmm.

    15. YH

      And it also means, for instance, that you can sell your land to a stranger without the permission of your neighbors simply by giving the stranger this piece of dry mud in exchange for gold or silver or whatever.

    16. SB

      Mm-hmm.

    17. YH

      So, what a big change a seemingly simple invention, like using a stick to, to, to draw some, some signs on a piece of mud. And, and now think about what AI will do to ownership. Like maybe 10 year down the line, to own your house means that some AI says that you own it and if the AI suddenly says that you don't own it, for whatever reason that you don't even know, that's it. It's not yours.

  5. 16:53–21:42

    Can AI Manipulate Our Bank Accounts And Political Views?

    2. SB

      That, that mark on that piece of mud was also the invention of sort of written language. And I think... I, I was thinking about when I was reading your book about how language holds our society together, not in the way that we, we often might assume as in me having a conversation with you, but passwords-

    3. YH

      Oh.

    4. SB

      ... um, poetry-

    5. YH

      So many passwords. (laughs)

    6. SB

      Uh, like banking, uh, it's like our whole society is secured by language.

    7. YH

      Yeah.

    8. SB

      And the first thing that the AIs have mastered is, uh, with large language models, is the ability to replicate that, which is, which made me, I think about all the things that in my life are actually held together with language, even my relationships now-

    9. YH

      Mm-hmm.

    10. SB

      ... because I don't see my friends. My friends live in Dubai, in America, in Mexico, so we conversate in language. Our, our relationships are held together in language. And as you said, democracies are held together in language. Um, and now there's a more intelligent force that's mastered that.

    11. YH

      Yeah.

    12. SB

      So-

    13. YH

      And it was so unexpected. Like, you know, five years ago people said AI will master this or that, self-driving vehicles, but language, nah. This is such a complicated problem. This is a, the human masterpiece, language. It will never master language. And ChatGPT came, and it, it's an... and, you know, I'm, I'm a words person-

    14. SB

      Mm-hmm.

    15. YH

      ... and I'm simply amazed by the...... quality of the texts that, uh, these large language l- language models produce. It's not perfect, but, uh, they really understand the semantic field of words. They can string words together in sentences to form a coherent text. Th- th- that's really remarkable. And as you said, I mean, this is the basis for everything. Like, I give instructions to my bank with language. If AI can generate text and audio and image, then how do I communicate with the bank in a way which is not open to manipulation by an AI?

    16. SB

      But the tempting-

    17. YH

      And-

    18. SB

      ... part in that sentence is you don't like communicating with your bank anyway.

    19. YH

      That's true.

    20. SB

      As in calling them, being on the phone, waiting for another human. So the temp- the temptation is, "You know, I don't like speaking to my bank anyway, so I'm gonna let the AIs do that. I'm gonna invest-"

    21. YH

      If I can trust them. I mean, the big question is, I mean, why does the bank want me to call personally to make sure that it's really me?

    22. SB

      Mm-hmm.

    23. YH

      It's not (laughs) somebody else telling the bank, "Oh, make this transfer to, to, I don't know, Cayman Islands."

    24. SB

      Mm-hmm.

    25. YH

      It's really me. And how do you make sure? H- how do you build this tru- I mean, the, the, the whole of finance for thousands of years is just one question: trust. All these financial devices, money itself is really just trust. It's not made from gold or silver or paper or anything. It's how do we create trust between strangers? And therefore, most financial inventions, in the end, they are linguistic-

    26. SB

      Mm-hmm.

    27. YH

      ... and symbolic inventions. It's not... You don't need some complicated physics. It's, it's complicated symbolism. And now AI might start creating new financial devices and, and will master finance because it mastered language. And, and like you said, I mean, w- we now communicate with other people, our friends all over the world. You know, in the 2010s there was a big battle between algorithms for human attention. We were just discussing it bef- before the podcast. Like, who... H- how do we get the attention of, of people? But there is something even more powerful out there than attention, and that's intimacy. If you really want to influence people, intimacy is, is more powerful than attention.

    28. SB

      How are you defining intimacy in this regard?

    29. YH

      Someone that you have a long-term acquaintance with, that you know personally, that you trust, that, to some extent, that you love, that you care about. Um, and until today, it was utterly impossible to fake intimacy and to mass-produce intimacy. You know, dictators could mass-produce attention. You know, once you have, for instance, radio, you can tell all the people in Nazi Germany or in the Soviet Union, "The great leader is giving a speech. Everybody must turn their radio on and listen." So you can mass-produce attention, but this is not intimacy. You don't have intimacy with the great leader. Now with AI, you can, for the first time in history, at least theoretically, mass-produce intimacy with millions of bots, maybe working for some government, uh, uh, faking intimate relationships with us, which will be hard to, to know t- this is a bot and not a human being.

  6. 21:42–23:44

    How AI Will Affect Human Intimacy

    1. SB

      Hmm. It's interesting because when I... I've had so many conversations with relationship experts and a variety of people that speak to the decline in human-to-human intimacy and the rise in loneliness and-

    2. YH

      Mm-hmm.

    3. SB

      ... us becoming more s- um, sexless as-

    4. YH

      Right.

    5. SB

      ... a society and all of these kinds of things. So it's, it's almost with the decline in human-to-human intimacy and human-to-human connection and the rise of this sort of artif- the possibility of artificial intimacy-

    6. YH

      Mm-hmm.

    7. SB

      ... it begs the question what the future might look like in a world where people-

    8. YH

      (laughs)

    9. SB

      ... are lonelier than ever, more disconnected than ever, but still have the same Maslowian need for that connection and that feeling of, you know, love and belonging. And maybe this is why we're seeing a rise in polarization at the same time-

    10. YH

      Hmm.

    11. SB

      ... because people are desperately trying to belong somewhere and the algorithm is like reinforcing my echo chamber, so I'm-

    12. YH

      Yeah.

    13. SB

      You know? And it's... But I don't know how that ends.

    14. YH

      (inhales deeply) E- it... I don't think it's deterministic. It depends on the decision we make individually and as a society. Uh, there are, of course, also wonderful things that this technology can do for us. Uh, the ability of AI to hold a conversation, the ability to understand your emotions. It can potentially mean that we will have lots of AI teachers and AI, uh, uh, doctors and AI therapists that can give us better healthcare services, better education services than ever before. Instead of being, you know, a w- a kid in a class of 40 other kids that the teacher is barely able to give attention to this particular child and understand his or her specific needs and his or her specific personality, you can have an, uh, uh, AI t- tutor that is focused entirely on you and that is able to give you a quality of education which is really unparalleled.

  7. 23:44–25:09

    Will AI Replace Teachers?

    1. SB

      I had this debate with my friend, uh, on the weekend. He's got two young kids who are one years old and three years old. And we were discussing in the future, in, in sort of 16 years time, where would you rather send your child? Would you rather send your child to be taught by a human in a classroom, as you've described, with lots of people, lots of noise, where they're not getting personalized learning? So if the classroom is more intelligent, they're being left behind. If they're more intelligent, they're being dragged back.

    2. YH

      Mm-hmm.

    3. SB

      Or would you rather your child sat in front of a screen, potentially, or a humanoid robot and was given a really personalized, tailored education?... that was probably significantly cheaper than, say, private education or university.

    4. YH

      Hmm.

    5. SB

      And-

    6. YH

      But you, you need the combination. I mean, I think that the, the, for, for many of the lessons, it will be better to go with the AI tutor, which again, you don't even have to sit in a, in a, in front of a screen. You can go to the park and get a, a, a, a lesson on, on, on ecology-

    7. SB

      Mm-hmm.

    8. YH

      ... just listening on, on, on, on as you, as you walk. But you will need, uh, large groups of kids for break time, because very often you learn that, that the most important lessons in school are not learned during the lessons. They are learned during the breaks.

    9. SB

      Mm-hmm.

    10. YH

      And this is something that (laughs) should not be automated. Uh, you would still need large group of, of children together with, uh, with human supervision, uh, uh, for that.

    11. SB

      The other thing I, I thought about a lot when I was reading your

  8. 25:09–28:42

    Why Online Information Is Junk

    1. SB

      book is this idea that I would assume that us having more information and more access to information would lead to more truth in the world-

    2. YH

      Mm.

    3. SB

      ... less conspiracy, more agreement, but that doesn't seem to be the case.

    4. YH

      No, not at all. Uh, most information in the world is junk. I mean, I think the best way to think about it is it's, it's like with food, that there was a time, like a century ago in many countries, where food was scarce. So people ate whatever they could get, especially it was full of fat and sugar. Uh, and they thought that more food is always good. Like if you ask your great-grandmother, she would, "Yes, more food is always good." And then we reach a time of abundance in, in, in food, and, uh, we have all this industrialized processed food which is artificially full of fat and sugar and salt and whatever, and it's obviously bad for us. The idea that more food is always good, no. And definitely not all this, uh, junk food. And the same thing has happened with information, that information was once scarce. So if you could get your hands on a book, you would read it because (laughs) there was nothing else. And now information is abundant. We are flooded by information, and much of it is junk information which is artificially full of greed and anger and fear because of this battle for attention.

    5. SB

      Mm.

    6. YH

      Um, and it's not good for us. So we basically need to go on an information diet that, uh, again, the first step is to realize that it's not the case that more information is always good for us. We need a limited amount, and we actually need more time to digest the information. And we have to be of- of course also careful about the quality of what we take in because, uh, again, of the abundance of, of junk information. And the, the basic misconception, I think, is this link between information and truth, that people think, "Okay, if I get a lot of information, this is the raw material of truth, and I... more information will mean more knowledge." And that's not the case, because even in nature most information is not about the truth. The basic function of information in history, and also in biology, is to connect. Information is connection. And when you look at history you see that very often the easiest way to connect people is not with the truth, because the truth is a, is a costly and- and- and rare kind of information. It's usually easier to connect people with fantasy, with fiction. Uh, why? Because the truth tends to be not just costly. The truth tends to be complicated, and it tends to be uncomfortable and sometimes painful. Uh, if you think of, you know, like, uh, uh, uh, in politics, uh, a politician who would tell people the whole truth about their nation is unlikely to win the elections, because every nation has these skeletons in the cupboard and all these dark sides and dark episodes that people don't want to be confronted with. So we see that politically it's not... uh, uh, if you want to connect nations, religions, political parties, you often do it with fictions and fantasies.

  9. 28:42–31:52

    How Politicians Use Fear To Manipulate Us

    2. SB

      And fear?

    3. YH

      Um, yeah.

    4. SB

      I was thinking about Sapiens and the role that stories play, um, in engaging our brains, and I was thinking a lot about the narratives. In the UK, we have a narrative where we- we're told that much of the cause of the problems we have in society, unemployment, um, other issues with crime, are because there's people crossing from France on boats.

    5. YH

      Mm.

    6. SB

      And in the U... And it's a very effective narrative-

    7. YH

      Yeah.

    8. SB

      ... to get people to band together to march in the streets. And in America, obviously, the same narrative of the wall-

    9. YH

      Mm-hmm.

    10. SB

      ... and the southern border. Um, "They're crossing our border in the millions. It's they're rapists. It's they're, it's they're not sending their good people. They're coming from mental institutions-"

    11. YH

      Mm-hmm.

    12. SB

      ... has galvanized people together, and those people are now, like, marching in the streets-

    13. YH

      Yeah.

    14. SB

      ... and voting based on-

    15. YH

      Mm-hmm.

    16. SB

      ... that story that is a fearful story.

    17. YH

      It's a very powerful story, because it connects to something very deep in, in inside us. Uh, and if you want to get people's attention, if you want to get people's engagement, so the fear button is one of the most efficient, most effective buttons to press in the human mind, and again, goes back to the Stone Age. So if you live in a Stone Age tribe, uh, one of your biggest worries is that the- the people from the other tribe will come to your territory, and you... will take your food or will, will kill you. Um, so this is a very ingrained fear in not just in humans, in every social animal. You... Th- they did the experiments on chimpanzees that show that chimpanzees have this also a kind of almost instinctive, uh, uh, uh, fear or disgust towards foreign chimpanzees from a different band.... and politicians and religious leaders, and, and, and they learn how to play on these, uh, uh, human emotions, a- almost like you play on a piano. Now originally, these feelings like disgust, they evolved in order to help us. Um, you know, on the, the most basic level, disgust is there because, you know, as, as, especially as a kid, you want to experiment with different foods. But if you eat something that is bad for you, you need to, to, to, you know, uh, uh, uh, puke it. You need to, to, to, uh, uh, throw it out. So you have disgust protecting you. But then you have religious and political leaders throughout history hijacking this defensive mechanism and teaching people from a very young age to, not just to fear, but to be disgusted by foreign people, by people who look different. And this is, again, you, you c- as an adult, you can learn all the theories and you can educate yourself that this is not true, but still very deep in your mind-

    18. SB

      Mm-hmm.

    19. YH

... there is a part that is just, "Ugh, these people are disgusting. These people are dangerous." And we saw it throughout history, how many different movements have learned how to use these emotional mechanisms to motivate

  10. 31:5239:38

    Should There Be A Free Speech Movement

    1. YH

      people.

    2. SB

We sit down at a very interesting time, Yuval, because two quite significant things have happened in the last, I think, year-

    3. YH

      Mm-hmm.

    4. SB

... as it relates to information and many of the things we've been talking about. One of them is that Elon Musk bought Twitter, and his real mandate has been this idea of free speech. And as part of that mandate, he's unblocked a number of figures who were previously blocked on Twitter.

    5. YH

      Mm-hmm.

    6. SB

Um, a lot of them right-leaning people who were blocked for a variety of different reasons. And then also this week, Mark Zuckerberg released basically a public letter, and in that letter he says that he regrets the fact that he cooperated so much with the FBI, the government, when they asked him to censor things on Facebook.

    7. YH

      Mm-hmm.

    8. SB

One particular story he says he regrets, and it looks like, reading between the lines of what he's saying, what he actually says explicitly is, "We're gonna push back harder in the future if governments or anybody else asks us to censor-"

    9. YH

      Hmm.

    10. SB

"... certain messaging." Now, what I'm seeing is that Twitter, which is one of the biggest social networks in the world, and Meta, the biggest social network in the world, have now taken the stance that effectively they're gonna let information flow. They're effectively gonna go for this free speech narrative.

    11. YH

      Mm-hmm.

    12. SB

      Now as someone that's used these platforms for a long time, specifically X or Twitter, it is crazy how different it is these days.

    13. YH

      Mm-hmm.

    14. SB

There are things that I see every time I scroll that I never would have seen before this free speech position. Now, I'm not taking a stance on whether it's good or bad, it's just very interesting. And there's-

    15. YH

      Mm-hmm.

    16. SB

... clearly an algorithm at play now. Like, if I go on X right now, I will see someone being killed with a knife, I reckon within 30 seconds.

    17. YH

      (laughs)

    18. SB

      And I will see someone getting hit by a car. Um, I will see extreme Islamophobia-

    19. YH

      Mm-hmm.

    20. SB

... potentially. Um, but then I'll also see the other side. So it's not just something... I'll see all of the sides. And when you were talking earlier about, like, "Is that good for me?" I had a flashback to my friend this weekend. It was my birthday, so me and my friends were together. Just looking over at him mindlessly scrolling these, like, horror videos-

    21. YH

      (laughs)

    22. SB

      ... on Twitter as he was sat on my left thinking, "God, he's like frying his dopamine receptors."

    23. YH

      (laughs) Oh.

    24. SB

And I just think this whole new, like, free speech movement-

    25. YH

      Mm-hmm.

    26. SB

What is your take on this idea of free speech and the role, you know?

    27. YH

Only humans have free speech. Bots don't have free speech. The tech companies are constantly confusing us about this issue, because the issue is not the humans. The issue is the algorithms. And let me explain what I mean. If the question is whether to ban somebody like Donald Trump from Twitter, I agree this is a very difficult issue, and we should be extremely careful about banning human beings, especially important politicians, from voicing their views and opinions. However much we dislike their opinions, or them personally, it's a very serious matter to ban any human being from a platform. But this is not the problem. The problem on the platform is not the human users. The problem is the algorithms, and the companies constantly shift the blame to the humans in order to protect their business interests. So let me unpack this. Humans create a lot of content all the time. They create hateful content, they create sermons on compassion, they create cooking lessons, biology lessons, so many different things. A flood of information. The big question is then, what gets human attention? Everybody wants attention. Now, the companies also want attention. The companies give the algorithms that run the social media platforms a very simple goal: increase user engagement. Make people spend more time on Twitter, more time on Facebook, engage more, sending more likes and recommending it to their friends. Why? Because the more time we spend on the platforms, the more money they make. Very, very simple. Now the algorithms made a huge, huge discovery.
By experimenting on millions of human guinea pigs, the algorithms discovered that if you want to grab human attention, the easiest way to do it is to press the fear button, the hate button, the greed button, and they started recommending to users more and more content full of hate and fear and greed to keep them glued to the screen. And this is the deep cause of the epidemic of fake news and conspiracy theories and so forth. And the defense of the companies is, "We are not producing the content. Somebody, a human being, produced a hate-filled conspiracy theory about immigrants, and it's not us." It's a bit like the chief editor of the New York Times publishing a hate-filled conspiracy theory on the front page of the newspaper. And when you ask him, "Why did you do it? Look what you did," he says, "I didn't do anything. I didn't write the piece. I just put it on the front page of the New York Times. That's all. That's nothing." It's not nothing. People are producing an immense amount of content. The algorithms are the kingmakers. They are the editors now. They decide what gets viewed. Sometimes they just recommend it to you. Sometimes they actually autoplay it to you. Like, you chose to watch some video, and at the end of the video, to keep you glued to the screen, the algorithm, without you telling it anything, autoplays some kind of video full of fear or greed. It is the algorithm doing it. And this should be banned, or this should at least be supervised and regulated. And this is not freedom of speech, because the algorithms don't have freedom of speech. Yes, the person who produced the hate-filled video, I would be careful about banning them, but that's not the problem. It's the recommendation which is the problem.
The second problem is that a lot of the conversations online are now being overrun by bots. Again, look at Twitter/X as an example. People often want to know what is trending, which stories get the most attention. If everybody's interested in a particular story, I also want to know what everybody's talking about. And very often it's the bots that are driving the conversation, because a particular story initially gets a lot of traction, a lot of traffic, because a lot of bots retweet it, and then people see it and think... They don't know it's bots. They think it's humans. So they say, "Oh, lots of humans are interested in this, so I also want to know what's happening." And this draws more attention. This should be forbidden. Very basically, you cannot have AIs pretending to be human beings. These are fake humans, counterfeit humans. If you see activity online and you think it's human activity, but actually it's bot activity, this should be banned. And it doesn't harm the free speech of any human being, because a bot doesn't have freedom of speech.

  11. 39:3845:30

    How Algorithms Are Shaping Global Politics And Increasing Fear

    2. SB

I was thinking a lot about what you said, that these algorithms are actually running the world. And I mean, yeah, so if the algorithms are deciding what I see based on what I spend my time looking at, because the platforms wanna make more money-

    3. YH

      Mm-hmm.

    4. SB

... and if I have an innate sort of predisposition to spend more time focused on things that scare me-

    5. YH

      Yeah.

    6. SB

... then you just have to give me a couple of years, and every year that goes past, I'll become more fearful, more scared.

    7. YH

It reinforces your own weaknesses. Again, it's like the food industry. The food industry discovered we like food with a lot of salt and fat in it, and gives us more of it. And then it says, "But this is what the customers want. What do you want from us?"

    8. SB

      Mm-hmm.

    9. YH

It's the same thing, but even worse, with these algorithms, because this is the food for the mind. Yes, humans have a tendency, when something is very frightening or fills them with anger, to focus on it and tell all their friends about it. But to artificially amplify it is just not good for our mental health and social health. It is using our own weaknesses against us instead of helping us deal with them.

    10. SB

      Is it fair to say, now this is me just jumping to conclusions a little bit-

    11. YH

      Mm-hmm.

    12. SB

... but is it fair to say that in a world where you remove restrictions around blocking certain characters, right-wing characters whose messages may be based on immigration, et cetera, so they're all allowed on every platform, and then you program the algorithm to be focused on revenue, that eventually more people will become right wing? And I say that in part because it's a right-wing narrative to say that immigrants are bad and that, you know... I'm not saying that the left are innocent, 'cause they're absolutely not.

    13. YH

      Mm-hmm.

    14. SB

      But I'm saying that the fearful narratives, the fear seems to come more from the right in my opinion.

    15. YH

      Mm-hmm.

    16. SB

Like especially in the UK, the fear was about immigrants, that these people can take your money, and all these kinds of things. Um-

    17. YH

I think the key issue is not to label it as a right or left issue, because again, democracy is a conversation, and you can have a conversation only if you have several different opinions. I think it should be okay to have a conversation about immigration, that people should be able to have different opinions about it. That's fine. The problem starts when one side vilifies and demonizes anybody who doesn't think... And you see it, to some extent, from both sides. But in the case of immigration, you would have these conspiracy theories that anybody who supports immigration, for instance, wants to destroy the country, that they are part of this conspiracy to flood the country with immigrants and to change its nature and whatever. And this is the problem... that once you believe that people who don't think like you are not just your political rivals, that they are your enemies, that they are out to destroy you, that they intend to destroy your way of life, your group, then democracy collapses. Because between enemies, democracy doesn't work. It works if you think that the other side is wrong, but they are still essentially good people who care about the country, who care about me, but have different opinions. If you think that they are my enemies, that they are trying to destroy me, then the election becomes like a war, because you're fighting for your survival. You will do anything to win the election, because your survival is at stake. If you lose, you have no incentive to accept the verdict. If you win, you only take care of your tribe and not of the enemy tribe.

    18. SB

      What if you don't believe the election is legitimate?

    19. YH

Then democracy can't function. This is, again, the basic thing: democracy can't exist in just any conditions. It's like a delicate plant that needs certain conditions in order to survive and to flourish. And one condition, for instance, is that you have information technologies that allow a conversation. Another condition is that you trust the institutions. If you don't trust the institution of elections, it doesn't work. And a third condition is that you need to think that the people on the other side of the political divide are my rivals, but they are not my enemies. Now, the problem with what's happening with democratic conversations now is that, because of this tendency to go to more and more extremes, it creates the impression that the other side is an enemy. And this is a problem not just for the right but also for the left; on both sides you see this feeling that the other side is an enemy and that its positions are completely illegitimate. And if we reach that point, then the conversation collapses. It should be possible to have complex conversations and discussions about difficult issues like immigration, like gender, like climate change, without seeing the other side as an enemy, which was possible for generations. So why is it that now it seems to have become impossible to talk with the other side or to agree about anything?

  12. 45:3048:48

    The Impact Of The US Election On Global Politics

    2. SB

      We have a big election in the United States this year.

    3. YH

      Very big one. Yeah. (laughs)

    4. SB

      Do you think a lot about it?

    5. YH

      Uh...

    6. SB

      And the consequences.

    7. YH

Yes, yes. I mean, it seems (inaudible) to be a coin toss, like 50/50.

    8. SB

      Yeah. Mm-hmm.

    9. YH

Um, you know, elections become really an existential issue if there is a chance they will be the last elections. (laughs) If one side intends to simply change the rules of the game if it comes to power, then it becomes existential. Because again, democracy works on the basis of self-correcting mechanisms; this is the big advantage of democracy over dictatorship. In a dictatorship, a dictator can make a lot of good decisions, but sooner or later, they will make a bad decision, and there is no mechanism in a dictatorship to identify and correct such mistakes.

    10. SB

      Like Putin?

    11. YH

Yeah. There is just no mechanism in Russia that could say, "Putin made a mistake. He should go. He should let somebody else try a different course of action." This is the great advantage of democracy. You try something. It doesn't work. You try something else. But the big problem is, what if you choose someone who then changes the system, neutralizes its self-correcting mechanisms, and then you cannot get rid of him anymore? This is what happened, for instance, in Venezuela. Originally, Chávez and the Chavista movement came to power democratically. People wanted to say, "Hey, let's try this." And now, in the last elections a couple of weeks ago, the evidence is very, very clear that Maduro lost big time.

    12. SB

      Mm-hmm.

    13. YH

But he controls everything, the election committee, everything. And he claims, "No, I won." And they destroyed Venezuela. You know, something like a quarter of the population fled the country, which was one of the richest countries in South America before, and they just can't get rid of the guy.

    14. SB

      Surely that will never happen in the West.

    15. YH

Oh, it's... Don't say never in history. History can catch up with you, whoever you are.

    16. SB

That's one of the illusions we come-

    17. YH

And Venezuela was part of the West, and in many ways still is.

    18. SB

      This is one of the illusions we live under, though. We think, oh, that can never happen to the UK or the United States or Canada, these sort of, quote unquote, "civilized"-

    19. YH

      Mm-hmm. (laughs)

    20. SB

      ... nations.

    21. YH

You know, according to some measurements, democracy in the United States is quite new and quite fragile, if you think about it in terms of who gets to vote, for instance.

    22. SB

      Mm-hmm.

    23. YH

So, again, I don't know what the chances are, but even if there is a 20% chance that a Trump administration would change the rules of the game of American democracy, for instance by changing the rules about who votes-

    24. SB

      Mm-hmm.

    25. YH

... or how you count votes, in such a way that it will become almost impossible to get rid of them. That's not outside the possible in historical terms.

  13. 48:4850:29

    What Trump Could Do To US Democracy

    1. SB

      Do you think it's possible that Trump will do that?

    2. YH

Yes. I mean, you saw it on the 6th of January. The most sensitive moment in every democracy is the moment of transfer of power. And the magic of democracy is that it is meant to ensure a peaceful transfer of power. As I said, you choose one party, you give them a try, and after some time, if people say, "They didn't do a good job," let's try somebody else. And, you know, in the United States, we have people who hold the biggest power in the world. The president of the United States has enough power to destroy human civilization, with all these nuclear missiles, all this armament. And he loses the election, and he says, "Okay, I give up all this power and I let the other guy try." This is amazing. And this is exactly what Trump didn't do. From the beginning, I mean, even from 2016, he refused... They asked him directly, "If you lose the election, will you accept the results?" And he said no. And in 2020, he did not hand over power peacefully. He tried to prevent it. And the fact that he's now running again... I think, to some extent, the lesson he got from the 6th of January is, "I can basically get away with anything, at least with my people, with my base." It was like a test, a try: "If I do this extreme thing and they still support me afterwards, it basically means they will support me no matter what I do."

    3. SB

      I'm,

  14. 50:2955:37

    Can We Trust What We See On Social Media And The News?

    1. SB

I'm wondering, in a world of such a fragile democracy, when information flows and networks are disrupted by something like AI, if misinformation and disinformation and the ability for me to make a video ... I can make a video right now of Donald Trump speaking and saying something in his voice.

    2. YH

      Hmm.

    3. SB

And I could help that video go viral. Like, how do you hold together democracy and communication when you don't believe anything that you're seeing online?

    4. YH

      Hmm.

    5. SB

      And we're just at the start of this now, so I could-

    6. YH

We haven't seen anything yet. This is really just the first baby steps of this.

    7. SB

I'm gonna play a video on this screen right now so people can see.

    8. YH

      Mm-hmm.

    9. SB

      And for those listening, you'll just hear it. But I'm gonna play a video that Isaac over there-

    10. YH

      Hmm.

    11. SB

... in the corner of the room made of me speaking in this chair. And it wasn't me, and I didn't say it, and I wasn't in this chair. "Hey there. This is AI Steve. Do you think I'll be able to take over The Diary of a CEO one day? Leave your comments below." And it sounds exactly like me, identical.

    12. YH

      Hmm.

    13. SB

And it's not me. And I wonder this with, you know... Most of us get our political information, and our information generally, now from social media.

    14. YH

      Yeah.

    15. SB

From... And if I can't believe anything that I'm seeing, because it's all easy (snaps fingers) to make, some kid in Russia in their bedroom can make a video of the prime minister here, I don't know where we get our information from anymore, how we verify it.

    16. YH

The answer is institutions. We've been in this situation many times before in history, and the answer is always the same: institutions. You cannot trust the technology. You trust the institution that verifies the information. Think about it like with print. You can write anything you want on a piece of paper. You can write, "The Prime Minister of Britain said," and then you open quotation marks and you put something into the mouth of the prime minister. And when people read it, they don't believe it, or they shouldn't believe it. Just because it's written that the prime minister said it doesn't mean that it's true. So how do we know which pieces of paper to believe? We believe, or there is a greater chance we will believe, if on the front page of the New York Times or the Sunday Times or the Guardian it says, "The British prime minister said," open quotation marks, "blah, blah, blah." Because we don't trust the paper or the ink. We trust the institution of the Guardian or the Wall Street Journal or whatever. With videos, we never had to do that, because nobody could fake them, so we trusted the technology. If we saw a video, we said, "Ah, this has to be true." But when it becomes very easy to fake videos, then we revert to the same principle as with print. We need an institution to verify it. If we see the video on the official website of CNN or of the Wall Street Journal, then we believe it, because we believe the institution backing it. And if it's just something on TikTok, we know that any kid can do that. Why should I believe it? So now we are in the transition period. We are still not used to it. So when we see a video of Donald Trump or Joe Biden, the video still gets to us, because we grew up in a time when it was impossible to fake it.
But I think very quickly people will realize you can't trust videos; you can only trust the institutions. And the question is whether we will be able to create and maintain trustworthy institutions fast enough to save the democratic conversation. Because if not, if you can't believe anything, this is the ideal for dictators. When you can't trust anything, the only system that works is a dictatorship. Because democracy works on trust, but dictatorship works on terror, on fear. You don't need to trust anything in a dictatorship. You don't trust anything; you fear. For democracy to work, you need to trust, for instance, that some information is reliable, that the election committee is impartial, that the courts are just. And if more and more institutions are attacked and people lose trust in them, then democracy collapses. But going back to information: one option is that the old institutions, like newspapers and TV stations, will be the institutions that we trust to verify certain videos. Or we will see the emergence of new institutions. And again, the big question is whether we'll be able to develop trust in them. And I specifically say institutions and not individuals. No large-scale society, especially not a democratic society, can function without trustworthy bureaucratic institutions.

    17. SB

      And

  15. 55:3758:32

    Will AI Eventually Run Governments?

    1. SB

      will those bureaucratic institutions be AI?

    2. YH

      That's the big question, because increasingly, AIs will be the bureaucrats. And-

    3. SB

What do you mean by bureaucrats? What does the word bureaucrat mean?

    4. YH

Ooh, that's a very important question, because human civilization runs on bureaucracy.

    5. SB

      Bureaucrats are essentially officials in government that try and-

    6. YH

Not just in government. I mean, the origin of the word bureaucrat comes from French, from the 18th century. Bureaucracy means the rule of the writing desk: to rule the world, or to rule society, with pens and papers and documents. Like the example we gave in the very beginning about ownership. You own a house because there is a document in some archive that says that you own it. A bureaucrat produced this document, and if you now need to retrieve it, this is the job of a bureaucrat, to find the right document at the right time. And all big systems run on it. Hospitals and schools and corporations and banks and sports associations and libraries, they all run on these documents and the bureaucrats who know how to read and write and find and file documents. One of our big problems is that it's difficult for us to understand bureaucratic systems, because they are a very recent development in human evolution, and this makes us suspicious about them, and we tend to believe all kinds of conspiracy theories about the deep state and about what's going on in all these bureaucracies. And it's really complicated, and it's going to be more complicated as more of the decisions are made by AI bureaucrats. An AI bureaucrat means that decisions like how much money to allocate to a particular issue will no longer be made by a human official. They will be made by an algorithm. And when people ask, "Why is the sewage system broken? Why didn't they give enough money to fix it?" I don't know. The algorithm just decided to give the money to something else.

    7. SB

Why will bureaucracies be run by AI over people? Like, why will a nation at some point decide that, in fact, AI is better at making these decisions?

    8. YH

First of all, it's not a future development. It's already happening. More and more of the decisions are being made by AIs, and this is just because the amount of information you need to take into account is enormous. It's very difficult for humans to do it. It's much easier (laughs) for the AIs

  16. 58:321:02:01

    What Jobs Will AI Leave For Humans?

    1. YH

      to do it.

    2. SB

If all these people, you know, bureaucrats, lawyers, accountants... I always wonder, what are humans gonna be left to do? In your book, you say that AI is going so far beyond human intelligence that it should actually be referred to as alien intelligence.

    3. YH

      Mm.

    4. SB

And if it goes so far beyond human intelligence, it's my assumption that most of the work that we do is based on intelligence. So even like me doing this podcast now.

    5. YH

      Hm.

    6. SB

      This is me asking questions-

    7. YH

      Yeah.

    8. SB

... based on information that I've gathered, based on what I think I'm interested in, but also based on what I think the audience will be interested in. And compared to AI, I'm like a little monkey. Like I-

    9. YH

      (laughs)

    10. SB

      Do you know what I mean? If, if-

    11. YH

      Oh.

    12. SB

... an AI has an IQ that is 100 times mine and a source of information that is a million times bigger than mine, there's no need for me to do this podcast.

    13. YH

      Hm.

    14. SB

      I can get an AI to do it. And in fact, an AI can talk to an AI and deliver that information to a human. But then if we look at most industries, like being a lawyer-

    15. YH

      Mm-hmm.

    16. SB

... um, accountancy... I mean, a lot of the medical profession is based on information.

    17. YH

      Yeah.

    18. SB

Um, driving. I think the biggest employer in the world is the profession of driving, whether it's delivery or Uber or whatever it is. Where do humans belong in all of this?

    19. YH

      Anything which is just information in and information out is ripe for automation. These are the easiest jobs to automate. Um-

    20. SB

      Like being a coder.

    21. YH

Like being a coder, or again, like being an accountant. At least certain types of accountants, lawyers, doctors, they are the easiest to automate. If the only thing a doctor does is take information in, all kinds of results of blood tests and whatever, and put information out, they diagnose the disease and write a prescription, this will be easy to automate in the coming years and decades. But a lot of jobs also require social skills and motor skills. If your job requires a combination of skills from several different fields, it's not impossible, but it's much more difficult to automate. So if you think about a nurse who needs to replace a bandage for a crying child, this is much, much harder to automate than a doctor who just writes a prescription. Because this is not just data. The nurse needs good social skills to interact with the child and motor skills to replace the bandage. So this is harder to automate. And even for people who just deal with information, there will be new jobs. The problem will be the retraining, and not just retraining in terms of acquiring new skills, but psychological retraining. How do you reinvent yourself in a new profession, and do it not once, but again and again and again? Because as the AI revolution unfolds, and we are just at the very beginning of it, we haven't seen anything yet, there will be old jobs disappearing and new jobs emerging, but the new jobs will rapidly change and vanish, and then there will be a new wave of new jobs, and people will have to reinvent themselves four, five, six times to stay relevant. And this will create immense psychological stress.

  17. 1:02:011:05:33

    Which Jobs Will Be Automated By AI?

    2. SB

      So many of the big companies are also working at the same time on humanoid robots.

    3. YH

      Mm.

    4. SB

      There's this humanoid robot race going on. And by humanoid robots, I mean, you know, Tesla have their humanoid robot, I think it's called Optimus-

    5. YH

      Mm. Yeah.

    6. SB

... which they're developing, and it'll cost, you know, X thousands of pounds. And I watched a video of it recently where it can do quite delicate, sort of motor skill-

    7. YH

      Mm.

    8. SB

      ... based stuff, so probably clean the house. It can probably work on the production line.

    9. YH

      Mm-hmm.

    10. SB

      It can probably put things in boxes. Um, and I just wonder, when we say, you know, people are gonna lose their jobs, in a world where you have humanoid robots and you have intelligence that's beyond us, and you combine the two, where these humanoid robots are very, very intelligent-

    11. YH

      Mm-hmm.

    12. SB

      ... like, I don't know what-

    13. YH

      (laughs)

    14. SB

      ... I'm like, "Where do the, the unemployed go to-"

    15. YH

      Hm.

    16. SB

      ... to, to find these new professions?" Like obviously, it's, it's difficult to forecast the new professions of the future. History tells us that.

    17. YH

      Yeah.

    18. SB

      But I can't ... yeah, I can't figure out what the new professions are. I mean, my girlfriend does breath work. I guess the breath work part is quite easy to disrupt, but then she takes women away for retreats in Portugal and stuff.

    19. YH

      Hm.

    20. SB

      So I'm like, "Okay, she's gonna kind of be safe because-"

    21. YH

      (laughs)

    22. SB

      "... these women are going there to connect with humans and to be in this little special s- place offline intentionally." So retreats, she'll probably be fine.

    23. YH

      Hm. Yeah, anything that ... you know, there are things that we want in life which are not just about solving problems, like, "I'm sick. I want to be healthy."

    24. SB

      Mm.

    25. YH

      "I want my problem solved," but there are many things where what we want is a connection. Like if you think about sports: robots or machines have been able to run much faster than people for a very long time now. And we just had the Olympics, and people are not very interested in seeing robots running against each other, or against people, because what really makes sports interesting in the end is the human weaknesses and the ability of humans to deal with their weaknesses. And human athletes still have jobs.

    26. SB

      Right.

    27. YH

      Even though, again, in many events, like running, you can have a machine run much faster than the world champion. Or-

    28. SB

      I thought about this the other day.

    29. YH

      And another example is priests. One of the easiest jobs to automate is the priesthood, at least in certain religions, because you just need to repeat the same texts and gestures again and again in specific situations. Like if you have a wedding ceremony, the priest just needs to repeat the same words, and there you are, you're married. Now, we don't think about priests as being in danger of being replaced by robots, because what we want from a priest is not just the mechanical repetition of certain words and gestures. We think that only another frail flesh-and-blood human who knows what is pain and love, and who can suffer, only they can connect us to the divine. So most people would not be interested in having their wedding conducted by a robot, even though technically it's very easy to do. Now, the big question, of course, is what happens if AI gains consciousness? This is the trillion-dollar question of AI consciousness. Then all bets are off, but that's a different and very, very big discussion, I mean, whether it's possible, how would we know, and so forth.

  18. 1:05:33–1:07:19

    Is AI Conscious?

    1. YH

    2. SB

      Do you think it's possible?

    3. YH

      We have no idea. I mean, we don't understand what consciousness is. We don't know how it emerges in the organic brain, so we don't know if there is an essential connection between consciousness and organic biochemistry-

    4. SB

      Mm-hmm.

    5. YH

      ... so that it can't arise in an inorganic, silicon-based computer. There is a big confusion, it should be said again first of all, between consciousness and intelligence. Intelligence is the ability to reach goals and solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate. Humans and other animals solve problems through our feelings. Our feelings are not something on the side; they are a main method for how to deal with the world, how to solve problems. Now, so far, computers solve problems in a completely different way than humans. Again, they are alien intelligence. They don't have any feelings. When they win a game of chess, they are not joyful. When they lose a game, they are not sad. They don't feel anything. Now, we don't know how organic brains produce these feelings of pain and pleasure and love and hate, so this is why we don't know whether an inorganic structure based on silicon and not carbon will be able to generate such things or not. That's, I think, the biggest question in science. And so far we have no answer.

    6. SB

      Isn't

  19. 1:07:19–1:10:01

    AI, Robots, And The Future Of Consciousness

    1. SB

      consciousness just like a hallucination? Isn't it just like an illusion that I think I'm conscious because I've got the circuitry which tells me that I am effectively? It tells me through a bunch of like feelings and things that I'm conscious. Like, I think I'm looking at you now. I think I can see you. But that's just

    2. YH

      No. The- the-

    3. SB

      ... function.

    4. YH

      ... feeling is real. I mean even if we are all... It's like the matrix.

    5. SB

      How do you know it's real?

    6. YH

      And we are all in... Hmm?

    7. SB

      How do you know it's real?

    8. YH

      It's the only real thing in the world. I mean, there is nothing... Everything else is just conjecture.

    9. SB

      You know what?

    10. YH

      We only experience our own feelings: what we see, what we smell, what we touch. This we actually experience. This is real. Then we have all these theories about, "Why do I feel pain? Oh, it's because I stepped on a nail, and there is such a thing in the world as a nail," and whatever. It could be that we are all inside a big computer on the planet Zircon run by superintelligent mice.

    11. SB

      If I spoke to an AI-

    12. YH

      Mm-hmm.

    13. SB

      ... I could get an AI to- to tell me that it feels pain and sadness.

    14. YH

      That's... That- that's the...

    15. SB

      (laughs)

    16. YH

      That's a big problem, because there is a huge incentive to train AIs to pretend to be alive, to pretend to have feelings. And we see that there is a huge effort to produce such AIs. And in truth, because we don't understand consciousness, we don't have any proof even that other humans have feelings.

    17. SB

      Mm-hmm.

    18. YH

      We... I- I feel my own feelings, but I never feel your feelings.

    19. SB

      Mm-hmm.

    20. YH

      I only assume that you are also a conscious being. And society grants the status of a conscious entity not only to humans, but also to some animals, not based on any scientific proof, but based on social convention. Like most people feel that their dogs are conscious.

    21. SB

      (laughs)

    22. YH

      That their dogs can feel pain and pleasure and love and so forth, so most societies accept that dogs are sentient beings and they have some rights under the law. Now, even if AI has no feelings, no consciousness, no sentience whatsoever, but it becomes very good at pretending to have feelings and convincing us that it has feelings, then this will become a social convention: people will feel that their AI friend is a conscious being and therefore should be granted rights.

    23. SB

      Mm-hmm.

    24. YH

      And there is even already a legal path for how to do it, at least in the United States. You don't need to be a human being in order to be a legal person.

  20. 1:10:01–1:13:09

    Are We Living In A Simulation?

    1. YH

    2. SB

      It's funny because you kind of alluded jokingly to the fact that we might just be in like a simulation. It was one of your, like, "Well, maybe we're just in a simulation," but-

    3. YH

      Yeah. Could be. (laughs)

    4. SB

      But... And- and it's funny because in a world of AI, I- I think (laughs) my belief in that as a possibility has only increased-

    5. YH

      Yes.

    6. SB

      ... that this is in fact just a simulation because I've watched us go from when I was born, not really having internet access-

    7. YH

      (laughs)

    8. SB

      ... to now being able to kind of speak to this alien on my computer that can now do things for me, and having virtual reality experiences which are sometimes quite indistinguishable, where, you know... I fall into the trap of believing that I am inside Squid Game-

    9. YH

      Mm-hmm.

    10. SB

      ... because I've got this headset on. And you play it forward and you play it forward and you play it forward and you imagine any rate of improvement, then I hear the- the arguments for simulation theory and I go, "Do you know what? Probably if you play this forward 100 years-

    11. YH

      Hmm. (laughs)

    12. SB

      ... you know, like at the rate we're on, of the rate of trajectory that we're on, then we will be able to create information networks and organisms that don't... In like a laboratory or in a computer, that don't necessarily realize-

    13. YH

      Yeah.

    14. SB

      ... they're in the computer, especially with like what's going on with bio-

    15. YH

      It's already happening to some extent, you know, these information bubbles that more and more people live inside.

    16. SB

      Mm-hmm.

    17. YH

      It's still not the whole physical world, but you get the same event, and people on, say, different parts of the political spectrum just can't agree on anything. They live in their own matrixes.

    18. SB

      Mm-hmm.

    19. YH

      And, you know, when the internet came along for the first time, the main metaphor was the web, the World Wide Web. A web is something that connects everything. And now this simulation theory is representing the new metaphor. The new metaphor is the cocoon. It's a web that turns on you and then closes you in from all sides so you can no longer see anything outside-

    20. SB

      Yeah.

    21. YH

      ... and there could be other cocoons with other people in there, and you have no way to get to them.

    22. SB

      Yeah.

    23. YH

      No- nothing that happens in the world can connect you anymore because you're in different cocoons.

    24. SB

      You've only gotta look at someone else's phone.

    25. YH

      (laughs)

    26. SB

      You've only got to look at someone else's Twitter or X or Instagram feed.

    27. YH

      Is this the same reality?

    28. SB

      It is so different. I-

    29. YH

      (laughs)

    30. SB

      Do you know what I was talking about? Over the weekend, my friend was sat to my left scrolling.

  21. 1:13:09–1:16:21

    How Algorithms Control Our Lives

    1. SB

    2. YH

      And this is a very ancient fear because, for instance, Plato wrote exactly about that. The most famous parable, I think, from Greek philosophy is the allegory of the cave, in which Plato imagines a theoretical scenario, an imaginary scenario, of a group of prisoners chained inside a cave with their faces to a blank wall on which shadows are being projected from behind them, and they mistake the shadows for reality. And he was basically describing, you know, people in front of a screen-

    3. SB

      Mm.

    4. YH

      ... just mistaking the screen for reality. And you have the same thing in ancient India with Buddhist and Hindu sages talking about Maya, which is the world of illusions, and the deep fear that maybe we are all trapped inside a world of illusions, that the most important things we think about in the world, the wars we fight, we fight wars over illusions in our mind. And this is now becoming technically possible. Previously, these were philosophical thought experiments. Now, part of what is interesting as a historian about the present era is that a lot of ancient philosophical problems and discussions are becoming technical issues. You can suddenly realize Plato's cave in your phone.

    5. SB

      So scary. I find it really scary because you're right. Like, I think right now some people might say that they have some kind of grasp over like the ranking system or why something-

    6. YH

      Mm.

    7. SB

      ... shows up when I search it or whatever. But as these alien intelligences become more and more powerful, of course we'll have less understanding, because we're handing over the decision-making.

    8. YH

      In some industries, they are now completely the kingmakers. Like, I'm here on a book tour. I wrote Nexus, so I go from podcast to podcast, from TV station to TV station, to talk about my book. But the entities I'm really trying to impress are the algorithms.

    9. SB

      Mm-hmm.

    10. YH

      Because if I can get the attention of the algorithms, the humans will follow.

    11. SB

      (laughs)

    12. YH

      (laughs)

    13. SB

      Ooh. Ugh, yuck.

    14. YH

      Uh, you know, that's our real... We are basically kind of carbon creatures in a silicon world.

    15. SB

      I used to think we were in control, though. And now I feel like the silicon's in control.

    16. YH

      Control is shifting. That's... We are still in control to some extent. We are still making the most important decisions, but not for long. And this is why we have to be very, very careful about the decisions we make in the next few years, because in 10 or 20 years it could be too late. By then, the algorithms will be making the most important decisions.

  22. 1:16:21–1:21:13

    Understanding The AI Alignment Problem

    1. YH

    2. SB

      You talk about a couple of, um, big dangers you see with the algorithms and AI and this sort of shift and disruption of information. One of them is this alignment problem.

    3. YH

      Mm.

    4. SB

      Which, um, how would you explain the alignment problem to me in a way that's simple to understand?

    5. YH

      So the classical example is a thought experiment invented by the philosopher Nick Bostrom in 2014, which sounds crazy, but, you know, bear with it. He imagines a superintelligent AI computer which is bought by a paperclip factory. And the factory manager tells the AI, "Your goal... The reason I bought you, your goal, your entire existence, you're here to produce as many paperclips as possible. That's your goal." And then the AI conquers the entire world, kills all humans, and turns the entire planet into factories for producing paperclips. And it even begins to send expeditions to outer space to turn the entire galaxy into just paperclip production industry. And the point of the thought experiment is that the AI did exactly what it was told. It did not rebel against the humans. It did exactly what the boss wanted, but of course, the strategy it chose was not aligned with the real intentions, with the real interests, of the human factory manager, who just couldn't foresee that this would be the result. Now this sounds outlandish and ridiculous and crazy, but it already happened to some extent, and we talked about it. This is the whole problem with social media and user engagement. In the very same years that Nick Bostrom came up with this thought experiment in 2014, the managers of Facebook and YouTube told their algorithms, "Your goal is to increase user engagement." And the algorithms of social media conquered the world and turned the whole world into user engagement, which was what they were told to do. We are now very, very engaged. And again, they discovered that the way to do it is with outrage and with fear and with conspiracy theories. And this is the alignment problem.
When Mark Zuckerberg told the Facebook algorithms, "Increase user engagement," he did not foresee and he did not wish that the result would be a collapse of democracies, a wave of conspiracy theories and fake news, hatred of minorities. He did not intend it. But this is what the algorithms did, because there was a misalignment between the goal that was defined for the algorithm and the interests of human society, and even of the human managers of the companies that deployed these algorithms. And this is still a small-scale disaster, because the social media algorithms that created all this social chaos over the last 10 years are very, very primitive AI. If you think about the development of AI as an evolutionary process, this is still the amoeba stage.
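The engagement misalignment described above can be sketched as a toy simulation. All content categories and click-through rates here are hypothetical, invented purely for illustration: a greedy recommender rewarded only on clicks drifts toward whatever earns the most clicks per view, here labelled "outrage", without ever being told to promote it.

```python
# Toy sketch (hypothetical numbers): an "engagement-only" recommender
# converges on outrage content because that is what the metric rewards.
import random

random.seed(0)

# Assumed average click-through rates per content type (illustrative only).
CTR = {"news": 0.05, "cats": 0.08, "outrage": 0.20}

counts = {k: 1 for k in CTR}   # times each type was shown (optimistic start)
clicks = {k: 1 for k in CTR}   # clicks each type received

for step in range(10_000):
    # Greedy choice: show whatever has earned the most clicks per view so far.
    choice = max(CTR, key=lambda k: clicks[k] / counts[k])
    counts[choice] += 1
    clicks[choice] += random.random() < CTR[choice]  # simulated user click

shares = {k: counts[k] / sum(counts.values()) for k in CTR}
print(shares)  # "outrage" dominates: the stated goal was met, the intent was not
```

The point mirrors the paperclip story: nothing in the objective mentions outrage; it simply falls out of optimizing the proxy metric.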

    6. SB

      The amoeba being the very simple-

    7. YH

      The very simple-

    8. SB

      ... single cell-

    9. YH

      ... life forms. The beginning, like single-cell life forms. In evolutionary terms, organic evolution, we are still like billions of years before we will see the dinosaurs and the mammals or the humans. But digital evolution is billions of times faster than organic evolution. So the distance between an AI amoeba and the AI dinosaurs could be covered in just a few decades. If ChatGPT is the amoeba, what would the AI Tyrannosaurus rex look like? And this is where the alignment problem becomes really disconcerting. Because if so much damage was done by giving the wrong goal to a primitive social media algorithm, what would be the results of giving a misaligned goal to a T. rex AI in 20 or 30 years?

  23. 1:21:13–1:25:04

    The Relationship Between AI And Corporate Interests

    1. YH

    2. SB

      The, the issue at the heart of this is, you know, some people might think, okay, just give it a different goal. But when you're dealing with private companies who are listed on the stock market, there really is only one goal that keep that-

    3. YH

      Make money. (laughs)

    4. SB

      Exactly. That benefits survival. So all of the platforms have to say, you know, "The goal of this platform is to make more money and to get more attention."

    5. YH

      Because it's also mathematically easy. There is a huge problem in how to define goals for AIs and algorithms in a way they can understand. Now, the great thing about "make money" or "increase user engagement" is that it's very easy to measure mathematically.

    6. SB

      Mm-hmm.

    7. YH

      One day you have a million hours being watched on YouTube; a year later, it's two million. Very easy for the algorithm to see, "Hey, I'm making progress." But let's say that Facebook had told its algorithm, "Increase user engagement in a way that doesn't undermine democracies." How do I measure that? Who knows the definition of the robustness of democracy? Nobody knows.
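The measurement asymmetry described here can be made concrete in a short sketch (all function names and figures are hypothetical, chosen only to illustrate the contrast): engagement reduces to simple arithmetic, while "don't harm democracy" has no agreed computable definition.

```python
# Hypothetical sketch of the measurability gap: one goal is arithmetic,
# the other has no known mathematical definition.

def engagement(watch_hours: list) -> float:
    """Trivially measurable: total hours watched."""
    return sum(watch_hours)

def democratic_robustness(society) -> float:
    """Nobody knows how to compute this; the goal is unmeasurable."""
    raise NotImplementedError("no agreed mathematical definition")

last_year = [0.5] * 2_000_000   # illustrative: 1M total hours watched
this_year = [0.5] * 4_000_000   # illustrative: 2M total hours watched

# The algorithm can see progress on the measurable goal immediately.
print(engagement(this_year) > engagement(last_year))  # True
```

An optimizer can only chase the goal it can score, which is why, as the conversation notes, the easy-to-measure goals win out.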

    8. SB

      Mm-hmm.

    9. YH

      So defining the goal for the algorithm as "increase user engagement but don't harm democracy" is almost impossible. This is why they go for the easy goals, which are the most dangerous.

    10. SB

      But even in that scenario, if I told... if I'm the owner of a social network and I say, "Increase user engagement, but don't harm democracy," the problem I have is my competitor who leaves out the second part-

    11. YH

      Mm-hmm.

    12. SB

      ... and just says, "Increase user engagement," is gonna beat me because they're gonna have more users, more eyeballs, more revenue. Advertisers are gonna be happier, then my company's gonna falter, investors are gonna pull out.

    13. YH

      That's a question, because there are two things to take into consideration. First of all, you have governments. Governments can regulate, and they can penalize a social media company that defines goals in a socially irresponsible way, just as they penalize newspapers or TV stations or car companies that behave in an antisocial way. The other thing is that humans are not stupid and self-destructive; we would like to have better products, in the sense of also socially better products. And I gave earlier the example with food diets. Yes, the food companies discovered that if they fill a product artificially with lots of fat and sugar and salt, people will like it. But people discovered that this is bad for their health, so you now have, for instance, a huge market for diet products, and people are becoming very aware of what they eat. The same thing can happen in the information market. Uh-

    14. SB

      The cost though is like 80... 70%, 80% of people in the US have like chronic disease and are obese and-

    15. YH

      Mm-hmm.

    16. SB

      ... you know, life expectancy now looks like it's going the other way a little bit in the Western world. And it's... Uh, I don't know. I just feel like, um, with policing consumption of goods like alcohol-

    17. YH

      Mm-hmm.

    18. SB

      ... nicotine, food seems much more simple than policing information-

    19. YH

      Mm-hmm.

    20. SB

      ... and the flow of information beyond, you know, beyond racism or like inciting violence. I don't know how you police...

    21. YH

      We already covered this; the two most basic and powerful tools are to hold companies liable for the actions of their algorithms. Not for

  24. 1:25:04–1:33:08

    The Growing Threat Of Totalitarian Governments

    1. YH

      the content that the users produce, but for the actions of the algorithms. I don't think we should penalize Twitter or Facebook if somebody posts a racist post. I would be very careful about penalizing Facebook for that, because then who decides what is racism, and so forth? But if the algorithm of Facebook deliberately spreads some racist conspiracy theory, that's the algorithm. That's not human free speech and-

    2. SB

      How do you know it's a racist conspiracy theory though?

    3. YH

      Okay. So now we get to the difficult conversation, but this is something that we have the courts for. And I would be very careful about having the courts judge the content produced by individual users. But when it comes to algorithms deliberately, routinely spreading a particular type of information, like a conspiracy theory, we can involve the courts. The key issue is who has liability: the company is liable for what the algorithm is doing, not the human individual for what they are saying. And another key distinction here is between private and public. Part of the problem is the erasure of the boundary between the two. I think that humans have a right to stupidity in private, that in your private space with your friends and with your family, you have a right to stupidity. You can say stupid things. You can tell racist jokes, you can tell homophobic jokes. It's not good, it's not nice, but you're a human being, you're allowed to do that. But not in public. I mean, even for politicians: as a gay person, if the prime minister tells a homophobic joke in private, I don't need to care about that. That's his or her business. But if they say it in public, on television, that's a huge problem. Now, traditionally, it was very easy to distinguish private from public. You are in your private house with a group of friends, you say something stupid, that's private. It's nobody's business. You go to the town square, you stand on a pedestal, and you shout something to thousands of people, that's public. Here you can be punished if you say something racist or homophobic or outrageous. But it was easy for you to know. Now the problem is, you go, let's say, on WhatsApp, you think you're just talking with two of your friends, and you say something really stupid, and then it goes viral, and it's all over the place.
And I don't have an easy solution for that. But one measure which is adopted by some governments is, for instance, that people who have a large following are held to a different standard than people who don't. Even on the most basic thing of identifying yourself as a human being: we don't want everybody to have to get some certification from the government to talk with their friends on WhatsApp. But if you have 100,000 followers online, we need to know that you are not a bot, that you're actually a human being. And again, this is not covered by freedom of speech, because bots don't have freedom of speech.

    4. SB

      A slippery slope, right, because I've, I've gone back and forth on this argument of anonymity and whether-

    5. YH

      Mm-hmm.

    6. SB

      ... it's a good thing or a bad thing for social networks and the rebuttal that I got when I went to the side of, um, IDing people is that like totalitarian governments will use that as a way to basically punish the people who are speaking up.

    7. YH

      The totalitarian governments are doing it whether we like it or not.

    8. SB

      Yeah.

    9. YH

      It's not a question of, if the British do it, then the Russians will say, "Okay, so we'll also do it." The Russians are doing it anyway.

    10. SB

      Will Americans start to do it? Will they start to... If, if someone speaks out against Trump-

    11. YH

      Mm-hmm.

    12. SB

      ... and he has access to their identity and information, can he go look at them and get them arrested?

    13. YH

      If we reach that point-

    14. SB

      Like a whistleblower.

    15. YH

      ... when the courts will allow such a thing, then we are in very deep trouble already. And what we should realize is that with the surveillance technology now in existence, a totalitarian government has so many ways to know who you are that that's not the main issue, right?

    16. SB

      You talked about, um, the platforms being responsible for the consequences.

    17. YH

      Yes.

    18. SB

      In the UK over the last month, we've had, I don't know if you've heard-

Episode duration: 1:54:16


Transcript of episode 78YN1e8UXdM
