Lex Fridman Podcast

Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3

Lex Fridman and Steven Pinker: Steven Pinker challenges AI doomsday fears with rational optimism.

Lex Fridman (host) · Steven Pinker (guest)
Oct 17, 2018 · 37m · Watch on YouTube ↗


  1. 0:00–15:00

    1. LF

      You've studied the human mind, cognition, language, vision, evolution, psychology from child to adult, from the level of individual to the level of our entire civilization, so I feel like I can start with a simple multiple-choice question. What is the meaning of life? Is it, A, to obtain knowledge, as Plato said? B, to obtain power, as Nietzsche said? C, to escape death, as Ernest Becker said?

    2. SP

      (laughs)

    3. LF

      D, to propagate our genes, as Darwin and others have said? E, there is no meaning, as the nihilists have said? F, knowing the meaning of life is beyond our cognitive capabilities, as Steven Pinker said, based on my interpretation 20 years ago? And G, none of the above?

    4. SP

      Uh, I'd say A comes closest, but I would amend that to attaining not only knowledge but, uh, fulfillment more generally. That is, life, health, stimulation, uh, (clears throat) access to the, uh, living, cultural, and social world. Now, this is our meaning of life. It's not the meaning of life, uh, if you were to ask our genes. Uh, the, their meaning, uh, is to, uh, propagate copies of themselves, but that is distinct from the meaning that the brain that they, uh, lead to sets for itself.

    5. LF

      So to you, knowledge is a small subset or a large subset of-

    6. SP

      It's a large subset, but it's not the entirety of human striving because, uh, we also want to, um, interact with people. We wanna experience beauty. We wanna experience the, the richness of the natural world. Uh, but, uh, understanding the, what makes the universe, uh, tick is, uh, is, is way up there, for some of us more than others. Uh, certainly for me, that's, uh, the, the, that's one of the top five.

    7. LF

So is that a fundamental aspect? Are you just describing your own preference, or is it a fundamental aspect of human nature to seek knowledge? In your latest book, you talk about the, the power, the usefulness of rationality and reason and so on. Is that a fundamental nature of, of human beings? Or is it something we should just strive for?

    8. SP

Uh, it's both. It is, we're, we're, uh, capable of striving for it because it is one of the things that make us what we are, Homo sapiens-

    9. LF

      Mm-hmm.

    10. SP

      ... uh, w- wise man. Uh, we are unusual among, uh, animals in the degree to which we acquire knowledge and, and use it to survive. We, we make tools. We strike agreements, uh, via language. We, um, extract poisons. We predict the behavior of animals. We, uh, try to get at the workings of plants. And when I say "we," I don't just mean we in the modern West, but we as a species everywhere, which is how we've managed to, uh, occupy every niche on the planet, how we've managed to drive other animals to, to extinction. And the refinement of reason in pursuit of human well-being, of, uh, health, happiness, social richness, cultural richness is our, uh, our, our main challenge in the present. That is, uh, using our intellect, using our knowledge to figure out how the world works, how we work in order to make discoveries and strike agreements that make us all better off in the long run.

    11. LF

Right. And, uh, you do that almost undeniably and, um, in a data-driven way in your recent book, but I'd like to focus on the artificial intelligence aspect of things. And not just artificial intelligence, but natural intelligence too. So 20 years ago, in a book you've written, How the Mind Works, you conjectured... Again, am I right to interpret things? (laughs)

    12. SP

      Mm-hmm.

    13. LF

You can, (laughs) uh, you can correct me if I'm wrong, but you conjecture that human thought in the brain may be a result of a netwo- a massive network of highly interconnected neurons, so from this, uh, interconnectivity emerges thought. Compared to artificial neural networks, which we use for machine learning today, is there something fundamentally more complex, mysterious, even magical about the biological neural networks versus the ones we have been starting to use, uh, over the past 60 years and which have come to success in the past 10?

    14. SP

      There, there is something, uh, a little bit mysterious about, um, the human neural networks, which is that each one of us who is a neural network knows that we ourselves are conscious. Uh, conscious not in the sense of registering our surroundings or even registering our internal state, but in having subjective first-person present-tense experience. That is, when I see red, it's not just different from green, um, but it just, there's, there's a redness to it-

    15. LF

      Right.

    16. SP

... uh, that I feel. Whether an artificial system would experience that or not, I don't know and I don't think I can know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally indistinguishable from a human, would we attribute consciousness to it? Or, uh, or ought we to attribute consciousness to it? And that's something that it's, uh, very, very hard to know. But putting that aside, putting aside that, that largely philosophical question, the question is, uh, is there some difference between the hum- human neural network and the ones that we are, we're building in, in artificial intelligence that will mean that we're, on the current trajectory, not gonna reach the point where we've got a lifelike robot indistinguishable from a human, because the way their neural, so-called neural networks are organized is different from the way ours are organized. I think there's overlap. Uh, but I think there are some, some big differences: the current neural networks, current so-called deep learning systems are, are, are in reality not all that deep. That is, they are very good at extracting high-order statistical regularities, but most of the systems don't have a semantic level, a level of, uh, actual understanding of who did what to whom, uh, why, where, how things work, what causes what else.

    17. LF

      D- do you think that kinda thing can emerge as it does? So artificial neural networks are much smaller, the number of connections and so on, than the current-... human biological networks. But do you think sort of g- to go to consciousness or to go to this higher level semantic reasoning about things, do you think that can emerge with just a larger network with a more richly, weirdly interconnected network?

    18. SP

Let's separate out, again, consciousness, because consciousness isn't even a matter of complexity. You could have-

    19. LF

      It's a really weird one, yeah.

    20. SP

      Yeah, you could have- you could- you could sensibly ask the question of whether shrimp are conscious, for example. They're not terribly complex, but maybe they feel pain. So let- let's just put that one- that part of it aside.

    21. LF

      Yep.

    22. SP

      But, uh, I- I think sheer size of a neural network is not enough to give it, um, structure and knowledge. But if it's suitably engineered, then, uh, then- then why not? That is, we're neural networks. Natural selection did a- a kind of equivalent of engineering of our brains. So I don't think there's anything mysterious in the sense that no, uh, no system made out of silicon could ever do what a human brain can do. I think it's possible in principle. Whether it'll ever happen depends not only on how clever we are in engineering these systems, but whether even- we even want to, whether that's even a sensible goal. That is, you can ask the question, is there any, um, locomotion system that is, uh, as- as good as a human? Well, we kinda wanna do better than a human, ultimately, in terms of legged locomotion. Uh, eh, there's no reason that humans should be our benchmark. They're- they're tools that might be better in some ways. It may just be not as, uh... It may be that we can't duplicate a natural system because, uh, at some point, it's so much cheaper to use a natural system that we're not gonna in- invest more brain power and resources. So for example, we don't really have a s- substitute, an exact substitute for wood. We still build houses out of wood. We still build furniture out of wood. We like the look. We like the feel. It's- wood has certain properties that synthetics don't. It's not that there's anything magical or mysterious about wood.

    23. LF

      Okay.

    24. SP

      Uh, it's just that the extra s- steps of duplicating everything about wood is something w- we just haven't bothered because we have wood. Likewise, say, cotton. I mean, I'm wearing cotton clothing now. Feels much better than- than polyester. Uh, i- it's not that cotton has something magic in it. Uh, and it's not that if there was... that we couldn't ever synthesize something exactly like cotton, but at some point, it's just, uh, it's just not worth it. We've got cotton. And likewise, in the case of human intelligence, the goal of making an artificial system that is exactly like the human brain is a- a goal that we... probably no one is gonna pursue to the bitter end, I suspect.

    25. LF

      (laughs)

    26. SP

      Because if you want tools that do things better than humans, you're not gonna care whether it does something like humans. So for example, you know, diagnosing cancer or-

    27. LF

      That's right.

    28. SP

      ... predicting the weather, why set humans as your benchmark?

    29. LF

      But in- in general, I suspect you also believe that e- even if the human should not be a benchmark and we don't s- don't wanna imitate humans and their system, there's a lot to be learned about how to create an artificial intelligence system by studying the human.

    30. SP

Yeah, I- I- I think that's right. In the- in the same way that, um, to build flying machines, we wanted to understand the laws of aerodynamics, including how they apply to birds, but not mimic the birds.

  2. 15:00–30:00

    1. SP

      that we're so familiar with homo sapiens where these two traits come bundled together, particularly in men-

    2. LF

      Mm-hmm.

    3. SP

      ... that we are apt to confuse high intelligence with, uh, a, uh, a will to power, but that's just, uh, an error. Um, the other fear is there will be collateral damage, that we'll give, uh, artificial intelligence a, a goal, like make paper clips, and it will pursue that goal so brilliantly that before we can stop it, it turns us into paper clips. Uh, we'll give it the goal of curing cancer and it will turn us into guinea pigs for lethal experiments. Or give it the goal of world peace and it, its conception of world peace is no people, therefore no fighting, and so it will kill us all. Now, I think these are utterly fanciful. In fact, I think they're actually self-defeating. They, first of all, assume that we're gonna be so brilliant that we can design an artificial intelligence that can cure cancer but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the process. Uh, and it assumes that the system will be so smart that it can cure cancer but so idiotic that it doesn't, can't figure out that what we mean by curing cancer is not killing everyone. So I think that the, the collateral damage scenario, the value alignment problem is, uh, is also based on a misconception.

    4. LF

So one of the challenges, of course, is that we don't know how to build either system currently, or whether we're even close to knowing. Of course, those things can change overnight, but at this time, theorizing about it is very challenging, uh, in, in either direction. So that's probably at the core of the problem: without the ability to reason about the real engineering things here at hand, uh, your imagination runs away with things.

    5. SP

      Exactly.

    6. LF

But let me sort of ask what you think was the motivation and the thought process of Elon Musk. I, I build autonomous vehicles, I study autonomous vehicles, I studied Tesla Autopilot, and I think it is one of the greatest current, uh, large-scale applications of artificial intelligence in the world. It has a potentially very positive impact on society. So h- how does a person who's creating this very good, quote-unquote narrow AI system also seem to be so concerned about this other general AI? What do you think is the motivation there? What do you think is the think- thought process?

    7. SP

Well, I, you, you'd probably have to ask him. And, and he is, um, uh, notoriously flamboyant, uh, impulsive, as we have just seen, to the detriment of his own goals, of the, the health of the company. Uh, so I, I don't know what's going on, uh, on in his mind. You, you'd probably have to ask him. But I don't think the distinction between spec- special purpose, uh, AI and so-called general AI is relevant, in the same way that special purpose AI is not going to do anything conceivable in order to attain a goal. All engineering systems are, are designed to trade off across multiple goals. When we built cars in the first place, we didn't forget to install brakes, uh, because the goal of a car is to go fast. It occurred to people, yes, you want it to go fast, but not always. So you, you, you build in brakes, too. Likewise, if a, a car is going to be, uh, autonomous and you program it to take the shortest route to the airport, it's not gonna take the diagonal and mow down people and trees and fences because that's the shortest route. That's not what we mean by the shortest route when we program it. And that's just what an in- uh, an intelligent system, uh, is by definition. It takes into account multiple constraints. The same is true, in fact, even more true, of so-called general intelligence. That is, if it's re- genuinely intelligent, it's not going to pursue some goal single-mindedly, omitting every other, uh, uh, consideration and collateral effect. That's not artificial general intelligence. That's, uh, that's artificial stupidity.

Um, I agree with you, by the way, on the promise of autonomous vehicles for improving human welfare. I think it's spectacular. And I'm, I'm surprised at how little press coverage notes that in the United States alone, something like 40,000 people die every year on the highways, vastly more than are killed by terrorists.
And we spend- w- we spent a trillion dollars on a war to combat deaths by terrorism, about half a dozen a year. Uh, whereas ev- year in year out, 40,000 people are, are massacred on the highways, which could be brought down to very close to zero. Uh, so I'm, I'm, I'm with you on the humanitarian benefit.

    8. LF

Let me just mention that, as a person who's building these cars, it is a little bit offensive to me to say that engine- engineers would be clueless enough not to engineer safety into systems. I often stay up at night thinking about those 40,000 people that are dying. And everything I try to engineer is to save those people's lives. So every new invention that I'm super excited about, uh, in, in all the deep learning literature and CVPR conferences and NIPS, everything I'm super excited about is all grounded in making it safe and, uh, helping people. So I just don't see how that trajectory can all of a sudden slip into a situation where intelligence will be highly negative. I just wanna-

    9. SP

      You and I, you and I certainly agree on that. And I think that's only the beginning of the potential humanitarian benefits of artificial intelligence. There's been enormous attention to what are we gonna do with the people whose jobs are made obsolete by artificial intelligence, but very little attention given to the fact that the jobs that are gonna be made obsolete are horrible jobs. The fact that people aren't gonna be picking crops and making beds and driving trucks and mining coal, these are, you know, soul-deadening jobs. And we have a whole literature sympathizing with the people stuck in these menial, mind-deadening, uh, dangerous jobs. If we can eliminate them, this is a fantastic boon to humanity. Now granted, we, uh, you solve one problem, and there's another one, namely how do we get these people a, uh, a decent income? But if we're smart enough to invent machines that can make beds and, and put away dishes and, uh, and, and handle hospital patients, well, I think we're smart enough to figure out how to redistribute income to apportion some of the vast economic savings to the, the human beings who will no longer be needed to, to make beds.

    10. LF

      Okay. Sam Harris says that it's obvious that eventually AI will be an existential risk. He's one of the people who says it's obvious. We don't know when, uh, the claim goes, but eventually, it's obvious. And because we don't know when, we should worry about it now. So it's a very interesting argument in my eyes. Um, so how do you- how do we think about time scale? How do we think about existential threats when we don't really... we kn- we know so little about the threat, unlike nuclear weapons perhaps, a- ab- about this particular threat, that it could happen tomorrow, right? So but very likely it won't.

    11. SP

      Yeah.

    12. LF

      And very likely it'll be 100 years away. So how do... Do we ignore it? Do... How do we talk about it? Uh, do we worry about it? So what, how do we think about those?

    13. SP

      W- w- w- what is it? Uh...

    14. LF

      A threat that we can imagine. It's within the limits of our imagination, but not within our limits of understanding to sufficient- to accurately predict it.

    15. SP

      Uh, but, but what, what is, what is the it that we're afraid of?

    16. LF

      Oh, AI. Sorry, AI, uh, e- AI being the existential threat. AI could always-

    17. SP

      But, but how? But like enslaving us or turning us into paper clips?

    18. LF

      I think the most compelling from the Sam Harris perspective would be the paper clip situation.

    19. SP

      Mm-hmm. Yeah, I mean, I, I think, I just think it's totally fanciful. I mean, at his l- don't build a system. Don't give a, uh, d- don't... First of all, uh, i- the code of engineering is you don't implement a system with massive control before testing it.

    20. LF

      Mm-hmm.

    21. SP

Now, uh, perhaps the culture of engineering will radically change; then I would worry. But I don't see any signs that engineers will suddenly s- do idiotic things, like put a system that they haven't tested, uh, first in control of an electrical power plant. Uh, and all of these scenarios not only imagine a, um, almost magically powered intelligence-

    22. LF

      Mm-hmm.

    23. SP

      ... you know, including things like cure cancer, which is probably an incoherent goal because there's so many different kinds of cancer, uh, or bring about world peace. I mean, how do you even specify that as a goal?

    24. LF

      Mm-hmm.

    25. SP

      But the scenarios also imagine some degree of control of every molecule in the universe, uh, which not only is itself unlikely, but a- we would not start to connect these systems to, uh, infrastructure without, uh, without, um, testing as we would any kind of engineering system. Now maybe some engineers will be irresponsible and we need legal and, um, uh, regulatory a- and, uh, legal responsibility implemented so that engineers don't do things that are stupid by their own standards. But the, uh, uh, I- I've never seen enough of a plausible scenario of, uh, existential threat to devote large amounts of, uh, brain power to, to forestall it. And s-

    26. LF

So you bel- you believe in the, sort of, power en masse of engineering and of reason, as you argue in your latest book about reason and science, to be the very thing that guides the development of new technology so it's safe, and also keeps us safe as well.

    27. SP

Yeah, if the same, uh, uh... and, you know, granted, the same culture of safety that currently is part of the engineering, uh, mindset for airplanes, for example. So yeah, I don't think that, that, uh, that that should be thrown out the window, and, uh, that untested all-powerful systems should be, uh, suddenly implemented. But there's no reason to think that they will be. And in fact, if you look at the progress of artificial intelligence, it's been, you know, it's been impressive, especially in the last 10 years or so. But the idea that suddenly there'll be a step function, that all of a sudden, uh, before we know it, it will be, um, all-powerful, that there'll be some kind of recursive self-improvement, some kind of, uh, foom, uh, i- is also, uh, uh, fanciful. Certainly not by the technology that now impresses us, such as deep learning, where you train something on, uh, hundreds of thousands or millions of examples. There are not hundreds of thousands of, uh, problems of which curing cancer is, uh, a typical example. Uh, and so the kind of techniques that have allowed AI to increase in the last five years are not the kind that are gonna lead to this fantasy of, uh, of- of exponential sudden self-improvement.

    28. LF

      So-

    29. SP

      It's- I think it's- it's kind of a magical thinking. It's not based on our understanding of how AI actually works.

    30. LF

Now, give me a chance here. So you said fanciful, magical thinking. In his TED Talk, Sam Harris says that thinking about AI killing all human civilization is somehow fun intellectually. Now, I have to say, as a scientist and engineer, I don't find it fun. But when I'm having beer with my non-AI friends, there is indeed something fun and appealing about it, like talking about an episode of Black Mirror, or considering what we'd do if we were just told that a large meteor is headed towards Earth, uh, something like th- this. So can you relate to this sense of fun? And do you understand the psychology of it?

  3. 30:00–37:38

    1. SP

      to calibrate our budget of fear, worry, concern, planning to the, uh, actual probability of- of harm.

    2. LF

      Yep. So let me ask this then- this question. So speaking of imaginability, you said it's important to think about reason, and one of my favorite people who- who likes to dip into the outskirts of reason, uh, through fascinating exploration of his imagination is Joe Rogan.

    3. SP

      Oh, yes (laughs) .

    4. LF

Uh, you, uh, so- who used to believe a lot of conspiracies and through reason has stripped away a lot of his beliefs, um, in that way. So it's fascinating, actually, to watch him through rationality kind of throw away the ideas of, uh, Bigfoot and, uh, 9/11 conspiracies. I'm- I'm not sure exactly.

    5. SP

      Chemtrails.

    6. LF

      Chemtrails.

    7. SP

      I don't know what he believes in. Yes, okay.

    8. LF

      But he no longer-

    9. SP

      Believed in. No, that's right.

    10. LF

      Believed in, that's right.

    11. SP

      No, he's- he's become a real force for, uh, for good.

    12. LF

Yep. So you were on the Joe Rogan podcast in February and had a fascinating conversation, but as far as I remember, you didn't talk much about artificial intelligence. I will be on his podcast in a couple weeks. Joe is very much concerned about the existential threat of AI. I'm not sure if you're, uh... which is why I was- I was hoping that you would get into that topic. And in this way, he represents quite a lot of people who look at the topic of AI from a 10,000-foot level. So as an exercise of, uh, communication, you said it's important to be rational and reason about these things. Let me ask, if you were to coach me-

    13. SP

      (laughs)

    14. LF

      ... as an AI researcher about how to speak to Joe and the general public about AI, what would you advise?

    15. SP

      Well, I'd, uh... The- the short answer would be to read the sections that I wrote in Enlightenment Now-

    16. LF

      Now? (laughs)

    17. SP

... about AI. But a longer answer would be, I think, to emphasize, and I, and I think you're very well-positioned as an engineer to remind people about the culture of engineering, that it really is, uh, safety-oriented. In a- a- another discussion in Enlightenment Now, uh, I plot, uh, rates of accidental death from various causes: plane crashes, uh, car crashes, uh, occupational accidents, even death by lightning strikes, and they all plummet, uh, because the culture of engineering is, how do you squeeze out the, uh, the- the lethal risks? Death by fire, death by drowning, uh, death by, uh, asphyxiation, all of them drastically declined because of advances in engineering that, I gotta say, I did not f- appreciate until I saw those graphs. And it- it is because of exactly, uh, people like you who stay up at night thinking, "Oh, my God, is- is what I'm- what I'm inventing likely to hurt people?" and who deploy ingenuity to prevent that from happening. Now, I'm not an engineer, although I spent 22 years at MIT, so I- I know something about the culture of engineering. My understanding is that this is the way- this is the way you think if you're an engineer.

    18. LF

      Yeah.

    19. SP

      And, uh, it's essential that that culture not be suddenly, uh, switched off when it comes to artificial intelligence. So, I mean, that- that- that could be a problem, but is there any reason to think it would be switched off?

    20. LF

I don't think so. And for one, there are not enough engineers speaking up for this way, for this... the- the excitement, for, uh, the positive view of human nature. What you're trying to create is positivity. Like, everything we try to invent is trying to do good for the world. But let me ask you about the psychology of negativity. It seems, just objectively, not considering the topic, it seems that being negative about the future makes you sound smarter than being-

    21. SP

      (laughs) Yeah.

    22. LF

... positive about the future, regardless of topic. Am I correct in this observation? And if so, why do you think that is?

    23. SP

      Yeah, I think- I think there is that- that, uh, phenomenon, that, uh, as, uh, Tom Lehrer, the satirist said, "Always predict the worst, and you'll be hailed as a prophet."

    24. LF

      Yeah.

    25. SP

      Uh, it- it may- may be part of our overall negativity bias. We are, as a species, more attuned to the negative than the positive. We- we dread losses more than we enjoy gains. And, uh, that may- might open up a space for, uh, uh, prophets to remind us of harms and risks and losses that we may have overlooked. Uh, so I think there- there, uh- there- there is that asymmetry.

    26. LF

So you've written some of my favorite books, uh, all over the place. So starting from Enlightenment Now to, uh, The Better Angels of Our Nature, The Blank Slate, How the Mind Works, the- the one about language, uh, The Language Instinct. Uh, Bill Gates, a big fan too, uh, said of your most recent book that it's, uh, "my new favorite book of all time." (laughs) Um, so for you as an author, what was a book early on in your life that had a profound impact on the way you saw the world?

    27. SP

Uh, certainly this book, Enlightenment Now, was influenced by, um, David Deutsch's, uh, The Beginning of Infinity, you know, a rather deep reflection on, uh, knowledge and- and the power of knowledge to, uh, improve the human condition. Uh, it ends with bits of wisdom such as that problems are inevitable, but problems are solvable given the right knowledge, and that solutions create new problems that have to be solved in their turn. That's, I think, a kind of wisdom about the- the human condition that influenced the writing of this book. There are some, uh, books that are excellent but obscure, some of which I have on my, uh, o- o- on- on a page of my website. I read a book called A History of Force-

    28. LF

      Hmm.

    29. SP

      ... self-published by a political scientist named James Payne on the historical decline of violence, and that was one of the inspirations for The Better Angels of Our Nature.

    30. LF

      Hmm.

Episode duration: 37:53

Transcript of episode epQxfSp-rdU