Lex Fridman Podcast

Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103

Ben Goertzel is one of the most interesting minds in the artificial intelligence community. He is the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director of research at the Machine Intelligence Research Institute, and Chief Scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in the Conference on Artificial General Intelligence.

Support this podcast by signing up with these sponsors:
- Jordan Harbinger Show: https://jordanharbinger.com/lex/
- MasterClass: https://masterclass.com/lex

EPISODE LINKS:
Ben's Twitter: https://twitter.com/bengoertzel
Ben's Website: https://goertzel.org/
AGI Conference: http://agi-conf.org/2020/
SingularityNET: https://singularitynet.io/
SingularityNET Twitter: https://twitter.com/singularity_net
OpenCog: https://opencog.org/

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
3:20 - Books that inspired you
6:38 - Are there intelligent beings all around us?
13:13 - Dostoevsky
15:56 - Russian roots
20:19 - When did you fall in love with AI?
31:30 - Are humans good or evil?
42:04 - Colonizing Mars
46:53 - Origin of the term AGI
55:56 - AGI community
1:12:36 - How to build AGI?
1:36:47 - OpenCog
2:25:32 - SingularityNET
2:49:33 - Sophia
3:16:02 - Coronavirus
3:24:14 - Decentralized mechanisms of power
3:40:16 - Life and death
3:42:44 - Would you live forever?
3:50:26 - Meaning of life
3:58:03 - Hat
3:58:46 - Question for AGI

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host) · Ben Goertzel (guest)
Jun 22, 2020 · 4h 8m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00-3:20

    Introduction

    1. LF

      The following is a conversation with Ben Goertzel, one of the most interesting minds in the artificial intelligence community. He's the founder of SingularityNET, designer of OpenCog AI framework, formerly a director of research at the Machine Intelligence Research Institute, and chief scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in his organizing and contributing to the Conference on Artificial General Intelligence, the 2020 version of which is actually happening this week, Wednesday, Thursday, and Friday. It's virtual and free. I encourage you to check out the talks, including by Joscha Bach, uh, from episode 101 of this podcast. Quick summary of the ads. Two sponsors, The Jordan Harbinger Show and Masterclass. Please consider supporting this podcast by going to jordanharbinger.com/lex and signing up on masterclass.com/lex. Click the links, buy all the stuff. It's the best way to support this podcast and the journey I'm on in my research and startup. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter @LexFridman, spelled without the E, just F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. This episode is supported by The Jordan Harbinger Show. Go to jordanharbinger.com/lex. It's how he knows I sent you. On that page, there's links to subscribe to it on Apple Podcasts, Spotify, and everywhere else. I've been binging on his podcast. Jordan is great. He gets the best out of his guests, dives deep, calls them out when it's needed, and makes the whole thing fun to listen to. He's interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and many more. His conversation with Kobe is a reminder of how much focus and hard work is required for greatness in sport, business, and life. I highly recommend the episode if you want to be inspired. Again, go to jordanharbinger.com/lex. It's how Jordan knows I sent you. This show, sponsored by Masterclass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For 180 bucks a year, you get an all-access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of the greatest city-building game ever, SimCity and Sims, on game design, Carlos Santana on guitar, Garry Kasparov, the greatest chess player ever, on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. Once again, sign up on masterclass.com/lex to get a discount and to support this podcast. And now, here's my conversation with Ben Goertzel.

  2. 3:20-6:38

    Books that inspired you

    1. LF

      What books, authors, ideas had a lot of impact on you, um, in your life in the early days?

    2. BG

      You know, what got me into AI and science fiction and such in the first place wasn't a book, but the original Star Trek TV show, which my dad watched with me, like, in its first run. It would have been 19-1968, '69 or something, and that, that was incredible, 'cause every, every show, they visited a different, a different, uh, alien civilization with a different culture and weird mechanisms. But that, that got me into science fiction, and there wasn't that much science fiction to watch on TV at that stage, so that got me into reading the whole, the whole literature of science fiction, you know, from, from the beginning of the previous century un- un- until that time, and there, I mean, there was so many science fiction writers who were in- inspirational to me. I'd say if I had to pick two, it would have been, uh, Stanislaw Lem, the, the Polish writer-

    3. LF

      Solaris. Yeah.

    4. BG

      Yeah. Well, S- Solaris, and then he had, he had a bunch of more obscure writings on, on superhuman AIs that were engineered. Solaris was sort of a superhuman, naturally occurring in- in- intelligence. Then Philip K. Dick, who-

    5. LF

      Ah.

    6. BG

      ... you know, u- ultimately my fandom for Philip K. Dick is one of the things that brought me together with David Hanson, my collaborator on, on, on robotics projects. So, you know, Stanislaw Lem was, was very much an intellectual, right? So he, he had a very broad view of intelligence going beyond the human and into what I would call, you know, open-ended superintelligence. The, the Solaris superintelligent ocean was intelligent, in some ways more generally intelligent than people, but in a complex and confusing way so that human beings could never quite connect to it. But it wa- but it was still probably very, very smart. And then the, the Golem-4 supercomputer in one of, one of Lem's, Lem's books, this was engineered by people, but eventually it became very intelligent in a different direction than humans and decided that humans were kind of trivial and not that interesting. So it, it put some impenetrable shield around itself, shut itself off from humanity, and then issued some philosophical screed about the pathetic and hopeless nature of, of humanity and, and all human thought, and th- and then disappeared. Now, Philip K. Dick, he was a bit different. He was human-focused, right? Hi- his main thing was...... you know, human compassion and the human heart and soul are going to be the constant that will keep us going through whatever aliens, aliens we discover, or telepathy machines, or, or, or super AIs, or, or whatever it might be. So, he didn't believe in reality, like the reality that we see may be a simulation, or, or, or a dream, or something else we can't even comprehend. But he believed in love and compassion as something persistent through the various simulated realities. So those, those two science fiction writers had a, had a huge impact on me. Then, a little older than that, I got into Dostoevsky and, uh, Friedrich Nietzsche and, uh, Rimbaud, then a bunch of, uh, more, more literary-type writing.

    7. LF

      Can we talk about some

  3. 6:38-13:13

    Are there intelligent beings all around us?

    1. LF

      of those things? So, on the Solaris side, Stanislaw Lem, uh, this kind of idea of there being intelligences out there that are different than our own, do you think there are intelligences maybe all around us that we're not able to even detect? So, the, this kind of idea of, uh... Maybe you can comment also on, uh, Stephen Wolfram thinking that there's computations all around us, and we're just not smart enough to kinda detect their, their intelligence or appreciate their intelligence.

    2. BG

      Yeah. So my friend, Hugo de Garis, who I've been talking to about these things for, for, for many decades, since the early '90s, he had an idea he called SIPI, the Search for Intra-Particulate Intelligence.

    3. LF

      (laughs)

    4. BG

      So, the concept there was as AIs get smarter and smarter and smarter, you know, assuming the laws of physics as we know them now are still, are still what these super-intelligences perceive to hold and are bound by, as they get smarter and smarter, they're gonna shrink themselves littler and littler, because of special relativity, make, makes it so they can't communicate between two spatially distant points. So, they're gonna get smaller and smaller. But then ultimately, what does that mean? The minds of the super, super, super-intelligences, they're gonna be packed into the, the interaction of, of elementary particles or, or quarks, or the partons inside quarks, or whatever it is. So, what we perceive as random fluctuations on the quantum or sub-quantum level may actually be the thoughts of the micro, micro, micro-miniaturized super-intelligences. 'Cause there's no way we can tell random from structured, but with an algorithmic information more complex than our brains, right? We can't tell the difference. So, what we think is random could be the thought processes of some really tiny super minds. And if so, there's not a damn thing we can do about it, except, you know, try to upgrade our intelligences and expand our minds so that we can, we can perceive more of what's around us.
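A brief aside on that last point, that we "can't tell random from structured" once the structure exceeds our own algorithmic information: one standard way to formalize the intuition is Kolmogorov complexity. The following is an editorial gloss on the textbook definition, not anything stated in the conversation.

```latex
% Kolmogorov complexity of a string x, relative to a fixed universal machine U:
K_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}
% A string is (informally) algorithmically random when K_U(x) \approx |x|.
% K_U is uncomputable, so an observer whose own models are bounded in
% description length cannot, in general, certify whether a sequence in which
% it finds no pattern is genuinely random or the output of a far more complex
% deterministic process.
```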

    5. LF

      But if the, if those random fluctuations, like even if we go to, like, quantum mechanics, if that, if that's actually, uh, super-intelligent systems, aren't we then part of the soup of super-intelligence? So, we're... Aren't we just like, like a finger (laughs) of the entirety of the body of the super-intelligent system? Okay.

    6. BG

      We could be... I mean, a finger is a, is a strange metaphor.

    7. LF

      (laughs)

    8. BG

      I mean, we, we, we... (laughs)

    9. LF

      Well, a finger is dumb is what I mean. Is, uh, is...

    10. BG

      Well, the finger is also useful and is controlled-

    11. LF

      Right.

    12. BG

      ... with, with intent by, by the brain. Whereas we may be much less than that, right? I mean, I mean, yeah, we may be just some random epiphenomenon that, that they don't care about too much. Like, think, think about the, the shape of the crowd emanating from a sports stadium or something, right?

    13. LF

      Right.

    14. BG

      There, there's some emergent shape to the crowd. It's there. You could take a picture of it. It's kind of cool. It's irrelevant to the main point of the sports event or where the people are going, or, or, or what's on the minds of the people making that shape in the crowd, right? So, we, we may just be some semi-arbitrary, higher-level pattern popping out of, of a lower level, hyper-intelligent self-organization. And I mean, so, so be it, right?

    15. LF

      (laughs)

    16. BG

      I mean, that, that's one thing that-

    17. LF

      Still a fun ride.

    18. BG

      Yeah. I mean, the older I've gotten, the more respect I've achieved for our fundamental ignorance. I mean, my, mine and, and everybody else's. I mean, I look at my, my two dogs, two beautiful little toy poodles, and you know, they watch me sitting at the computer typing. They just think I'm sitting there wiggling my fingers to exercise them maybe, or guarding the monitor on the desk, that they have no idea that I'm communicating with other people halfway around the world, let, let alone, you know, creating complex algorithms running in, in RAM on some computer server in Saint Petersburg or something, right? That, although they're right there, they're right there in the room with me. So, what things are there right around us that we're just too stupid or close-minded to comprehend?

    19. LF

      Well, pro-

    20. BG

      Probably, probably quite a lot.

    21. LF

      Your very, your very poodle could be, uh, could also be communicating across multiple dimensions with, with other, with other beings, and you're too, you're too unintelligent to understand the kind of communication mechanism they're going through.

    22. BG

      There, there, there have been various, uh, TV shows and science fiction novels positing cats, dolphins, uh, mice and whatnot are actually super-intelligences h- here to observe that. I wou- I would, I would guess as one or the other quantum physics founders said, those theories are not crazy enough to be true.

    23. LF

      (laughs)

    24. BG

      The reality's probably crazier than that.

    25. LF

      Beautifully put. So, on the human side, uh, with, uh, Philip K. Dick and, and, uh, in general, where do you fall on this idea that, uh, love and just the basic spirit of human nature persists throughout these multiple realities? Um, are you on the side... Like, the thing that inspires you about artificial intelligence, is it the human side of somehow persisting through all of the different systems we engineer? Or is it, or does AI inspire you to create something that's greater than human, that's beyond human, that's almost non-human?

    26. BG

      I would say my motivation to create AGI comes from, from both of those directions, actually. So when I, when I first became passionate about AGI when I was, it would have been two or three years old after watching robots on, on Star Trek. I mean, then, it was really a combination of intellectual curiosity, like can a machine really think? How, how would you do that? And yeah, just ambition to create something much better than all the clearly limited and fundamentally defective humans I saw around me. Then as I got older and got more enmeshed in, in the human world, and you know, got married, had, had children, saw my parents begin to age, I started to realize, well, not only will AGI let you go far beyond the limitations of the human, but it could also, like stop us from dying and, and suffering and, and feeling pain and, and tormenting ourselves mentally. So you can see AGI has amazing capability to do good for humans, as humans a- alongside with its capability to go far, far beyond the human level. So I mean both, both aspects are, are there which makes it, uh, even more exciting and important.

  4. 13:13-15:56

    Dostoevsky

    1. LF

      So you mentioned Dostoevsky and Nietzsche. What did you pick up from those guys? I mean...

    2. BG

      Ah.

    3. LF

      (laughs)

    4. BG

      That would probably go beyond the...

    5. LF

      (laughs)

    6. BG

      ...beyond the scope of a brief interview, certainly.

    7. LF

      (laughs) Sure.

    8. BG

      But both of those are amazing thinkers who one will necessarily have a complex relationship with, right? So I mean, Dostoevsky on the minus side, he's kind of a religious fanatic and he sort of helped squash the Russian nihilist movement, which was very interesting because what nihilism meant originally in that period of the mid-late 1800s in Russia was not taking anything fully 100% for granted. It was really more like what we'd call Bayesianism now where you don't want to adopt anything as a dogmatic certitude and always leave your mind open. And how Dostoevsky parodied nihilism was wha- wa- wa- was a bit different, right? He parodied it as people who believe absolutely nothing so they must assign an equal probability weight to every proposition which, which, which doesn't really work. So on the one hand I didn't, I didn't really agree with Dostoevsky on his sort of religious point of view. On, on, on the other hand, if you look at his understanding of human nature and sort of the human mind and, and, and heart and soul it's, it's really unparalleled and he had an amazing view of how human beings, you know, construct a world for themselves based on their own understanding and their own mental predisposition. And I think if you look in The Brothers Karamazov in particular, the Russian literary theorist Mikhail Bakhtin wrote about this as a polyphonic mode of fiction which means it's not third-person but it's not first-person from any one person really. There are many different characters in the novel and each of them is sort of telling part of the story from their own point of view so the reality of the whole story is an intersection like synergetically of the many different characters' world views and that really it's a beautiful metaphor and even a reflection I think of how all of us socially create our reality. Like e- e- each of us sees the world in a certain way, each of us in a sense is making the world as we see it based on our own minds and understanding but it's polyphony like, like in music where multiple instruments are coming toge- coming together to create the sound. The ultimate reality that's created comes out of each of our subjective understandings you know intersecting with each other and that, that was one of the many beautiful things in Dostoevsky.

    9. LF

      So

  5. 15:56-20:19

    Russian roots

    1. LF

      maybe a little bit to mention you have a connection to Russia and the Soviet culture. I mean I'm not sure exactly what the nature of the connection is but there at least the spirit of your thinking is in there.

    2. BG

      Ah yeah well my, my, my ancestry is three-quarters Eastern European Jewish.

    3. LF

      Oh shit. Oh.

    4. BG

      So I mean my... Three of my great-grandparents emigrated to New York from Lithuania and sort of border regions of Poland which were in and out of Poland in around the ti- around the time of World War I and they were, they were socialists and communists as well as Jews, mostly Menshevik not Bolshevik and they sort of they fled at just the right time to the US for their own personal reasons and then almost all or maybe all of my extended family that remained in Eastern Europe was killed either by Hitler's or Stalin's minions at some point so the branch of the family that emigrated to the US was pretty much the only one right

    5. LF

      So how much...

    6. BG

      ...that survived.

    7. LF

      ...of the spirit of the people is in your blood still? Like do you when you look in the mirror, do you see uh... What do you see? (laughs)

    8. BG

      Meat. I see a bag of meat that I want to transcend by uploading into some sort of superior reality but...

    9. LF

      (laughs)

    10. BG

      Very...

    11. LF

      Yeah.

    12. BG

      I mean yeah very clearly...

    13. LF

      Well put.

    14. BG

      I mean I'm not religious in a traditional sense but clearly the Eastern European Jewish tradition was what I was raised in. I mean there was... My grandfather Leo Zwell was a physical chemist who worked with Linus Pauling and a bunch of the other early greats in quantum mechanics. I mean he was into X-ray diffraction. He was on the material science side, an experimentalist rather than a theorist. His sister was also a physicist and my father's father Victor Goertzel was a PhD in psychology who had the unenviable job of giving psychotherapy to the Japanese in internment camps in the US in World War II like to counsel them why they shouldn't kill themselves e- e- even though they'd had all their stuff taken away and been imprisoned for no good reason. So I mean there yeah there was a lot of uh-... Eastern European Jewishness in my, in my background. One of my great uncles was, I guess, conductor of San Francisco Orchestra. So there, there's a lot of, uh, Mickey Salk and bunch of m- music, music in there also. And clearly, this culture was all about learning and, and understanding, understanding the world and also not quite taking yourself too seriously while you do it, right?

    15. LF

      Yeah.

    16. BG

      There's a lot of Yiddish, Yiddish humor in there. So I, I do appreciate that, that culture. Although, the whole idea that, like, the Jews are the chosen people of God never resonated with me too much.

    17. LF

      The graph of the Goertzel family, I mean, just the people I've encountered just doing some research and just knowing your work through, through the decades, uh, is kind of fascinating. Um, just the, the number of PhDs. (laughs)

    18. BG

      Yeah, yeah, I mean, my, yeah, my, my-

    19. LF

      It's kind of fascinating.

    20. BG

      My dad is a sociology professor-

    21. LF

      Yeah.

    22. BG

      ... who recently retired from, from Rutgers University. But that, clearly, that gave me a head start in life. I mean, my, my grandfather gave me all his quantum mechanics books when I was like seven or eight years old, and I, I remember going through them and it was all the old quantum mechanics, like R- R- Rutherford atoms and stuff. So I got to the part of wave functions which I didn't understand, although I was a very bright kid. And I realized he, he didn't quite understand it either. But at least... Like, he pointed me to some professor he knew at, at UPenn nearby who, who understood these things, right? So that's, that, that's an unusual opportunity for a kid to have, right? And my, my dad, he was programming Fortran when I was 10 or 11 years old on like HP3000, uh, mainframes at Rutgers University.

    23. LF

      Mm-hmm.

    24. BG

      So I got to do linear regression in Fortran on, on punch cards, uh, when, when I was in, in, in middle school, right? 'Cause he, he was doing, I guess, analysis of demographic and, and, and sociology data. So yes, certainly, certainly that gave me a head start and a push towards science be- beyond what would have been the case with m- many, many different
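As a side note on the linear regression mentioned just above: for a one-variable fit y ≈ a + b·x, ordinary least squares reduces to a few running sums, which is roughly the computation those Fortran punch-card jobs would have performed. Here is a minimal sketch in Python rather than Fortran, with made-up numbers:

```python
# Ordinary least squares for y ≈ a + b*x, via the closed-form normal equations.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical predictor values
ys = [2.1, 4.3, 5.9, 8.2, 9.8]   # hypothetical observed values

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope b = covariance(x, y) / variance(x); intercept a = mean_y - b * mean_x.
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

print(f"y ≈ {a:.3f} + {b:.3f} * x")
```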

  6. 20:19-31:30

    When did you fall in love with AI?

    1. BG

      situations.

    2. LF

      When did you first fall in love with AI? Is it the, is it the programming side of Fortran? Is it the, maybe the sociology, psychology that you picked up from your dad or is it the quantum mechanics?

    3. BG

      I fell in love with AI when I was probably three years old when I saw a robot on Star Trek.

    4. LF

      (laughs)

    5. BG

      It was turning around in a circle going, "Error, error, error, error," because Spock and Kirk had tricked it into a mechanical breakdown by presenting it with a logical paradox. And I was just like, "Well, this makes no sense. This AI is very, very smart. It's been traveling all around the universe. But these people could trick it with a simple logical paradox." Like why... If, you know, if the human brain can get beyond that paradox, wh- wh- wh- why, why can't, why can't this AI? So I, I, I felt the, the screenwriters of Star Trek had misunderstood the nature of intelligence. And I complained to my dad about it and he... (laughs)

    6. LF

      (laughs)

    7. BG

      He, he, he wasn't, he wasn't gonna say anything one way or the other.

    8. LF

      Yeah.

    9. BG

      But, you know, I... And before I was born, when my dad was at Antioch College in the, in, uh, the middle of the US, he, he led a, he led a protest movement called SLAM, Student League Against Mortality. They were protesting against death wh- wh- wh- wha- wa- wandering-

    10. LF

      Brilliant.

    11. BG

      ... across the campus. So he, he, he was into some futuristic things even back then. But whether AI could confront logical paradoxes or not, he didn't, he didn't know. But that... You know, when I... 10 years after that is something I discovered Douglas Hofstadter's book, Gödel, Escher, Bach, and that was sort of to the same point of AI and paradox and logic, right? 'Cause he was over with, over and over with Gödel's incompleteness theorem, and can an AI really fully model itself reflexively or does that lead you into some paradox? Can the human mind truly model itself reflexively or does that lead you into some paradox? So when... I think that book, Gödel, Escher, Bach, which I think I read when it first came out, I would have been 12 years old or something. I remember it was like 16-hour day. I read it cover to cover and then re-read it.

    12. LF

      Oh, really? Wow.

    13. BG

      Re-read it af- I re-read it after that 'cause there was a lot of weird things with little formal systems in there that were hard for me at the time. But that was the first book I read that gave me a feeling for AI as like a practical academic or engineering discipline that, that people were working in. 'Cause before I read Gödel, Escher, Bach, I was into AI from the point of view of a, of a science fiction fan. And I, I had the idea, "Well, it may be a long time before we can achieve immortality and superhuman AGI. So, I should figure out how to build a spacecraft traveling close to the speed of light, go far away, then come back to the Earth in a million years when technology is more advanced and we can build these things." Reading Gödel, Escher, Bach, while it didn't all ring true to me, a lot of it did, and, but I could see, like, there are smart people right now at various universities around me who are actually trying to work on building what I would now call AGI, although Ho- Hofstadter didn't, didn't call it that. So really, it was when I read that book, which would have been probably middle school, that then I started to think, "Well, this, this is something that I could, I could practically work on." Yeah.

    14. LF

      As opposed to flying away and waiting it out, you can actually be the, one of the people that actually, uh, builds the systems.

    15. BG

      Yeah, exactly. And if you think about, um, I was interested in what we'd now call nanotechnology and in, uh, human immortality and time travel, all the same cool things as every other, like, science fiction-loving kid. But AI seemed like, if Hofstadter was right, you just figure out the right program, sit there and type it. Like, you, you don't, you don't need to, you don't need to spin stars into weird configurations or get government approval to cut people up and fiddle with their DNA or something, right?

    16. LF

      Yeah.

    17. BG

      It's just programming. And then of course, that can achieve anything else. The... There's another book from back then which was by Oda... Feinberg, Gerald Feinberg, who was a, who was a physicist at, at, at Princeton. And that was The Prometheus Project. Ah. And this book was written in the late 1960s, though I encountered it in the mid-'70s. But what this book said is in the next few decades, humanity is going to create superhuman thinking machines, molecular nanotechnology, and human immortality. And then the challenge we'll have is what to do with it. Do we use it to expand human consciousness in a positive direction or, or do we use it just to further vapid, uh, consu- consumerism? And what he proposed was that the UN should do a survey on this, and the UN should send people out to every little village in, in remotest Africa or South America and explain to everyone what technology was going to bring the next few decades and the choice that we had about how to use it. (laughs) And let everyone on the whole planet vote about whether we should develop, you know, super AI nanotechnology and, and, and immortality for expanded consciousness or for rampant, rampant consumerism. And needless to say, that didn't quite happen.

    18. LF

      Right. (laughs)

    19. BG

      And I think this guy died in the mid-'80s, so he didn't even see his ideas start to become, become more mainstream. But it's interesting, many of the themes I'm engaged with now, from AGI and immortality even to trying to democratize technology as I've been pushing for with SingularityNET

    20. LF

      (laughs)

    21. BG

      ... in my work in the blockchain world, many of these themes were there in, you know, Feinberg's book in the, in the late '60s even. And of course, Valentin Turchin, uh, a Russian writer who, who I... and a great Russian physicist, who I got to know when we both lived in New York in the late '90s and early aughts. I mean, he, he had a book in the late '60s in, in Russia which was The Phenomenon of Science which laid out, laid out all these, all these same things a- a- as well. And Val died in, I don't remember, 2004 or '05 or something of, of Parkinson's disease. So yeah, it's easy, easy for people to lose track now of the fact that the, the futurist and singularitarian advanced technology ideas that are now almost mainstream and are on TV all the time, I mean, these, these are not that new, right? They're sort of new in the history of, of the human species. But I mean, these were all around in fairly mature form in, in the middle of the last century, were written about quite articulately by fairly mainstream people who were professors at, at top universities. It's just until the enabling technologies got to a c- a certain point, then you, you couldn't make it real. So and, and even in the '70s, I was sort of seeing that and, and living, living through it, right? From Star Trek to Douglas Hofstadter, things were getting very, very practical from the late '60s to the late '70s. And, you know, the first computer I bought, you could only program with hexadecimal machine code, and you had to solder it together.

    22. LF

      (laughs) Yeah.

    23. BG

      And then, then like a few years later, there's punch cards. Then a few years later, you could get like Atari 400 and Commodore VIC-20 and you could, you could type on the keyboard and program in higher level languages along- alongside the assembly language. So these ideas have been building up a while, and I guess my generation got to feel them-

    24. LF

      Mm-hmm.

    25. BG

      ... build up, which is different than people coming into the field-

    26. LF

      Now, yeah.

    27. BG

      ... now for whom these things have just been part of the ambiance of, of culture for their whole career or even their, or even their, even their whole life.

    28. LF

      Well, it's fascinating to think about, you know, there being all of these ideas kind of swimming, you know, almost with the noise all around the world, all the different generations and, and some kind of non-linear thing happens where they percolate up and, and, uh, capture the imagination of the mainstream.

    29. BG

      Yeah.

    30. LF

      And that seems to be what's happening with AI now.

  7. 31:30-42:04

    Are humans good or evil?

    1. LF

      uh... what do you think about the will to power? Do you think human... what do you think drives humans? Is it, is it, um...

    2. BG

      Oh, an unholy mix of things. I, I, I don't think there's one pure, simple, and elegant objective function dri- driving humans by, by, by any means.

    3. LF

      Well, do you think, um, if we look at... I know it's hard to look at humans in aggregate, but do you think overall humans are good or, uh, do we have both good and evil within us that, uh, depending on the circumstances, depending on the whatever-

    4. BG

      Oh.

    5. LF

      ... can, can, can, uh-

    6. BG

      So I think-

    7. LF

      ... percolate to the top?

    8. BG

      Good and evil are very ambiguous, complicated, and in some ways silly concepts. But if we, we could dig into your question from a couple directions. So I think if you look in evolution, humanity is shaped both by individual selection and what biologists would call group selection, like tribe-level selection, right? So, individual selection has driven us in a selfish DNA sort of way. S- so that each of us does to a certain approximation what will help us propagate our, our DNA to future generations. I mean, that, that, that, that's why I've got, uh, four kids so far, and, uh, and, uh, probably that's not the last one. (laughs)

    9. LF

      Yeah. (laughs)

    10. BG

      On the other hand-

    11. LF

      I like the ambition. (laughs)

    12. BG

      ... tribal, like group selection means humans, in a way, will do what, what will advocate for the persistence of the DNA of their whole, their whole tribe or, or, or their, their social group. And in biology, you, you have both of these, right? Like a... and you can see, say, an ant colony or beehive, there's a lot of group selection in, in, in the evolution of those social animals. On the other hand, say a, a big cat or some very solitary animal, it's a lot more biased toward, toward individual selection. Humans are, are an interesting balance, and I think this reflects itself in what we would view as selfishness versus altruism to, to, to some extent. So, we just have both of those objective functions contributing to the, the makeup of, of our brains. And then as Nietzsche analyzed in his own way, and others have analyzed in different ways, I mean, we abstract this as, well, we have both good, good and, and, and evil with- within us, right? 'Cause a lot of what we view as evil is really just selfishness, and a lot of what we view as good is altruism, which means doing, doing what's good for the, for, for the tribe. And on that level, we have both of those just baked, baked into us, and that's how it is. Of course, there are psychopaths and sociopaths and people who, you know, get gratified by the suffering of others, and that's, that, that, that's, that's a different thing.

    13. LF

      Yeah, those are exceptions-

    14. BG

      But I, I-

    15. LF

      ... but on the whole...

    16. BG

      ... yeah. Well, I, I think at, at core we're not purely selfish, we're not purely altruistic, we, we are a mix, and that's, that's the nature of it. And we also have a complex constellation of values that are just very specific to our, our evo- evolutionary history. Like we, you know, we, we love waterways and, and, and mountains, and the, the ideal place to put a house is in the mountain overlooking the water, right? And-

    17. LF

      (laughs)

    18. BG

      ... you know, we, we, we care a lot about our, our kids, and we care a little less about our cousins, and even less about our fifth cousins.

    19. LF

      Yeah.

    20. BG

      I mean, there are many particularities to human values-

    21. LF

      Yeah.

    22. BG

      ... which whether they're good or evil depends on your, (laughs) on, on, on, on your perspective really.

    23. LF

      It-

    24. BG

      See, I, I, I spent a lot of time in Ethiopia in Addis Ababa where we have one of our AI development offices for my SingularityNET project. And when I walk through the streets in Addis, you know, there's so... there's people lying by the side of the road, like just living there by the side of the road, dying probably of curable diseases without enough food or medicine. And when I walk by them, you know, I feel terrible, I give them money. When I come back home to the developed world, they're not on my mind that much. I, I do donate some, but, I mean, I, I also spend some of the limited money I have enjoying myself in frivolous ways rather than donating it to those people who are right now, like, starving, dying, and suffering on, on the roadside. So, does that make me evil? I mean, it makes me somewhat selfish and somewhat altruistic, and we each, we each balance that in, in, in our own way, right? So that's... that, that... whether that will be true of all possible AGIs is a, is a, is a subtler question.

    25. LF

      So you, you have-

    26. BG

      But that's how humans are.

    27. LF

      So, you have a sense, you kinda mentioned that there's a selfish... um, I'm not gonna bring up the whole Ayn Rand idea of, uh, selfishness being the core virtue, that's then a whole interesting kinda-... tangent that I think we'll just-

    28. BG

      (laughs)

    29. LF

      ... distract ourselves on it.

    30. BG

      I, I, I, I have to make one amusing comment.

  8. 42:04-46:53

    Colonizing Mars

    1. BG

      Yeah, I mean, colonizing Mars, first of all, it's a- it's a super cool thing to do. We- we should be doing it.

    2. LF

      So you're- you love the idea?

    3. BG

      Yeah, I mean, it's more important, it's more important than making chocolatey chocolates and-

    4. LF

      (laughs)

    5. BG

      ... and sexier lingerie and- and many of the things that we spend a lot more resources on as a species, right?

    6. LF

      Yeah.

    7. BG

      So, I mean, we should- certainly should do it. I think the possible futures in which a Mars colony makes a critical difference for humanity are- are- are- are very few. I mean, I- I- I think...... I mean, assuming we make a Mars colony and people go live there in a couple of decades, I mean, their supplies are going to come from Earth. The money to make the colony came from Earth. And whatever powers are supplying the, the, the goods there from, from Earth are going to, in effect, be in, in control of that, of that Mars colony. Of course, there are outlier situations where, you know, Earth gets nuked into oblivion and somehow Mars has been made self-sustaining by that point, and, and then Mars is what allows humanity to persist. But I think the... Those are very, very, very un- un- un- unlikely possibilities.

    8. LF

      You don't think it could be a first step on a long journey? Uh, let's-

    9. BG

      Of course, it's a first step on a long journey, which, which is, which is awesome. I'm guessing the colonization of the rest of the physical universe will probably be done by AGIs that are better designed to live in space than by, by the meat machines that, that, that we are. But I mean, who knows? We may cryopreserve ourselves in some superior way to what we know now and, like, shoot ourselves out to Alpha Centauri and beyond. I mean, that's all cool, it's very interesting, and it's much more valuable than most things that humanity is spending its resources on. On the other hand, with AGI, we can get to a singularity before the Mars colony becomes sustaining, for sure, possibly before it's even operational. And so-

    10. LF

      Interesting. So your intuition is that's- that's the problem if we really invest resources and we can get to faster than a legitimate, full, like, self-sustaining colonization of Mars?

    11. BG

      Yeah, and it's almo- it's very clear that we will, to me, because there's so much economic value in getting from narrow AI toward, toward AGI, whereas the Mars colony, there's less economic value until you get quite far- far- far out in- into the, into the future. So, I think that's very interesting. I just think it's- it's somewhat, somewhat off to the side. I mean, ju- just as a, I think, say, you know, art and music are- are very, very interesting and I want to see resources go into amazing art and music being, being created. And I'd rather see that than a lot of the garbage that society (laughs) spends their money on. On- on- on the other hand, I don't think Mars colonization or inventing amazing new genres of music is- is one of the things that is most likely to make a critical difference in the evolution of human or non-human life in- in- in- in this part of the universe o- o- over the next decade.

    12. LF

      Do you think AGI is really...

    13. BG

      AGI is- is by far the most important thing that's on the horizon. And then technologies that have direct ability to enable AGI or to accelerate AGI are also very important. For example, say, quan- quantum computing. I don't think that's critical to achieve AGI, but certainly you could see how the right quantum computing architecture could massively accelerate AGI. Similar other- other types of- of nanotechnology, right? Now, the quest to cure aging and end disease, while not in the big picture as important as- as- as AGI, of course it's important to- to all of us as- as- as individual humans. And if someone made a super longevity pill and distributed it tomorrow, I mean, that would be huge and a much larger impact than a Mars colony i- is- is gonna have for quite some time.

    14. LF

      But perhaps not as much as an AGI system? I mean, these are...

    15. BG

      No, because if you get... If you can make a benevolent AGI, then all the other problems are solved. I mean, the... If... Then the AGI can be... Once it's as generally intelligent as humans, it can rapidly become massively more generally intelligent than humans. And- and then that- that AGI should be able to solve science and engineering problems much- much better than- than- than human beings, as long as it is, in fact, motivated to do so. That's why I said a- a benevolent AGI. There could

  9. 46:53-55:56

    Origin of the term AGI

    1. BG

      be other kinds.

    2. LF

      Maybe it's good to step back a little bit. I mean, we've been using the term AGI. People often cite you as the creator, or at least the popularizer of the term AGI, artificial general intelligence. Can you tell the origin story of the term?

    3. BG

      Sure, sure.

    4. LF

      Or maybe...

    5. BG

      So yeah, I would say I- I launched the term AGI upon the world for- for- for what- what it's worth, without ever fully being in- in- in love with the term.

    6. LF

      Right.

    7. BG

      What happened is, I was editing a book, and this process started around 2001 or '02. I think the book came out 2005, finally. I was editing a book which I provisionally was titling Real AI.

    8. LF

      Mm-hmm.

    9. BG

      And I mean, the goal was to gather together fairly serious academic-ish papers on the topic of making thinking machines that could really think in the sense like people can or- or- or even more broadly than people can, right? So then I was reaching out to other folks that I had encountered here or there who were in- interested in- in- in that, which included some- some other folks out of the... Who I knew from the transhumanist and singularitarian world, like Peter Voss, who has a company, AGI Incorporated still in- in California, and included, uh, Shane Legg, who had worked for me at my company WebMind in New York in the late '90s, who by now has become rich and famous.

    10. LF

      (laughs)

    11. BG

      He was one of the co-founders of- of Google DeepMind.

    12. LF

      Yeah.

    13. BG

      But at that... At that time, Shane was, uh... I think he may have been... Have just started doing his PhD with, uh, Marcus Hutter, who at that time hadn't yet published his book Universal AI, which sort of gives a mathematical foundation for artificial general intelligence. So I reached out to Shane and Marcus and Peter Voss and Pei Wang, who was another former employee of mine who had been Douglas Hofstadter's PhD student, who had his own approach to AGI, and a bunch of... Some Russian folks.... reached out to these guys and they contributed papers for the book. But that was my provisional title, but I never loved it because in the end, you know, I was doing some, what we would now call narrow AI, uh, as well, like applying machine learning to genomics data or chat data for sentiment analysis. And I mean, that work is real. And then in a sense, in a sense, it's, it's really AI. It's just a different kind of, kind of AI. Ray Kurzweil wrote about narrow AI versus strong AI. But that seemed weird to me because first of all, narrow and strong are not antonyms. (laughs)

    14. LF

      (laughs) That's right.

    15. BG

      I mean, but secondly, strong AI was used in the cognitive science literature to mean the hypothesis that digital computer AIs could have true consciousness like-

    16. LF

      Right.

    17. BG

      ... like human beings. So there was already a meaning to strong AI, which was complexly different, but related, right? So we were tossing around on an, an email list whether, what title it, title it should be, and so we, we talked about narrow AI, broad AI, wide AI, narrow AI, general AI, and I think it, it was either Shane Legg or Peter Voss on the private email discussion we had, he said, "Well, why don't we go with AGI, artificial general intelligence?" And Pei Wang wanted to do GAI, general artificial intelligence 'cause in Chinese it goes in that order.

    18. LF

      (laughs) Right.

    19. BG

      But we figured gay wouldn't work in, in-

    20. LF

      (laughs)

    21. BG

      ... in US culture at that time, right?

    22. LF

      Yeah. Yeah. Yeah. (laughs)

    23. BG

      So, so we, we went with the-

    24. LF

      AGI.

    25. BG

      ... AGI. We used it for the, for the title of that book. And part of Peter and Shane's reasoning was you have the G factor in psychology, which is IQ, general intelligence-

    26. LF

      That's right.

    27. BG

      ... right? So you have a meaning of GI, general intelligence-

    28. LF

      It's interesting.

    29. BG

      ... in psychology, so then you're looking like artificial GI. So then, then we-

    30. LF

      Oh, that makes a lot of sense, I think.

  10. 55:56-1:12:36

    AGI community

    1. LF

      maybe also just a comment on AGI representing before even the term existed, representing a kind of community. Now, you've talked about this in the past, sort of AI has come in waves. But there's alwa- always been this community of people who dream about creating, uh, general human level super intelligence systems. Uh, can you maybe give your sense of the history of this community as it exists today, as it existed before this deep learning revolution, all, all throughout the winters and the summers of AI?

    2. BG

      Sure. Uh, first I would say, as a side point, the winters and summers of AI are greatly exaggerated-

    3. LF

      Yeah.

    4. BG

      ... by, by Americans.

    5. LF

      Yeah.

    6. BG

      And, in that if you look at the publication record of the artificial intelligence community since, say, the 1950s, you would find a pretty steady growth and advance of ideas and, and, and papers. And what's thought of as an AI winter or summer was sort of how much money is the US military pumping into AI.

    7. LF

      (laughs)

    8. BG

      Which was, was meaningful. On the other hand, there was AI going on in Germany, UK, and, and Japan and, and Russia all, all, all over the place while US military got more and less, less en- en- enthused about AI. So, what, uh, I mean-

    9. LF

      That happened to be, just for people who don't know-

    10. BG

      Yeah.

    11. LF

      ... the US military happened to be the main source of funding for AI research. So, another way to phrase that is it's up and down of, uh, funding for artificial intelligence research.

    12. BG

      It's true.

    13. LF

      Yeah.

    14. BG

      And I would say the correlation between funding and intellectual events was not 100%, right? Because, I mean, in, in Russia as an example, or in Germany, there was less dollar funding than in the US, but many foundational ideas were, were laid out, but it was more theory tha- than implementation, right? And US really excelled at sort of breaking through from theoretical papers to working implementations, which, which did go up and down somewhat with US military funding. But still, I mean, you can look, in the 1980s, Ernst Dickmanns in Germany had self-driving cars on the Autobahn, right? And, uh, I mean, this, it was a little early with regard to the car industry, so it didn't catch on such as has, has happened now. But, I mean, that whole advancement of self-driving car technology in Germany was pretty much independent of AI military summers and, and, and winters in the US. So there, there's been more going on in AI globally than not only most people on the planet realize, but than most new AI PhDs realize, because they've come up within a certain sub, sub-field of, of AI and haven't had to look so much, so much beyond that. But I, I would say when I got, when I got my PhD in 1989 in, in mathematics, I was interested in AI already.

    15. LF

      In Philadelphia, by the way.

    16. BG

      Yeah. At Tem- I started at NYU, then I transferred to, to Philadelphia, to Temple University.

    17. LF

      Yeah.

    18. BG

      Good old North Philly, yeah.

    19. LF

      North Philly, yeah.

    20. BG

      Yeah, yeah, yeah. The pearl of, pearl of the US, right?

    21. LF

      (laughs)

    22. BG

      Yeah. You never stopped at a red light then 'cause you were afraid-

    23. LF

      (laughs)

    24. BG

      ... if you stopped at a red light, someone will carjack you.

    25. LF

      (laughs)

    26. BG

      So you drive through every red light, yeah.

    27. LF

      Yeah. (laughs)

    28. BG

      It's, it's a, it's a... Ev- every, everyday driving or bicycling to Temple from my house was a, is like a new, new adventure, right? But yeah, when I... The reason I didn't do a PhD in AI was what people were doing in the academic AI field then was just astoundingly-

    29. LF

      Uh-huh.

    30. BG

      ... boring and seemed wrongheaded to me. It was really, like, rule-based expert systems and production systems. And I, actually, I loved mathematical logic. I had nothing against logic as the cognitive engine for an AI. But the idea that you could type in the knowledge that AI would need to think seemed just completely stupid and, and, and, and wrongheaded to me. I mean, you can use logic if you want, but somehow the system has got to be-
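For readers unfamiliar with what is being criticized here: a production system applies hand-typed if-then rules to a database of hand-typed facts by forward chaining. The toy sketch below is an editorial illustration of that style, not anything from the episode; the point is that the system only ever "knows" what someone explicitly encoded.

```python
# Toy forward-chaining production system: hand-coded facts and rules only.
facts = {"bird(tweety)"}

# Each rule is (set of premises, conclusion); every bit of knowledge is typed in.
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule when all its premises are already known facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # prints only the facts that were encoded or directly derivable
```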

  11. 1:12:36-1:36:47

    How to build AGI?

    1. LF

      what's your sense of what kind of things will an AGI system need to have?

    2. BG

      Yeah. That, that's a very interesting topic that I've thought about for, for a long time. And, uh, I, I, I think there are many, many different approaches that can work for getting to, to human level AI. So, I, I, I don't, I don't think there's, like, one golden algorithm or one, one golden design that, that, that can, that can work. And I mean, flying machines ...... is the, the much-worn analogy here, right? Like, I mean, you have airplanes, you have helicopters, you, you, you have balloons, you have stealth bombers that don't look like regular airplanes. You, you, you've got all s- blimps.

    3. LF

      Birds too. (laughs)

    4. BG

      Birds, yeah, and, and bugs, right?

    5. LF

      (laughs) Yeah.

    6. BG

      And, and, uh, you, I mean, and there are certainly many kinds of flying machines that-

    7. LF

      And there's a catapult that you can just launch. (laughs)

    8. BG

      And there's bicycle-powered, like, uh, flying machines, right?

    9. LF

      Nice, yeah.

    10. BG

      Yeah, so now these are all analogizable by basic theory of, of aerodynamics, right? Now, so one issue with AGI is we don't yet have the analog of the theory of aerodynamics and that, that's what Marcus Hutter was trying to make with AIXI and his general theory of general intelligence. But that theory in its most clearly articulated parts really only works for either infinitely powerful machines or almost or insanely-

    11. LF

      (laughs) Yeah.

    12. BG

      ... impractically powerful machines. So, I mean, if, if you were going to take a theory-based approach to AGI, what you would do is say, "Well, let's, let's take what's called, say, AIXItl, which is a, which is Hutter's AIXI machine that can work on merely insanely much processing power, rather than infinitely much processing power."

    13. LF

      What does TL stand for?

    14. BG

      Uh, ti- time and length.

    15. LF

      Okay.

    16. BG

      So you're... Basically, how, how, uh-

    17. LF

      Like constrained somehow.

    18. BG

      Yeah, yeah, yeah. So how AIXI works basically is each-

    19. LF

      (laughs)

    20. BG

      ... each, each action that it wants to take, before taking that action, it looks at all its history.

    21. LF

      Yeah.

    22. BG

      And then it looks at all possible programs that it could use to make a decision.

    23. LF

      Yeah.

    24. BG

      And it decides like which decision program would have let it make the best decisions according to its reward function over its history, and it uses that decision program to take, to make the next decision, right?

    25. LF

      Yeah. It's not afraid of infinite resources and (laughs) -

    26. BG

      It's searching through the space of all possible computer programs-

    27. LF

      Yeah.

    28. BG

      ... in between each action and its next action.

    29. LF

      Yeah.

    30. BG

      Now, AIXItl searches through all possible computer programs that have runtime less than T and length less than L.
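The loop just described can be caricatured in a few lines. The following is only an editorial sketch of the brute-force idea (enumerate decision programs within a bound, score each by the reward it would have earned over the recorded history, act with the best one); Hutter's actual AIXI and AIXItl definitions involve a Solomonoff-style mixture over environments and formal length and runtime bounds, which are replaced here by a tiny, replayable toy reward.

```python
import itertools

# Toy caricature of the AIXItl idea: "programs" here are just all policies that
# map the last observation (0 or 1) to an action, and scoring replays them
# against a known toy reward, standing in for the length/runtime-bounded search
# and the environment mixture in Hutter's formal definition.

def toy_reward(observation, action):
    # Hypothetical environment: reward 1 when the action copies the observation.
    return 1 if action == observation else 0

# Recorded history of (observation, action_taken, reward_received).
history = [(0, 1, 0), (1, 1, 1), (0, 0, 1), (1, 0, 0)]

# Enumerate every candidate "decision program" within the (tiny) length bound:
# all mappings {0, 1} -> {0, 1}.
candidate_policies = [dict(zip((0, 1), outs))
                      for outs in itertools.product((0, 1), repeat=2)]

def replay_score(policy):
    """Reward this policy would have earned had it chosen the past actions."""
    return sum(toy_reward(obs, policy[obs]) for obs, _, _ in history)

# Before acting, pick the program that would have done best, then use it once.
best_policy = max(candidate_policies, key=replay_score)
next_observation = 1
print("next action:", best_policy[next_observation])
```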

Episode duration: 4:08:57
