Modern Wisdom

Shocking Ways AI Could End The World - Geoffrey Miller

Geoffrey Miller is a professor of evolutionary psychology at the University of New Mexico, a researcher and an author.

Artificial Intelligence possesses the capability to process information thousands of times faster than humans. It's opened up massive possibilities. But it's also opened up huge debate about the safety of creating a machine which is more intelligent and powerful than we are. Just how legitimate are the concerns about the future of AI?

Expect to learn the key risks that AI poses to humanity, the 3 biggest existential risks that will determine the future of civilisation, whether Large Language Models can actually become conscious simply by being more powerful, whether making an Artificial General Intelligence will be like creating a god or a demon, the influence that AI will have on the future of our lives and much more...

Sponsors:
Get 10% discount on all Gymshark's products at https://bit.ly/sharkwisdom (use code: MW10)
Get over 37% discount on all products site-wide from MyProtein at https://bit.ly/proteinwisdom (use code: MODERNWISDOM)
Get 15% discount on Craftd London's jewellery at https://craftd.com/modernwisdom (use code: MW15)

Extra Stuff:
Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

#ai #existentialrisk #chatgpt

Timestamps:
00:00 Intro
02:43 Is AI Viewed as an Existential Risk?
06:14 How the Perceived Risk of AI Has Evolved
15:47 Rapid Advancements in Neural Networks
20:07 The Next Level for ChatGPT
28:25 Helping the Public Understand the AI Arms Race
37:16 Who Will Be the First to Create AGI?
40:53 Is the Alignment Problem Still a Problem?
50:42 The Opposing View to AI Safety-ism
54:55 Most Concerning Current AI Advancements
59:36 How AI Will Influence Content & Social Media
1:04:57 How Accurate Can AI Expert Predictions Be?
1:08:40 Something Much Worse than Existential Risk?
1:11:43 Why AI Won't Save the World
1:19:29 Where to Find Geoffrey

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify - https://spoti.fi/2LSimPn or Apple Podcasts - https://apple.co/2MNqIgw
Get my free Reading List of 100 life-changing books here - https://chriswillx.com/books/

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

Guest: Geoffrey Miller
Host: Chris Williamson
Jul 6, 2023 · 1h 20m

EVERY SPOKEN WORD

  1. 0:00–2:43

    Intro

    1. GM

      So what we're facing is potentially AI systems that are smarter than any human at doing a wide range of things, but also that are potentially 100, 1,000, maybe 100,000 times faster than humans. I think when we confront very fast, powerful, general-purpose AI systems, that's the kind of situation we're gonna be in. We're gonna be out-classed, not just in terms of intelligence, but also reaction speed.

    2. CW

      Why are you, as an evolutionary psychologist researcher, talking about AI?

    3. GM

      I just love getting into trouble and, uh, making, making trouble on Twitter. (laughs) No, I actually... Look, long story short, when I started, uh, my PhD program at Stanford in... way back in 1987, right? I was studying cognitive psychology and cognitive science, and pretty quickly, uh, we got one of the leading neural networks, uh, researchers at Stanford Psychology, a guy named David Rumelhart, who worked very extensively with Geoffrey Hinton and lots of other people. Um, and so I started in grad school doing a lot of work on neural networks and machine learning and, uh, genetic algorithms and so forth, and then I spent, um, most of my post-doctoral years at University of Sussex also in a cognitive science department doing autonomous robots and, uh, sort of applying genetic algorithms to evolve neural networks. So a long time ago, I was sort of an early adopter of machine learning, and then I got sidetracked into this evolutionary psychology thing, you know, for about 30 years. (laughs) But recently, um, uh, since about 2016, I've, I've become concerned and fascinated by rapid progress in, in AI, particularly deep learning and the large language models, and I sort of fell in with this gaggle of effective altruists, right? This movement who are quite concerned about, uh, existential risks, risks to all of humanity, and a lot of them were very concerned about how AI could play out. So, uh, the last few years, I've been reading a lot about this. Uh, since last summer, I've been publishing a bunch of essays on the Effective Altruism Forum and gotten pretty active on Twitter, uh, the last few months about AI x-risk. So to, to the casual observer, right? They might go, "Well, who is this psychology dude suddenly interested in AI?" Well, honestly, I've been fascinated by AI ever since I was a high schooler reading science fiction, you know, and ever since grad school learning about cognitive science.

    4. CW

      Okay.

    5. GM

      So that's my long story short, that wasn't actually very short.

    6. CW

      Is it right to say that

  2. 2:43–6:14

    Is AI Viewed as an Existential Risk?

    1. CW

      most of the researchers in the existential risk world see AI as one of the, if not the premier primary risk that we're facing?

    2. GM

      Yeah, absolutely. There's a great book by Toby Ord, O-R-D, um, who's an Oxford moral philosopher, but he has worked a lot on existential risks. So he did a book called The Precipice, right? And he actually tries to quantify the different extinction risks that we face. Some of them are really, really low probability, but really hard to fight, like if there's a local gamma ray burster, right? Or, uh, a supernova. Very, very low likelihood, very hard to defend against. Other stuff like asteroids, right? Which get a lot of, uh, a- attention. Very, very low likelihood, like there's probably less than a one-in-a-million chance we're gonna get a dangerous asteroid in the next century. If it comes, that could be bad, but, you know, there's stuff we can do about it. Whereas, uh, Toby Ord estimates, estimates that the risk of extinction, human extinction through AI in this century is about one in six, and I think that's sort of in line with a lot of the estimates that many experts give. The other, the other big risks are basically nuclear war, which is still an issue, right? After, you know, 70 years of, uh, thermonuclear weapons being around. So nuclear war, uh, possibly genetically engineered bioweapons could be really bad. It would be like COVID on steroids that could wipe out a lot of people. But, uh, the other, the other one is AI. So those seem to be the big three, AI, nukes, and super germs.

    3. CW

      Yeah, it's an interesting one, man. I remember that, uh, sheet, that chart that he has. It's burned into my memory. In my reading list that I've sent to a million people, it's one of the five books that I think everybody should read. If you want a primer on x-risk, The Precipice is the place to start. And yeah, one-in-six chance that humanity goes extinct within the next 100 years due to AI. And I think that (clears throat) the word the precipice is about a squeezed bottleneck, a treacherous path beyond which there could be this sort of glorious future. Uh, but right now, there is a very particular, very important forcing function for... we don't know how long it could continue, but it's definitely, we're definitely not far off it at the moment.

    4. GM

      Yeah. The way I visualize it, I like the precipice metaphor that it's this narrow path sort of up a mountain, but if you've ever seen the movie Free Solo about, uh, what's his name? Alex Honnold climbing up Half Dome, right? I almost think of it as like we've been doing a walk through the woods as humans for like the last 200,000 years, and the, the level of risk we face is relatively low, and suddenly we're climbing up Half Dome without ropes or pitons or any safety gear. (laughs) And if we can just kind of make it to the top, then I think we'll be, you know, at a relatively lower-risk state, hopefully in several decades. Um, but a lot of people in the effective altruism movement think this is what they call a key century, a time of particularly high elevated risk when humanity has to be extra, extra careful and smart-... and risk-averse and very, very self-aware about what we're doing.

    5. CW

      Okay, so

  3. 6:14–15:47

    How the Perceived Risk of AI Has Evolved

    1. CW

      lay out the landscape of AI risk. Why is it something that we should be concerned about? How have we got ourselves here? What's changed over time?

    2. GM

      I think the basic intuition that lots of people are developing now is that there are gradations of intelligence in the natural world, right? Squirrels are smarter than squid. Um, (laughs) monkeys are smarter than squirrels. We're smarter than monkeys. But we are not the ultimate level of intelligence. We can easily surpass, uh, human reasoning and planning abilities in lots of ways. And in fact, right, everybody on their smartphone already has apps that are better at doing certain narrow things than we ever could. Like, Google Maps is better at figuring out where to go than I would be with a paper map. Um, face recognition, right, um, is way better (laughs) than, than I am at face recognition. I'm, uh, I've got a little bit of prosopagnosia, like recognizing people. You know, like at the, the Human Behavior and Evolution Society conference we were both at two weeks ago, it's a little challenging for me. Uh, computer vision systems have gotten very good at it. So one issue is we are approaching the point where AI systems are getting more and more general-purpose and smarter and smarter across many, many different domains, and that represents a kind of major evolutionary transition in intelligence, where we could be outclassed. A second thing that I worry a lot more about (laughs) than some people seem to is just the raw speed issue, right? If anybody's played around with ChatGPT, right, and you've asked it to write a little essay or an outline or any kind of language, you know, oh my god, it is so fast. It is way faster at writing material than humans could be. And it's not even particularly optimized to be fast in that way. So what we're facing is potentially AI systems that are smarter than any human at doing a wide range of things, but also that are potentially a hundred, a thousand, maybe a hundred thousand times faster than humans. 
And the way I like to think about that is, you know, you might be familiar with the, uh, (laughs) the speedster superheroes, like The Flash, right, or Quicksilver, where once they do their kind of super speed, and they're running around, it's as if everybody else is just frozen in place. I think when we confront, um, very fast, powerful, general-purpose AI systems, that's the kind of situation we're gonna be in. We're gonna be outclassed not just in terms of intelligence, but also reaction speed, right? You could potentially have an AI trading bot, right, that's trading equities or crypto or whatever just way, way faster than any human can follow. Um, you could have military AI applications, right, that can kind of simulate scenarios in terms of, um, tactical applications or firefights, where they can kind of run through, like, every possible way that they could engage with an enemy, um, and kind of spin, spin out these, these simulations and then just completely outclass an opponent in terms of tactics and strategy. So I think these two things, right, AIs being smarter, potentially, and also extremely fast, um, (laughs) it takes a while for the full implications of that to sink in, but I think they're, they're kinda worrying.

    3. CW

      Well, let's think about a JCB digger, right? It's larger and stronger than a human, but it is a tool that is under our command. So why should we be concerned about an AI which is faster and smarter than us? It just makes us do things quicker and better than we would have known how to do it, surely.

    4. GM

      Yeah, if we're just outsourcing, (sighs) (smacks lips) you know, suggestions to the AI... Like, with Google Maps, we're, we're saying, "How do I get from A to B," right? And Google Maps is not actually taking control of our Tesla. It's not driving for us. We're not outsourcing our agency, right? That's relatively safe, right? The AI could still manipulate us in lots of ways. It could nudge our decisions in certain directions for its own ends or for the, the interests of somebody else, like (laughs) the people who developed the AI. But once you give it agency, once you outsource decision-making powers, that's where I think the real danger starts to come in. Um, so for example, you know, there's a big difference between, uh, like, a military AI that has the ability to suggest certain courses of action, like "Steer this carrier strike force towards this ocean for this purpose," right, versus an AI that actually has control and decision-making, uh, power over, like, "Okay, it's time to launch a bunch of F-35s to go bomb this country." The trouble is, because of the huge speed advantage of AI, there will be very, very strong incentives to give it agency, right, to make it able to make decisions in a sort of perception-decision-action loop that's, like, way faster than a human could do. Um, there will be commercial incentives to do that if you're in any kind of competitive environment, like, um, finance or military applications or even, uh, you know, market research, so that you can do, um, (smacks lips) optimization of some, you know, anything you wanna optimize, um, better by, by sort of outsourcing some agency to the AI.

    5. CW

      ... as of yet.

    6. GM

      That was a little vague, but if, if that made sense, let me, me know.

    7. CW

      It does. But a- as of yet you haven't talked about artificial general intelligence or some-

    8. GM

      Mm-hmm.

    9. CW

      ... Terminator, Arnold Schwarzenegger apocalypse or it becoming self-aware, recursive self-improvement, machine extrapolated volition. I hav- I haven't heard any of that, so presumably in between now and a concern that I think was very prevalent maybe a decade ago also when Toby O-, uh, uh, Nick Bostrom's Superintelligence book came out, which was that once you reached singularity, there's gonna be all of these problems. It seems like there's some pretty dangerous gradations between now and there. So what, what's happened in the world of AI risk concern over the last decade in terms of how that was a problem, it's changed, and then what, what are the gradations between now and, and where we could get to in future?

    10. GM

      Yeah, so when Nick Bostrom, an Oxford philosopher, uh, published Superintelligence, which is a great book, another must-read (laughs) , it's a little technical, it's challenging. I think that was-

    11. CW

      Audible it, audible it. Don't try and read it.

    12. GM

      Okay yeah, that was 2- 2013. Bostrom emphasized very heavily the notion of, like, a self-improving takeoff, where like, an AI could get pretty smart and then it starts optimizing its own software and potentially hardware, and then you get a kind of, um, explosion of capabilities that could potentially happen very, very quickly. Like the AI could sort of go from, like, human level, level intelligence to superintelligence that's smarter than all humans who have ever lived in a matter of perhaps days or weeks or, or months. Now, that's a super scary scenario, I think it's legit to worry about it. I take a slightly different view, and I think a lot of people are also taking a different view that even fairly narrow AI that's n- like, not even as smart as humans in many ways, could still be extremely dangerous. For example, if you have a narrow AI that is very, very good at designing bioweapons, right? And doing kind of gain-of-function research through computer simulating, "Here's how a virus could spread through the world," you could have, like, a bioweapon narrow AI that actually invents an extremely, um, dangerous, uh, pandemic, right? You might think, "Nobody sane would want to do that." Well, there are lots of insane people in the world. There's lots of nihilists and terrorists and, and, you know, de-growthers and people who don't like human civilization who might be motivated to use that kind of narrow AI. You can also have narrow AI applications that could be very politically destabilizing. You know, you could have extremely good video deepfake technology that makes it look as if Vladimir Putin or Xi Jinping or, or, or Joe Biden has declared war on some other major, uh, nation state, and that provokes an extremely panicked, uh, military response, right? That could lead to, like, global thermonuclear war, and we're not really very ready for that kind of thing. And, and that stuff could happen, you know, within a couple of years. 
Like, people debate when will we get artificial general intelligence. It could be five years, 10 years, 50 years, who knows? But these narrow AI applications are coming down the pike very, very quickly and they could be pretty destabilizing.

  4. 15:47–20:07

    Rapid Advancements in Neural Networks

    1. CW

      What has happened with the development of neural networks and large language models that maybe caught a lot of AI x-risk safety alignment researchers unawares, and, and kind of how has that changed the viewpoint of concerns around AI?

    2. GM

      (inhales) What we're basically seeing is (sighs) very rapid advances in, uh, hardware, um, you know, memory size and speed. So when I was doing neural network research back in the late '80s, right, we were playing around with networks that had, like, maybe a few dozen, maybe a few hundred units and a f- maybe a few hundred or a few thousand parameters, meaning the weights in the network that kind of connect (laughs) the little simulated neurons together. So a few thousand parameters back then. Now it's like trillions of parameters in the large language models, 'cause the hardware is just better, right? We, we can make, uh, GPU chips that are much, much more powerful. And once you get to a certain size of network with enough hidden layers, learning, using the deep learning methods, you get kind of powerful emergent properties popping out that nobody was really quite prepared for, and that's where we've been blindsided by ChatGPT.
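The parameter counts GM describes can be illustrated with a minimal sketch: in a fully connected feed-forward network, every connection weight and every bias is one parameter. The layer sizes below are hypothetical, chosen only to contrast late-'80s scale with modern scale.

```python
# Minimal sketch: counting "parameters" (weights + biases) in a fully
# connected feed-forward neural network. Layer sizes are hypothetical.

def count_parameters(layer_sizes):
    """layer_sizes like [100, 50, 10] means 100 inputs, one hidden
    layer of 50 units, and 10 outputs."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # one weight per connection between layers
        total += n_out         # one bias per unit in the receiving layer
    return total

# A late-'80s-scale network: a few hundred units, a few thousand parameters.
print(count_parameters([100, 50, 10]))  # 5560

# Modern LLMs count parameters the same way, just at billions-to-trillions
# of weights, a scale made feasible by GPU hardware.
```

The principle is unchanged from the networks of the late '80s; only the scale, and the hardware that supports it, is different.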

    3. CW

      What like?

    4. GM

      Just like, people kind of thought, "Well, if, if you make these large language models kind of predict, like, the next language token, and you feed them a lot of information about the whole content of the internet, um, then maybe they might be able to answer some basic questions," right? Kinda like a Google search. But lo and behold, th- they're way more powerful than people expected, right? Like you can ask ChatGPT to do all kinds of things that really wasn't designed to do, that people didn't kind of grow it to do, like write an outline for a screenplay, um, figure out, uh, (laughs) how to write a summary of AI existential risks. GPT's pretty good at that, in a kinda self-recursive way. Um, it can do math. It can do computer programming. It can do an awful lot of capabilities that people really... (sighs) They thought like, "Maybe that's 10 or 20 years away," and boom, here it is in 2023.... available to everybody, available to the 100 million users of ChatGPT. So that's where the shock has come, that once you get to, you know, a multi-trillion-parameter large language model, man, it's looking pretty close to artificial general intelligence. It's not quite there yet, and it's still fallible, and there's still a lot it can't do, but, you know, if you, if you had a time machine (laughs) and you took ChatGPT back in a laptop to 10 years ago in 2013 and you asked people, "Does this look like, um, artificial generan- general intelligence?" They'd go, "Holy moly, yeah, that's pretty effing close. That's way more advanced than we expected it would be by 2023." So that's the concern. Like, wh- Th- the fact that you can be so surprised by the pace of development means maybe the next step, whatever that is, is gonna blindside us as much or even more.
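The "predict the next language token" objective GM mentions can be sketched with a toy bigram model: given the current token, predict the token that most often followed it in training text. Real LLMs condition on long contexts with trillion-parameter networks, but the training objective is this same next-token prediction. The example text and function names below are purely illustrative.

```python
# Toy bigram "next token" predictor: for each token, count how often each
# token followed it in the training text, then predict the most frequent one.
from collections import Counter, defaultdict

def train_bigram(text):
    tokens = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    if token not in follows:
        return None  # token never seen in the training text
    return follows[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # 'cat' ("cat" followed "the" twice, "mat" once)
```

The surprise GM describes is that scaling this simple objective up, vastly more parameters and data, produced emergent abilities (math, code, summaries) that the objective never explicitly targeted.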

    5. CW

      And this is the same group of people who said it's one in six as the chance that humanity goes extinct due to AI within the next century, so you... (clears throat) I don't think by anybody's standards that would be considered a conservative estimate. It sounds crazy, you know? You're rolling a dice, and on one side is a button that destroys humanity. So, you know, it- it, again, just that- that's quite a stark claim, I guess.

    6. GM

      Yeah, one in six is literally Russian roulette. So if you're playing Russian roulette and- and the head is not, like, a single individual's head but your whole species, that's- that's what you're doing.

    7. CW

      So what is the definition

  5. 20:07–28:25

    The Next Level for ChatGPT

    1. CW

      of AGI, and why doesn't ChatGPT yet m- breach it?

    2. GM

      AGI means artificial general intelligence, and the way that the research groups themselves, like DeepMind and OpenAI and Anthropic, the way they define it is basically an AI system that can do, um, just about everything a human can do in terms of cognition and perception to about the standard that a professional would do it in a job, right? So with an AGI, you should be able to train it to be as good at medical diagnosis as a doctor or as good at teaching a class as- as I am as a (laughs) professor, as good at playing chess as a, as a chess grandmaster, as good at traini- trading equities as a good Wall Street trader, and everything else, right? And the real power comes from the fact that once you have an AGI that can do all of that, that's fairly general, um, purpose, you can copy it, right? You can copy it, and you can make one copy of it do one human job, another copy do another human job, and then they could potentially even trade information about how you, how you do this. So, you know, the explicit goal of OpenAI and its- its CEO Sam, uh, Altman is create, uh, AGI as fast as possible and make it so it can automate most human jobs as- as fast as possible. So that raises lots of issues about unemployment, um, but it also raises lots of issues about existential risk, because among the many jobs, right, that an AGI could learn to do would be, like, be a really good terrorist at making bombs, be a really good military strategist in terms of overcoming Russia or China or America, um, be a really good spy, be a really good propagandist who can shape the outcome of elections, et cetera. So the- the point of AGI is it should be able to do anything people can do about as well as people can do it, and that includes not just all the good stuff, right, like being a- a good veterinarian who fixes dogs or being a good nurse who takes care of people with psychiatric disorders. It also means all the bad stuff that bad people can do.

    3. CW

      What do you think of Sam Altman?

    4. GM

      I think he's a brilliant guy, and I think he has what to him is a compelling vision of the future, and I think he genuinely believes that, uh, developing AGI will lead to a kind of awesome human utopia. Um, he talks as if he understands the extinction risks, but I don't... I- I think there's some deep, deep cognitive dissonance, right? 'Cause on the one hand, if he really took the extinction risks seriously, I think he would shut down OpenAI. They would no longer do research. They would say, "This is radioactive. This is toxic. This is crazy. This is Russian roulette. Let's not do this." Um, and he's not doing that. He's not shutting it down. He's sort of pushing more or less full steam ahead, and he's making some little noises about we need to be careful, and we need to be, you know, uh, we need to get good regulation, blah, blah, blah. But I don't think it's really a heartfelt, um, appreciation of the kind of Toby Ord point that, dude, you're talking about one in six at least risk of human extinction this century. Uh, to most people in the world-... that sounds absolutely insane and reckless, and it's something we did not consent to.

    5. CW

      What do you say to the people that would push back and say, "Well, look, Sam Altman, at least he's fighting for the good guys. He's somebody that's from the West. The choice is between us getting there first or Russia getting there first, or China getting there first. This is a winner takes not only all, but takes the world and takes the world for the rest of time, and then you've got potentials of, uh, bad social lock-in," or whatever it's called that William MacAskill came up with. "We, we, he's on the side of the good guys. If the good guys don't win, the bad guys win. Therefore, we need to make sure the, the good guys win."

    6. GM

      I think it's worth asking j- okay, like, (sighs) I'm American. Lots of Americans think we are automatically the good guys by definition. Anything America does is good, anything that a rival does is bad, and we must win at all costs. One thing I would say is, you know, if it's an arms race into a concrete wall, it's not really a race that you want to engage in, right? It's kind of like a game of chicken, but where nobody can swerve. Um, I think if the likely outcome of the arms race is extinction, then everybody who's contemplating the arms race should try their best to avoid, to avoid getting caught up in it.

    7. CW

      How are you gonna coordinate this? How are you gonna coordinate China and Russia-

    8. GM

      Yeah.

    9. CW

      ... to get on board with your very laudable altruistic desire to get Sam Altman to just go to b- the Canary Islands-

    10. GM

      Mm-hmm.

    11. CW

      ... for the rest of his life without a laptop or an internet connection?

    12. GM

      Yeah. I think there's, th- the, the traditional approach to this, right, is what's called the AI governance model, where you get a bunch of policy wonks and Washington, DC insiders and, uh, people in, in the UK government in Whitehall thinking really hard about how to regulate AI so it's benign, and how to reduce the, the arms race dynamics. And I think that's fine, and that's a good, worthy thing to do. Unfortunately, I think it's way too slow and it's way too easy for the AI industry to, to sort of capture what happens in their own interests. Um, and honestly, the politicians who are involved in this simply don't understand AI well enough, I think, to have a very, um, sensible way of, of approaching this issue. A second strategy, which is something I've been attic- advocating, uh, recently is a little more informal, a little more grassroots and kinda bottom-up, which is, I think there's certain industries where it's okay to just look at them and go, "That's bad. That's evil. That's reckless. We should stigmatize that industry. We should stigmatize everybody who works in that industry, who supplies anything necessary to that industry, who finances that industry, and it's just a bad industry and we want to try to slow it down." We have done that with many, many, you know, uh, industries in, in the past. Crypto (laughs) has been handicapped very, very successfully by adverse PR and moral stigmatization campaigns, by politicians and central bankers and so forth. Uh, the arms industry, right, has been heavily stigmatized. There's lots of ethical investment, um, criteria for, uh, investors that say things like, "Hey, let's not invest in alcohol, tobacco, gambling, arms trading, (laughs) uh, human trafficking," et cetera. Right? And I think, um, at the moment, there's quite a bit of popular opposition to the AI industry and concern about these risks, and I think we should kinda normalize people being able to say, "I didn't vote for this. I don't support this. 
I don't want these extinction risks imposed on me and my kids. And, uh, the people who doing, who are doing it, um, should be stigmatized."

    13. CW

      I suppose one of the problems

  6. 28:25–37:16

    Helping the Public Understand the AI Arms Race

    1. CW

      is that with weapons or crypto or gain-of-function research coming out of a BSL level two lab or something that's not sufficiently secure, people are able to observe, uh, experience, and imagine the problems quite easily. At the moment, all that they've found is a really cool way to get them to write short essays or tell them jokes for their best man speech. So as of yet, the experience of AI, especially neural nets and l- uh, large language models, has not been what you are concerned about moving forward, which means, where is the incentive for people to get on board with this?

    2. GM

      Yeah. I think there's an important role here for, (sighs) um, imagination and fiction and scenario-building. So remember when, when people were very worried about global thermonuclear war? Like, when I was in college in the '80s, this is all we talked about and all we worried about is, like, how long is it gonna be till the US and the Soviet Union have a massive exchange of ICBMs and we all die? Um, we could visualize that stuff very clearly because, like, Hollywood movies and TV series did a pretty good job of showing what that would look like. None of us had personal experience of Hiroshima or Nagasaki, right? We kind of read the accounts, and it was, it was horrible, but, um, when you're trying to imagine potential harms from new technology, all you really have to go on is sort of what experts say the risks are and then the way that, that, that screenwriters and directors-... uh, visualize those risks in a way that the, the public can understand.

    3. CW

      I don't disagree. The problem you have is that you're not only try- like, uh, no one has ever said, "Look at that-"

    4. GM

      Mm-hmm.

    5. CW

      "... atomic bomb. It gives me so much pleasure and it's so cool." No one has ever said the same thing about an engineered pandemic. But when it comes to ChatGPT, and its iterations downstream, people are finding benefits in it in the now, and they're trying to be told you need to let go of this thing which you see as a positive in order for something which you can't foresee as a negative.

    6. GM

      Yeah, and this is where I think it's important for people to kind of tap into their, uh, their kind of, I think, increasing sophistication that there can be very, very seductive technologies that can have very toxic kind of social side effects. Even, even the s- you know, the discussion about social media itself, I think has moved from, "Oh, wow, this is cool. We can connect with our grandparents, and we can find people to date," to, "Oh my God. Is this creating, like, mass mental illness in Gen Z in a way that we, we really need to rethink how TikTok and Instagram and, and so forth operate?" So I think people have the ability to understand some new technology can be very seductive. It can look great. It's new and shiny, but, you know, there might be, um, kind of a viper hiding inside it that (sighs) could be pretty poisonous. Um, the, the other thing I would, I would add is, you know, when I was... I was teaching online courses for Chinese University of, uh, Hong Kong, Shenzhen, couple years ago, and we talked quite a bit about AI extinction risks. And this is a bunch of Chinese undergrads in Shenzhen. Very smart. They understand existential risk. They're tuned into nuclear war. They understood (laughs) bioweapons 'cause the COVID pandemic was raging, and they think quite a bit about AI, right? 'Cause China had said, "We want to be the leading AI, uh, superpower by 2030." That was their plan a few years ago. And these Chinese students were, like, perfectly willing to understand the risks and to stigmatize AI, right? They had a kind of moral imperative and concern for the future of, of humanity that was just as strong as what my, my American undergrads have. So I think we have this stereotype in the West that, um, other countries like China are sort of full of, um, unthinking (laughs) automatons who've been programmed by their government and their authoritarian regime so that they will just do whatever, you know, Xi Jinping says. 
But I think actually there's quite a bit more room for, uh, a kind of global grassroots opposition to AI, not just in America and Britain, but also in other countries that are kind of key, um, you know, potential arms race, uh, players.

    7. CW

      Well, isn't China a country in which they've refused to make public a lot of the neural nets and the developments in AI, unlike in America, where citizens can just go in and play around with it? China said, "Well, we can't be sure that this thing isn't going to start telling everyone about Tiananmen Square."

    8. GM

      Mm-hmm.

    9. CW

      "So because we don't have control over it, we're not going to let it loose with the public," which is... It, it has to be curtailing the development because it's not able to do any of the learning that it would've done had it have been iterated over 1.25 billion users, all of whom are asking it how to make a cake this evening.

    10. GM

      Yeah, I think it, it's... (sighs) It's very interesting because insofar as the Chinese government really wants AI mostly for purposes of, of, of social control and social stability, and, and censorship and reducing crime and reducing terrorism, right? They have a much narrower range of AI applications than your typical American AI company wants, right? Chinese government, I think, is, is not really interested in the kind of techno-utopian vision of like the singularity and transhumanism and, "Let's all, like, upload our minds into the matrix," the way that you see in a lot of the Bay Area AI enthusiasts, right? They're not really into that. They just want, like, China to be stable and prosperous. Um, so here again, what's happening, I think, is the American AI companies are at the cutting edge, right? They're far and away more advanced, as far as we know, (laughs) than anything happening in, in China, much less Russia or Brazil or the UK. Um, and they're just kind of trying to play catch-up, right? What's happening is, like, we're far out in front in terms of the arms race. If I was a Chinese, uh, policy expert, I would be freaking out. I'd be wanting to, to play catch-up. I'd be very concerned about AI having a kind of a- America having a kind of AI hegemony, and it would, it would look like a threat to me that I have to respond to. So we are setting the pace in terms of the arms race. If we slow down, right, China might go, "Whew, thank God. Like, we don't have the talent or the resources to play this arms race. We can... Okay, if America's relaxing, we can relax too. We can focus on other issues," (laughs) like, you know, how do you get Chinese people to have kids?

    11. CW

      (laughs)

    12. GM

      Stuff like that, right? (laughs) Um-

    13. CW

      Use AI.

    14. GM

      Yeah. Uh, AI matchmaking, right?

    15. CW

      Yeah.

    16. GM

      That mi- that might be a key, key application for them.... so I think, here again, Americans have to look in the mirror and go, "To what extent are we really setting the pace with the arms race? Are we forcing other countries to try to catch up?"

    17. CW

      That's a really-

    18. GM

      And maybe if-

    19. CW

      ... good point.

    20. GM

      ... maybe if we slow down (sighs) we can all, you know, take a step back, take a breath, pause, think, "W- what, what exactly are we doing? Should we be playing this Russian roulette?"

    21. CW

      Yeah, that's a very good point that I hadn't, I hadn't considered. Uh, I suppose as well that the wake of what is on the internet and what is available and what, what is, um, public knowledge about what's happening in all of these different neural net companies will be slipstreamed in some regard by foreign actors. You know, the, the source was, I think, a long time ago, open sourced, and then it's no longer open sourced, which means that you can't fully see what's going on inside in terms of workings. But I'm sure that you can determine a, a good amount of stuff, or at least a, a non-insignificant amount of stuff from that. One of the

  7. 37:16–40:53

    Who Will Be the First to Create AGI?

    1. CW

      words that we haven't used yet that would've been used an awful lot 10 years ago was "alignment problem". So, how relevant is discussing the alignment problem now, uh, and a- and actually, uh, before you can get to that, does it look like, if you were to put your money on the table, the front-runner contender for creating AGI, would it be neural networks? Can, can a, a large language model become sentient and be the AGI thing?

    2. GM

      (sighs) (clicks tongue) I don't know. On this point, right, there's, there's a big variety of opinions and, um, on the one hand, people like, uh, my friend Gary Marcus at New York University argue that large language models just based on deep learning and just based on neural networks cannot do, um, many of the key cognitive tasks that humans do, so can't reach AGI. And he's got various arguments for that and some pretty good recent books on AI. Um, on the other hand, you have people saying, "Well, (sighs) people keep underestimating deep learning, and keep, uh, underestimating what, what it can do, and as far as we can see, there aren't really very many, uh, hard constraints, uh, you know, against it. And moreover, huh, the human brain looks an awful lot like a large neural network, right?" We don't exactly learn using deep learning methods as, uh, you know, GPT does, but we're basically just a bunch of, um, you know, neurons with connections. So-

    3. CW

      But what do you, what do you think? Is it just, can we just layer transistor upon transistor upon reinforcement, and will we arrive at something that approaches sentience and/or AGI?

    4. GM

      (clicks tongue) Well, most of the (sighs) work I did in, in neural networks back in the day was, was based on the assumption that, no, you can't just train a big, random, kind of incoherent, formless, blank slate (laughs) neural network to do anything. It needs a bunch of structure. It needs an architecture. Um, it might even need an evolved architecture where you have to try a bunch of different ways of wiring up large networks before you can get something that's really smart. Now, I'm more humble. I don't know, you know? Uh, I was excited about that 'cause I'd studied a lot of evolutionary biology and animal behavior, and it looked like, uh, simple nervous systems often had quite a bit of architecture to them. You know, you look at a, a bumblebee or an ant nervous system and it's, it's highly architected. It's not just a glob of neurons. There's, like, ganglia doing different things. There's perceptual, um, (clicks tongue) clumps of nerve cells doing particular things. But at this point, I don't know. Like, it wouldn't surprise me that much if you, you know, GPT-8 or 9 (sighs) proved to be able to do pretty much everything that, that humans can do without having a whole bunch of intrinsic structure. On the other hand, Gary Marcus might be right, (sniffs) um, that you actually need a, a radically different approach. Um, I'm not sure it matters all that much, though, in terms of the AI safety issues, 'cause one way or another, the AI companies will figure out how to do AGI if, (laughs) if we give them the money and the talent and the resources and the social support, they will figure it out sooner or later.

    5. CW

      Why haven't we talked about

  8. 40:53–50:42

    Is the Alignment Problem Still a Problem?

    1. CW

      the alignment problem? Why is that a term that I'm seeing less of, uh, as well? Is it just that there's so much AI progression that it's not a problem, or is, is there a new issue with regards to this that neural nets have created?

    2. GM

      (sighs) People are still talking about AI alignment. It's just, um, there's so much more buzz right now about, th- you know, the amazing new capabilities, not just of the large language models, but also of, like, Midjourney and DALL·E and the other generative AI systems for creating images and videos, and the ones that can create music and amazing audio. Um, so people are excited about that. People are extra, extra alarmed about the AI risks, and everybody interested in so-called AI alignment is still trundling along working on it, right? I'm still writing essays about AI alignment. Um, it's just gotten slightly overwhelmed by all the other, you know, shiny new issues.

    3. CW

      What is AI alignment, for the people who aren't inducted?

    4. GM

      (sighs) The basic notion is, how do you get an AI system to be, quote, "aligned" with human values and preferences and goals? Um, and it's kind of like the, the, (sighs) the so-called principal-agent problem in, in companies. Like, if you have a bunch of investors...... right? And they are supporting a company and you create, like, uh, a board of directors, and the board of directors gives power to a CEO. How do you make sure the interests and decision-making of the CEO are aligned with those of the board of directors, and in turn, with the shareholders? Right? That's a kind of corporate alignment problem. Um, a political alignment problem would be how do you elect a president that (laughs) actually serves the best interests of the voters instead of their own interests? In terms of AI, it would be how do you create an AI system that actually respects what the, uh, the human users want it to do, um, or kind of what they really want it to do even if they can't quite (laughs) articulate everything about what they want it to do? Right? And there's, there's lots and lots of, uh, myths and legends going back thousands of years about the pitfalls of getting some power, like some genie that pops out of a lamp, where it says, you know, "What wishes would you like?" And you-

    5. CW

      King Midas turns everything into gold.

    6. GM

      ... right, and you, you give it wishes and then it, it interprets what you want sort of over-literally, right, in a way that's absolutely dis- disastrous. So AI alignment, um, in principle is about making sure the AI is kinda doing what you want and, and even in ways that you can't necessarily articulate, that fit with a whole background of, like, human common sense and, and moral norms, that we might not even be able to articulate to the AI system, but that we would still want it to ac- act as if it understands.

    7. CW

      Yes. However, different individual humans have got different values, and conflict of interests happen a lot. So which human values should AIs be aligned with?

    8. GM

      Yeah. Th- so this is exactly the issue that I've, I've been writing about in my, um, Effective Altruism Forum essays for the last year or so. I've pointed out, like, you can say we want the AI to just respect the views of its, its end user, the consumer who actually buys it. But what if the end user is a terrorist? What if the end user is a political propagandist? What if the end user is in a hostile nation state? What if they're a bad actor, right? Do you really want the AI to do what they want it to do? And second, if you try to aggregate, like, the collective will of humanity, right, y- where you go, "Well, the AGI should sort of do what people in general would want it to do if it could summarize all of our preferences." Well, then you get into interesting issues like, well, men and women have, like, different interests, so should the AI go with the 50% of people who are male or the 50% who are female, or whatever? People have different political views, right? Um, and in one essay I pointed out 80% of humans are still involved in organized religion, right? And yet, the AI industry is dominated by secular atheists who largely have contempt for religion. So if you have an AI that's trying to be aligned with people in general, and people in general have religious values that are being completely ignored and dismissed and mocked by the AI industry itself, right, (laughs) then how do you actually get alignment with, with people? They don't really mean alignment with what people believe. They kind of sort of mean like we want the AI to be aligned with what a good liberal, secular, humanist, democratic Bay Area tech bro values. Right? That's really the bottom line. That's, that's what AI alignment actually boils down to in practice.

    9. CW

      Yes. That ... Does it split the difference, uh ... I remember, uh, to not spoil the end of Superintelligence, but the, like, the good guy that, that comes in at the end is coherent extrapolated volition, which is, uh, in a world in which we can't be sure that we're going to give a program an instruction and that program is going to take the instruction and turn us all into paperclips or, or kill us so it can make a cup of coffee, telling the machine, "Do what you think we would have asked you to do had we had sufficient wisdom to ask you to do it." That is roughly coherent extrapolated volition. But even that, modeling human preferences needs to be done based on a group of humans. And which preferences do you mean? And when preferences come into conflict, which ones win? And if it's 50/50 between men and women and you split the difference between the two, like, is that optimal? Is that actually optimal? Or is it more optimal to swing it more toward one way or another? One group might be incredibly vehement about something, but it might be immoral. Okay, so given the fact that we've been trying to work out ethics and virtue and morality ourselves philosophically for thousands of years and have made some progress but not got anything that's too definitive, please try and put that into code for me. And, and which-

    10. GM

      Mm-hmm.

    11. CW

      ... which morals do you mean? Do you mean the morals of modern secular, 21st century, Western, industrialized, educated people? But why, why, why don't we use the ones from the Roman era, or why don't we d- say, "Well, let's wait for another 3,000 years and see what- which ones come up then?"

    12. GM

      Yeah. And I think there's, there's two additional, uh, sort of alignment problems I've been worried about. One is, um, what I've called embodied values. So, when we think of values and preferences, we typically think of things we can articulate verbally. Like, if someone says, "Wh- what do you want for dinner?" Like, we have a verbal answer we can give. But I made the point that, like, our brain is only 2% of our body mass.... right? Our body is full of all these other tissues and organs that have evolved their own kind of values and agendas, in a way, right? And I call these embodied values. Like, what your immune system really wants to do is fight off pathogens that might infect you. So ideally, you'd want an AI to be aligned not just with the values that our brain can articulate through our words, but that is also kind of, like, biomedically aligned with the interests of our bodies, right? So that we are kept healthy and well and live a long time. Now, if you ask people, "How do those embodied values actually work?" Like, we have no idea, right? We can't even articulate those. So methods for training AI system based on human verbal feedback cannot, even in principle, align with all these embodied values of our, of our bodies. A second issue is, you know, from an evolutionary viewpoint, the development of AGI would be a major evolutionary transition. It's a big thing that's comparable to the evolution of, like, um, DNA, or the evolution of multicellular life, or the evolution of nervous systems. Like, it's a big deal that doesn't just affect humans. It also affects, like, the other 70,000 species of vertebrates and the other 10,000,000 species of invertebrates and all the life on the planet. Now, if you ask, "Okay, how do you align AI not just with humans, right, but all the other kind of organic stakeholders, (laughs) all of the other life forms on Earth who might be affected by AI?" 
Okay, how does the AI learn the true interests of an elephant or a dolphin or a termite hive, or, you know, all the other life forms that, that matter? I've never seen anybody in the AI industry even sort of seriously talk about this, and yet they portray AI as this kind of world historical thing that will re-engineer the entire, uh, you know, planetary ecosystem.

    13. CW

      Yeah, it could quite quickly become the most super-intelligent, powerful Greta Thunberg ever created, just screaming from the top of some transistor hill. I, I,

  9. 50:42–54:55

    The Opposing View to AI Safety-ism

    1. CW

      I read a thing from, uh, Byrne Hobart and Tobias Huber talking about, uh, AI safetyism being more of a risk to society than anything the AI doomers are predicting. Why... I- it seems like you've put forward a case that suggests this is something we should be concerned about, et cetera, et cetera. What is the other side of the fence to this? What is the side of the fence that says, "Stop shitting yourself about safetyism. Let's just crack on"?

    2. GM

      I think if I was going to, like, um, put forward the best possible case against myself, right? The steel man, the, the anti-AI doomers, I would say something like, um, "If you pause or stop the development of AGI, there are huge potential opportunity costs. There are lots of human problems maybe that AI could help solve, right?" And the usual ones that people talk about are, "Oh, AGI could help us, like, solve climate change. It could help, uh, create peace and prosperity. It could reduce global conflict," blah, blah, blah. Um, I do not actually find those very compelling. I think there are often ways to, to solve lots of those problems without developing AI. The one potential application that does give me serious pause, that I've talked and, and thought about a lot, is longevity. Okay, if you had an AI that could seriously help, uh, develop longevity treatments and anti-aging treatments and help regenerative medicine research and help biotech so that we don't all have to die, right? I'm 58. I would like not to die. I would like to live another 100, 200 years. However long I want to live, I would love to live that long. Maybe if we pause AI, I don't get that benefit, right? Now, personally, I'm willing to die for my kids. I'm willing to forego longevity treatments if we reduce existential risk. Some people might have a different view. Some people might be like, "Well, screw you, Miller. I don't want to have to die just because you're scared of AI." I understand that. I respect that viewpoint. Let's have that debate. Let's talk about it. Um, my personal hunch is that (laughs) if we invested as much directly into longevity research as we're investing in AI, we could actually probably fix aging within, like, 20 or 30 years. Um, but (laughs) what's happening instead is, people are like, "We know it's really hard to get people to directly support longevity treatment. People are in a kind of pro-death trance.
They think it's weird and creepy to head f- towards immortality. So we know we can't sell that as a research program, right? So instead, we're gonna sell them on AI, and then AI will, like, magically deliver the anti-aging cures," right? I've, I've often seen this argument, right? "People won't directly support longevity research, but AI can solve longevity, so that's why we need AI." Does that make sense?

    3. CW

      Yeah, it does. It's, um... I don't know, man. I, I, I'm really trying to stay open-minded to this, but coming from a Bostromite background, you know, I wa- I've always, for nearly a decade now, been so w- wary and touchy about anything that even looks remotely smart that comes out of... You know, Siri. Siri was something that I remember there was a, "Oh my God," like, you know, "Is Siri gonna be conscious? What sort of problems are going to be caused?" Et cetera, et cetera.... but there def- there's a part of me that feels like, well, look, if you're going to work incredibly hard to fix longevity problems so that you keep people alive, you keep some people alive that are alive right now, and the outcome of that is that everybody is dead within two centuries, that seems like a bad deal. It seems like a, a relatively pointless deal. So, uh, we're talking about,

  10. 54:55–59:36

    Most Concerning Current AI Advancements

    1. CW

      uh, terrible outcomes that could occur, um, the tip of the spear or, or the top of the mountains, so to speak. Looking at some of the stuff that's going to occur in the interim between now and then, you know, 'cause this is, uh, w- how many transistors is it going to take? Will the large language models get us there? Is it GPT-8? Is it GPT-Never? Um, but some of the things we probably can be certain about are what the tools and the technologies are enabling at the moment, stuff like deception, deep fakes, uh, information sanitization and misinformation, elections, politics, persuasion, friend bots, loneliness. What are you most concerned about that is very high likelihood, maybe it's already here? What are the things that people should be looking out for with the rise of AI over the coming years?

    2. GM

      (inhales) I, I think the 2024, um, election cycle in America is going to be absolutely wild and shocking, and is going to involve a lot of, um, narrow AI applications in political propaganda and ads and deep fakes and speech writing, and sort of the mass customization of political propaganda. Um, 'cause one thing AIs can do is track, you know, individual preferences and values and priorities through social media interaction, and get a pretty good model of what every potential voter really cares about, and then potentially customize, uh, political messaging directly to each voter in a way that, like, pushes their own hot buttons very effectively, right? So, like, if you're a Democrat and you care a lot about racism but you don't really care about abortion, then you'll, you'll start to see customized ads that are, like, very anti-racist but they don't talk about abortion. That will be new. We haven't seen that before. Before, political ads were kind of like lowest common denominator TV spots or magazine ads that sort of tried to hit, like, the, the typical undecided voter. But now, I think in 2024, my humble prediction is, (laughs) we're gonna see a lot more, um, narrow AI used for purposes of, uh, political manipulation, and I think people will be shocked at how, (sighs) effective and persuasive it is. And I have no idea what the outcome of the election will be, but I think the outcome for a lot of voters will be to go, "Oh my god, we, we, like, don't have a sort of political immune system that can fight this very well."

    3. CW

      Does that mean that AI at the moment has got theory of mind?

    4. GM

      So theory of mind is like your ability to understand the beliefs and preferences of others. I think large language models do have, pragmatically speaking, theory of mind in that they have absorbed, you know, all the lessons that are available from the internet a- about how to understand b- people's beliefs and desires. So, I think if you ask, uh, like, a GPT, "Can you please write really good ad copy to advertise a particular good or service?" It's pretty good at that. Like, I know people in advertising who are like, "Oh my god, GPT is better at ads than (laughs) , than, than we are, and so we have to use it 'cause our rivals are using it," and I think that also applies to polit- political ads and speech writing.

    5. CW

      Yeah, so functionally, it is able to achieve the same thing as somebody that has theory of mind, and this comes back to, you know, a conversation that everyone is having, which is: when do you know if something is conscious or not, you know? I- the, the, um, Turing test turned out to kind of be a bit of an, uh, insufficient, uh, barometer for working out whether or not something is sufficiently intelligent, because there's no way that you can speak to a well-trained ChatGPT model and not think that there is something inside of there. But you don't know if there is any there there, right? Is this just a p-zombie? Is it just making all of the movements and the sounds and the shakes, and you roll this forward into a sufficiently fleshy, sufficiently pink, sufficiently right-proportioned robot that can move around your house, and you go, "That's a human. That's a human. I know it's a human. It looks like a human. It walks and talks like a human." So, at the moment, yeah, the, the function and the outcome that these things are able to achieve is so close to somebody that is able to do that. So, 2024,

  11. 59:36–1:04:57

    How AI Will Influence Content & Social Media

    1. CW

      election, lots of persuasion, lots of speech writing, um, what about social media and the news landscape and the information landscape and, and people, um, both producing and consuming content, you know, we're coming, we're still in this world of content creators at the moment, how do you think that that's going to be influenced?

    2. GM

      (inhales and exhales) You know, I think what we've got at the moment is ... Well, you know, Noam Chomsky wrote this book, uh, 30 years ago about, you know, uh, manufacturing consent, about the way that in, in so-called democratic societies, the way that power actually works is through subtle semi-voluntary propaganda, right? People are willing to go to school to absorb the indoctrination that we get in public schools, and we expose ourselves voluntarily to certain kinds of-... newspapers and magazines and TV news shows, and then they sort of guide us into what we should think and value. And the engineering of all of that has been done before by humans, by people who do, um, uh, who are Hollywood screenwriters and political speech writers and, uh, ad people, marketing people, public relations people, right? There are millions of humans involved in that venture of trying to manipulate public opinion in various directions. But now they're gonna have these AI systems that hugely increase their reach, their ability to customize messages to particular individuals, their ability to, um, kind of capitalize on big data gathered through social media that can do extremely fast iterative testing of messaging, right? Not just split tests or just focus groups like in traditional marketing, but that do the kind of things that Facebook does in terms of testing all the time which of millions of ads are most effective. So we're gonna have kind of this, (smacks lips) um, AI-powered, like, hegemony warfare, like worldview warfare, where people are advocating their, uh, political, ideological, religious belief systems and they're fighting against enemies, right? And there's gonna be a massive kind of ongoing culture war, which in a way is the only war that matters anymore. Um, but it's gonna be heavily, heavily shaped increasingly by, uh, AI tools.

    3. CW

      What about friend bots and people who just abscond from the real world for some virtual best mate or a girlfriend or whatever in their apartment? Do you think that this is something that's likely?

    4. GM

      Yeah, absolutely. This- this is something I wrote about in- in my most, uh, recent essay on, uh, anti- a likely anti-AI backlash. One thing that will provoke, I think, a backlash is, um, people using AI as fake boyfriends, girlfriends, and f- and friends, and getting a kind of, um, pseudo-intimacy and sort of validation from AI systems. And you might think, "Oh, surely people can't be that gullible that they would prefer interacting with just a smart chatbot." But if you think, like, we're talking about a chatbot that could potentially remember every single personal detail about you, that remembers all previous conversations with you, that can try out different ways of interacting with you to see what you like and what you don't like, that is much more attentive than any real life boyfriend or girlfriend, that has infinite patience for listening to all your shit and all your dumb stories and all your neurotic woes, right? In a way that no living lover would ever put up with. And it would be like the perfect, um, you know, combination psychotherapist and friend and girlfriend and- and mentor and confidant and- and all of that. And we're pretty close to be- being able to do that. So I do worry that real life social interaction will look like a pretty poor substitute for those kinds of AI pals. They might be-

    5. CW

      Why would that cause- why- why would that cause a backlash?

    6. GM

      I think it'll be a backlash because people who don't get caught up in having an AI boyfriend or girlfriend will look at people who are caught up in it, like Gen Z staying alone in their apartments and never going out on dates and not getting married and not reproducing, and they might go, "Oh my God, this is the most socially toxic technology ever invented. Like the birthrate is dropping. Nobody's dating. Nobody's having real relationships. This is not sustainable, and therefore we are going to do a moral backlash or religious backlash or political backlash that says, 'This is not the way we want civilization to go.'"

  12. 1:04:57–1:08:40

    How Accurate Can AI Expert Predictions Be?

    1. CW

      I saw an- a headline which you'll have seen, "50% of A- AI researchers believe that there is a 10% or greater risk that humans go extinct due to our inability to control AI," and the Center for AI Safety had a ton of researchers agree with the statement, "Mitigating the risks of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war." The open letter has been signed by more than 350 executives, researchers, and engineers working in AI, plus Elon Musk as well was a part of this movement. How effective are AI experts at predicting this AI development and the risks moving forward? And if 50% of the people say that there is a 10% chance that the technology that they're working on is going to end us, uh, what happens next? Uh, i- i- is there- are there gonna be picket lines of disgruntled, terrified AI researchers outside of OpenAI's lab? What's happening?

    2. GM

      Yeah, I mean, there is a movement actually (laughs) to do literal, uh, protests and demonstrations and- and picket lines. I'm involved in a- in a Slack group that, uh, you know, some members of that are actually organizing, like, uh, pickets in front of OpenAI or in front of DeepMind in London. Um, now of course the point of these letters, and I think I've signed all the (laughs) letters that are going around myself, we know the letters themselves will not stop the AI industry from doing what it's doing. However, the whole point of the letters is to draw public attention to the issue, right? To get...... uh, press coverage to get the general public thinking about these things, you know, hopefully tuning in- into, like, podcasts like this, and reading the books they need to read and becoming, uh, politically motivated to take this seriously as- as an issue. And I think in that regard, uh, the letters have been surprisingly effective. Like, the amount of press coverage given to AI risk in the last couple of months is hugely greater than anything in, like, the previous 10 years. And the government response, like, you know, Biden inviting AI industry executives to the White House and Prime Minister of Britain taking this very seriously and tweeting about it and having, like, AI safety summits in London, uh, there's a lot of public support for slowing down and the AI industry being held accountable and people asking hard questions about, like, what exactly is the endgame here? Like, mass unemployment and then extinction? Is that really (laughs) the direction we wanna go in? Um, and I think, you know, the AI experts themselves (sighs) I hope have a new humility about their ability to predict things, 'cause they- they'll know they've been blindsided by GPT. They- they didn't expect these kinda capabilities this quickly. 
So when certain AI researchers, like, let's say, la- Yann LeCun (laughs) , who makes fun of AI doomers a lot on Twitter, hopefully people like him will stay open-minded enough that they might actually kinda re-examine their biases. And maybe they'll have a kind of Geoff Hinton moment, right? Like, leading AI researcher, Geoff Hinton, going, at age 75, "Oh, no. Oh, no, I think my life's work might have been really actually kinda evil and imposing risks on people, and I'm gonna resign from Google and blow the whistle and make a big fuss about this."

  13. 1:08:40 - 1:11:43

    Something Much Worse than Existential Risk?

    1. GM

    2. CW

      Yeah. There was a tweet as well from Tim Urban that you responded to. Uh, Tim said, "Whether you're optimistic or pessimistic about the future, the fact is that we're facing an existential risk, at worst, and a vastly different future world at best. The world that we're used to may not be around for much longer, so let's really enjoy the world while we have it. Visit the places that you've always wanted to visit. Dive into that hobby you've always wanted to try. Spend quality time with your loved ones. Savor each sunny Saturday, each great meal, each moment of fun. If we end up looking back on these days with great nostalgia, we want to at least know we made the most of the time that we had." And you responded and said, "Existential risk isn't anywhere near the worst thing that we could have. There's something called S-risk as well, above and beyond X-risk." What's S-risk?

    3. GM

      S-risk is suffering risk. So with extinction, right, everybody's dead and then they don't experience anything anymore, and that's, like, pretty bad compared to experiencing things and being happy. But as anybody who's ever suffered chronic pain or torture will- will attest, like, there are things worse than death. There are levels of suffering that could potentially be imposed by new technologies that would make us wish we'd gone extinct. I personally haven't taken S-risk that seriously. I don't know that much about it. There's other people who are more expert on it than me. I've only read, like, a few science fiction novels that depict, like, really, really bad S-risks that could be imposed. There's a- there's a novel called Surface Detail by Iain M. Banks, where, long story short, in the far future some religious fundamentalists decide heaven and hell don't really exist, but we should make them exist, so we're gonna upload everybody's brain before they die into a simulated reality. And if we think they've been bad and naughty, we're gonna make them live in a simulated virtual hell for, like, subjective millennia and, like, thousands of years of suffering and torture and mayhem and death. And, like, that would be worse than being dead forever. So that's S-risk. Um, some people worry a lot about it. Um, I haven't really focused on it very much, but I think when people like Tim Urban say, "Yeah, it'll either be great or we'll all be dead, so just smell the daffodils and enjoy your steak and- and visit beautiful Austin, Texas," or whatever- whatever they think we should be doing, (sighs) I think, no, that, like, I want my daughters to grow up being very confident that, A, they won't suffer an extinction risk, and- and, B, they won't suffer, you know, an S-risk.

    4. CW

      Yeah, the, uh, take it a little bit more seriously. One final

  14. 1:11:43 - 1:19:29

    Why AI Won’t Save the World

    1. CW

      person who waded in was Marc Andreessen. I'm going to guess that you read his big essay, uh, Why AI Will Save the World recently. What were your thoughts on that?

    2. GM

      (sighs) You know, I used to have so much respect for Andreessen (laughs) . And I think he's just so witlessly stupid about the extinction risks regarding AI. It- it truly baffles me, like, how you can have a brain that big and be so... Honestly, I think what- what happens with a lot of these sort of well-respected rich elders, like LeCun or Andreessen, is, like, they've had a great career, they've done well in business, they've, you know, made a bunch of money. They are technically savvy. I have- I have no doubts they're- they have high IQs.... but I think they radically overestimate their ability to understand issues that they just haven't read very much about. And I think if you haven't read, you know, Nick Bostrom's Superintelligence, you haven't read Toby Ord's The Precipice, you haven't read some of the other key ideas, if you haven't read Eliezer Yudkowsky's work, then, like, you're just gonna be reinventing a bunch of very amateurish (laughs) objections to that kind of work that, like, were already addressed 10 or 20 years ago by the actual experts who have been thinking about this. So, this is one of those topics where it's very important not to just defer to people because they're rich or famous or smart. Like, you really want to dive deep on have they thought through these issues, have they engaged in meaningful, uh, conversations with other experts in the area? And if they haven't, they probably do not know what they're talking about.

    3. CW

      What would you do if you could step in? What would your prescription or policy be if you had an omnipotent, omniscient, God's-eye view?

    4. GM

      I think I would love for the general public just to tune into the issue, to apply their natural survival instincts and their parental instincts to go, "Wow, this looks like a threat to me, my family, my kids, my grandkids. This is a legit threat. I should take it just as seriously as I would take a local crime wave, or just as seriously as people took nuclear war back in the Cold War. And this is a matter of potentially life and death, you know, for me and my family." I want them to personalize it. I don't want them to just think this is some abstract science fiction scenario that is in, like, the distant future. This is the stuff that, like, could affect, you know, my 26-year-old daughter, or my 15-month-old daughter (laughs) , or, like, my actual kids and cousins and nieces and nephews and whatever. And once you take it seriously, then you're motivated to learn more, and to start moralizing the issue, and to go, "Are the people who are charging headlong into this, right, through some combination of, like, greed and hubris and prestige and whatever, are they on our side? Are they pro-human? Are they fighting for my family or are they just caught up in some kind of, um, sort of delusional project that is at heart reckless and evil?"

    5. CW

      And then what?

    6. GM

      And then if they decide this is reckless and evil, let's mo- let's effing morally stigmatize it. Let's say if you work in the AI industry and you are not spending most of your time worried about AI safety, then you're a bad person and I don't want to associate with you, I don't want to date you, I don't want to be your friend, I don't want to invest with you, I don't want to supply you with hardware or software or anything else. Um (clears throat) , I don't want you to be in our city, state, or country, and I want to, uh, you know, shut it down. Now, I'm not advocating for, like, a violent so-called, like, Butlerian jihad (laughs) where people rise up and they're like, "Smash all the machines and kill all the AI." Like, no, I don't want that, um, mostly 'cause it would be counterproductive in terms of PR. Like, violence doesn't work in the social media era; it just delegitimizes your cause. Um, but I think short of violence, it's important to, uh, be aware that, like, we can use all the techniques of persuasion and activism in this domain that people used in every other social movement that we're familiar with: the civil rights movement, the gay rights movement, um, you know, the libertarian movement, uh, crypto. Like, anything that's been at all successful in terms of the public rising up and saying, "We want a change of values," uh, we can use all that in fighting reckless AI development.

    7. CW

      I suppose the challenge, again, that you have is everyone is being distracted into a beautiful field with daisies growing in it and free cake recipes and essays written for them for school and university. So, yeah, I- I- I worry that the novelty and the immediate convenience that's been afforded by these new advances is causing it to seem both so benign and enjoyable and distracting and entertaining, uh, that galvanizing people to see this is goi- ... And presumably this is the, one of the primary challenges that you're facing.

    8. GM

      I think what, what I would say to people is, like, you can kinda have your cake and eat it too, in terms of, like, there's a whole range of incredible, um, software and narrow AI that's- that's absolutely wonderful. Like, if you look at the progress in, um, computer graphics in Hollywood movies, it's amazing. Like, I love Avengers movies. And a lot of that would be considered very advanced generative AI, right, by the standards of the 1970s. Right? The fact that many modern, uh, movies are largely kind of AI graphic generated, that- that's pretty cool. The fact that, you know, Google Maps can actually get you from A to B pretty reliably and taking into account, like, traffic, that is narrow AI. That's a huge benefit to many people. And there are probably hundreds of cases where you can have quite safe narrow AI applications that deliver huge quality of life benefits to people, but you don't have to go down the road towards AGI, right, or towards other kinds of highly risky narrow AI. And even in longevity research, like, you could have narrow AI that does biomedical research and synthesizes scientific literatures about which molecules might be helpful and that maybe even can- can help you run larger scale, uh, longevity studies, right? And maybe we can get longevity through that without having to go down the road of extremely, uh, dangerous AGI.

    9. CW

      Geoffrey

  15. 1:19:29 - 1:20:05

    Where to Find Geoffrey

    1. CW

      Miller, ladies and gentlemen. If people want to keep up to date with the work that you're doing, whether it be evolutionary psychology or AI risk, where should they go?

    2. GM

      They can go to my website primalpoly.com, and they can also see my essays on, uh, the Effective Altruism Forum, which is EA forum, where I- I publish quite a bit these days.

    3. CW

      Geoffrey, I appreciate you. I'm looking forwards to seeing what we get to talk about next time as well.

    4. GM

      My pleasure, Chris.

    5. CW

      Thank you very much for tuning in. If you enjoyed that episode, then press here for a selection of the best clips from the podcast over the last few weeks. And don't forget to subscribe.

Episode duration: 1:20:05

Transcript of episode Vx29AEKpGUg
