Modern Wisdom: How Long Could Humanity Continue For? - Will MacAskill
- 0:00 – 0:25
Intro
- Will MacAskill
We are at the very beginning of history. Future generations will see us as the ancients living in the distant past. What are the events that really could have civilizational trajectory level impacts? And then, finally taking action, trying to figure out, okay, what can we do to ensure that we navigate these challenges and try to bring about a wonderful future for our grandchildren and their grandchildren? (air whooshing)
- Chris Williamson
Given the
- 0:25 – 7:35
Introduction to Long-Termism
- Chris Williamson
fact that we're seeing James Webb Telescope images all over the place at the moment, kind of seems like a smart time to be thinking about far-flung futures and potentials for civilization and stuff like that.
- Will MacAskill
Absolutely. James Webb is making very vivid, um, and in high resolution, an incredibly important fact, which is just that we are at the moment both very small in the universe, (laughs) um, and also very early in it. So almost all of the universe's development is still to come ahead of us.
- Chris Williamson
That's wild to think about the fact, especially on our time scales, right? You know, you think about 20 years as being a very long time in a human lifespan, and then you start to scale stuff up to continents, to the size of a planet, to the size of a solar system or a galaxy or the universe, and it puts things into perspective.
- Will MacAskill
Yeah. Well, we're used to long-term thinking being on the order of a couple of decades or maybe a century at most, but really that's being very myopic. Um, I mean, how long has history gone on for? Well, that's a couple of thousand years. Homo sapiens have been around for a few hundred thousand years. The Earth formed 4 billion years ago. And the Big Bang was a little under 14 billion years ago. And if we don't go extinct in the near future, which we might do, and we might cause our own extinction, then we are at the very beginning of history. Uh, future generations will see us as the ancients living in the distant past. And to see that, we can just use some comparisons. So a typical mammal species lives around a million years. We've been going for 300,000 years. That would put 700,000 years to come. So already, by that metric, our life expectancy is very large indeed. But we're not a typical species. We can do a lot of things that other mammals can't. That creates grave risks, such as from engineered pathogens or AI, that could bring about our own demise. But it also means that if we survive those challenges, then we could last much longer again, where the Earth will remain habitable for hundreds of millions of years. And if we one day take to the stars, well, the sun itself will only stop burning in about 8 billion years, and the last stars will be shining in 100 trillion years. So on any of these measures, humanity's life expectancy is truly vast. If you give just a 1% chance to us spreading to the stars and lasting as long as the stars shine, well, we've got a life expectancy of a trillion years. But even if we stay on Earth, the life expectancy is still many tens of millions of years. And that just means that when we look to the future and when we think about events that might occur in our lifetime that could impact that future, that could change humanity's course, well, you know, we should just boggle at the stakes that are involved.
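For readers who want to check the back-of-the-envelope arithmetic here, a minimal sketch in Python. The 1% probability and the 100-trillion-year figure are the round numbers quoted above; the Earth-bound figure is an assumed midpoint of "many tens of millions of years", not a number MacAskill states:

```python
# Expected-value arithmetic behind the "trillion years" life expectancy.
p_star_faring = 0.01        # 1% chance we spread to the stars, as quoted
star_faring_years = 100e12  # last stars shine for ~100 trillion years, as quoted
earth_bound_years = 50e6    # assumed midpoint of "many tens of millions of years"

expectancy = (p_star_faring * star_faring_years
              + (1 - p_star_faring) * earth_bound_years)
print(f"Expected lifespan: ~{expectancy:.2e} years")  # ~1.0e12, about a trillion
```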
- Chris Williamson
And when you talk about long-termism, that is looking at the future as something which needs to be taken seriously. There is this sort of grand potential for human flourishing, for human life, for all of the good stuff that could occur for a very, very long time over a very, very wide distance, and we need to take it seriously.
- Will MacAskill
Exactly. Long-termism is about taking the interests of future generations seriously and appreciating just how big that future could be if we play our cards right, and how good it could be. And then from there, trying to think, okay, what are the things that could be happening in our lifetime, like engineered pandemics, like greater than human level artificial intelligence, like World War III? What are the events that really could have, you know, civilizational trajectory level impacts? And then finally taking action, trying to figure out, okay, what can we do to ensure that we navigate these challenges and try to bring about, uh, you know, a wonderful future for our grandchildren and their grandchildren and their grandchildren and so on.
- Chris Williamson
It's nice to hear or to think that you've been focused so much on long-termism, because I fell in love with The Precipice by Toby Ord, who I know you work with, and anybody just-
- Will MacAskill
Very close friend of mine.
- Chris Williamson
He's phenomenal, man. But anybody that reads that book, especially the beginning, the premise that he talks about, right? And the premise is he believes that humanity is at a particularly unique, dangerous inflection point in between sort of pre-history and our, um, civilizational inheritance that we could continue on and be lovely and flourishing with. And in that first chapter, he talks about the fact that the huge vast inheritance potential that we have downstream from us is crazily big, and yet almost all of the things that we do now are focused on such a narrow time span. We're not thinking... Even the most long term of long-termism projections don't-
- Will MacAskill
Yeah.
- Chris Williamson
... get into the hundreds of thousands or millions of years like you're talking about.
- Will MacAskill
Uh, exactly. We focus on this tiny window. And, you know, in many cases that makes sense. You can't control what people in the year 3000 have for breakfast. Um, but surprisingly, there are things that we can affect now that do impact people in the year 3000, number one among those being what Toby talks about at length, and where, you know, you can see The Precipice and What We Owe The Future, my book, as kind of complements to each other.
- Chris Williamson
Absolutely, yeah.
- Will MacAskill
Uh, both plowing this very similar furrow. Yeah, one of the things that Toby talks about at length is ways in which we can cause our own extinction. Uh, so obviously asteroids, super volcanoes, there are natural risks that could wipe us all out. Thankfully, we've actually done a pretty good job, at least, of navigating asteroids. Um, Spaceguard, a NASA program, came together spending only on the order of about $5 million per year, but has basically eliminated the risk from asteroids. We now know it's very, very unlikely that we'll get hit by some kind of dinosaur-killer in the next few centuries. But we are creating new risks. So nuclear weapons created an era of new destructive power. And the next generation of weapons of mass destruction could be considerably worse again. Um, engineered pathogens could give us the ability to create new viruses that could kill, you know, hundreds of millions of people, billions of people, perhaps everyone on the planet. And if, you know, a large bioweapons program starts up focused on such things, and we have seen large bioweapons programs in the past, that could be fairly dangerous indeed. Um, and that's exactly the sort of thing that we want to ensure goes to kind of zero as a risk.
- 7:35 – 15:13
Why Our Present Choices Matter
- Chris Williamson
So, how are our actions now important? Is it about investing in, uh, trajectories that people move down in the future? What, what difference does what we do in 2022 have in 20,022?
- Will MacAskill
Yeah. So, uh, I think there are two ways of impacting the long-term future. So one is ensuring we have a future at all, such as by reducing the risk of extinction or reducing the chance of civilization just collapsing and then never recovering. A second way is by making the future better, assuming we do survive: improving future civilization's quality of life. And on the first of these, well, one thing we can do is carefully navigate new technologies. That can mean accelerating defensive, beneficial technology, and it can mean putting in policy such that we either slow offensive technology or just choose not to build it. On accelerating defensive technology, one example is far-UVC lighting. This is a certain small spectrum of light, and if you embed this light in a light bulb and it irradiates a room, just as a normal light bulb does, then it also kills off the pathogens in that room. And this is very early stage research, but it's quite exciting. Um, I think with some foundations I advise, we're going to be funding it to a significant degree, where if this checks out, if it's sufficiently efficacious, if it's sufficiently safe, then we could launch a campaign to ensure this is installed in all light bulbs all around the world. We would have made it very, very hard, near impossible, to have another pandemic. Um, and along the way we would have eradicated basically all respiratory disease. And, you know, that's pretty exciting. That's something that we can do by taking these risks seriously-
- Chris Williamson
(coughs)
- Will MacAskill
... to say, look, we can have an enormous impact, not just on the present generation, but on the course of all future generations to come, creating a safer world for, you know, the next generation.
- Chris Williamson
Is this a particularly crucial time, do you think, in the history of the future?
- Will MacAskill
Uh, yeah, I think there are good arguments for thinking that we're certainly at a very unusual time. Uh, I don't want to make the claim that we're necessarily at the very most important time. Perhaps the next century will be even more important. Um, and I think there were some hugely crucial times in the past as well. But we're at a very unusual time compared to both history and the future. One reason for this is just that the world is so interconnected. So, for most of human history, or a large chunk of human history, there just wasn't a global connection. There were people in the Americas, people in Asia and Africa, people in Australia, and they just didn't know each other at all.
- Chris Williamson
(coughs)
- Will MacAskill
Even within the land mass of Eurasia, you know, in the first couple of centuries AD, the Han Dynasty and the Roman Empire comprised about 30% of the population each, but they barely knew of each other. They were like, you know, tales that one would tell of a distant civilization. Whereas now we're global and we're interconnected. And that means that, say you have a certain message that you want to get out there. In principle, it can get out to the entire world. Or, you know, more darkly, if you want to achieve conquest and domination, you can potentially do so over the entire world. And in the future, again, I mean, we are talking about galactic scale thinking and the James Webb telescope, in the future we'll be disconnected again. If one day we took to the stars and we're in different solar systems, then even to our closest solar system, there and back, communication would take eight years. And at some point in the very distant future, well, actually, different galactic groups will be disconnected such that you could never communicate between one another. Although I'll caveat, that's very far away indeed, about 150 billion years. So hopefully we're still going by then, but no guarantees. Um, so the fact that we're just so interconnected is one way in which the present time is so unusual, and seemingly important, because it means that, you know, we can have this, like, battle of ideas and competition between values. And if one value system took power, then it would take power over everything. And, uh, that's potentially pretty worrying. Uh, a second way in which the present is so unusual is just how fast technological progress is happening. So, for almost all of human history, when we were hunter-gatherers, economic growth, which is, you know, one measure of technological progress, was basically close to zero. Very, very slow accumulation of better stone tools, spear throwers, things like that. The Agricultural Revolution meant that sped up a little bit, as we developed farming, better farming techniques, but we were still growing at about 0.1% per year. Over the last couple of centuries, we've been growing at more like 2 or 3% per year in the frontier economies. And most of that growth is driven by technological advancement that enables us both to have more people and for those people to have better material quality of life. Now, how long can we keep that going for? Uh, you know, sometimes you get this idea of, "Oh, well, future science is just boundless. We can never come to the end of it." But we've only been going properly for a few hundred years, since the Scientific Revolution, and it seems to be hard. Like, if we go at this pace, well, at some point, we will have figured out pretty much everything there is to figure out. But we can think about this economically as well, where, if we keep growing at 2% per year, then after 10,000 years, because of the power of compound growth, the world economy would be 10 to the power of 86 times the size of the current one. That's a very big number. And to put it in context, there are only 10 to the power of 67 atoms within 10,000 light years.
So, we would need to produce an economic output ten million trillion times the size of the current world economy for every atom within reachable distance. And that's just extremely implausible. Um, I really just don't think that's possible. And so that suggests we're at this period of rapid technological growth and rapid technological advancement that cannot continue, and that means we're moving through the tech tree at an unusually fast rate. And that means just a lot of change. It also means that we're at an unusually high density period in terms of developing technology that could be very important, used for good, or very harmful. You know, used either to lead to civilizational collapse, to end civilization, or to allow certain ideologies to gain power and to influence how the course of the future goes.
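To check the compound-growth arithmetic here, a minimal sketch in Python (the 2% growth rate, the 10,000-year horizon, and the 10^67-atom figure are the round numbers quoted above):

```python
import math

growth_rate = 0.02  # 2% annual growth, as quoted
years = 10_000      # horizon quoted above
atoms = 1e67        # rough atom count within 10,000 light years, as quoted

# Compound growth factor is (1 + r)^t; work in log10 to avoid overflow.
log10_factor = years * math.log10(1 + growth_rate)
print(f"Economy grows by a factor of ~10^{log10_factor:.0f}")  # ~10^86

# Output per atom, measured in units of today's entire world economy.
log10_per_atom = log10_factor - math.log10(atoms)
print(f"~10^{log10_per_atom:.0f} world economies of output per atom")  # ~10^19
```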
- 15:13 – 23:49
Possible Futures of Technology & Morals
- Chris Williamson
Early on in our history, we weren't moving sufficiently quickly to be able to develop anything that would be that much of a surprise because it was iterated at a much slower rate. Potentially, or it seems like further into the future, not only are we going to have such advanced technologies that any dangerous technologies can probably be mitigated at least a little bit, but also, again, it's going to slow down. You're gonna have this sort of S-shaped curve, right? Like flat, to hockey stick, to then-
- Will MacAskill
Yep.
- Chris Williamson
... start to flatten off again. And at both of the relatively flat areas of that, not much change, which means therefore-
- Will MacAskill
Yep.
- Chris Williamson
... a relatively low amount of risk. I'm gonna guess this links in with, uh, Nick Bostrom's ball from the urn thing, right? That there's just fewer balls that can be picked out of the urn when the change isn't occurring so quickly.
- Will MacAskill
Exactly. So, if we're asking, is now an unusually important time? Well, yeah. Uh, Nick Bostrom has this analogy of technological progress being like drawing balls from an urn: if you pick a green ball, then it's a very good thing. If you pick a red ball, then it's a very, you know, bad thing. Maybe it's even catastrophic. And we're picking balls from the urn just very quickly. Um, I mean, I'm actually not sure. Like, most balls are both green and red, depending on which way you look at them. (laughs) Uh, most technologies can be used for good or ill. Fission gave us nuclear reactors. It also gave us the bomb. Um, but, you know, we're picking balls out of this urn at a faster rate than we did for almost all of humanity's history, and than we will do for almost all of humanity's future, even if we don't go extinct in the next few centuries.
- Chris Williamson
What are you talking about when you say trajectory change?
- Will MacAskill
Terrific. So, uh, we've talked so far about kind of safeguarding civilization, ensuring that we just make it through the next few centuries. Um, and that's been the main focus of discussion when it comes to existential risk. But we don't wanna merely make sure that we have a future at all. We also wanna make sure that that future is as good as possible. We want to avoid horrific dystopian futures. We want to try and create a future that is positive and flourishing. And so trajectory change refers to efforts that you can make to make the future better in those scenarios where civilization lasts a long time. And how can you do that? Well, a number of ways, but I think the most likely way is to influence the values that will guide the future, like the moral beliefs and norms. At the moment, we are used to living through periods of great moral change. You know, the gay rights movement emerges in the '70s, and it's only a few decades before, um, you know, gay marriage is legalized, and that's, like, rapid moral change compared to history. But that might end in the future, because often moral worldviews or ideologies, by their nature, try to take power and they try to lock themselves in. So we saw this with the rise of fascism, with the Nazis during World War II. Hitler's Night of the Long Knives: gets into power, crushes the ideological competition. Similarly with Stalin's purges: gets into power, crushes the competition. Similarly with Pol Pot and the Khmer Rouge. Because if you're an ideology and you want power, (laughs) then you want to ensure that that ideological competition goes away. And my worry is that this could happen with the world as a whole, where, you know, in the 20th century, we already had two ideologies, fascism and Stalinism, that really were aiming at global domination. Um, and I think, you know, luckily, we were not in a state where the world came close to that, but it's not like that's a million miles away in terms of how history could have gone. And so then when I look to the next century, well, one major worry would be, for example, an outbreak of a third World War, which I think is more likely than people would otherwise think. Uh, we're used to this long period of peace, but that really could change. I think it's something like one in three in our lifetimes. Uh, and the result of that could be a single world government, a single world ideology, and depending on what that ideology is like, it could be very bad indeed. You know, something like what you got in 1984 by George Orwell, or The Handmaid's Tale by Margaret Atwood. Um, and then finally, I think that with the development of new technology, and in particular AI, that ideology could persist for a very long time indeed, potentially just as long as civilization does.
- Chris Williamson
So value lock-in isn't necessarily a bad thing as long as the values that are locked in are values that we would want over the long term.
- Will MacAskill
Uh, right. So if it were the case that the values that got locked in were the best ones, the, you know, the ones that we would all come to given sufficient reflection and 10,000 years to think and reason and empathize, um-
- Chris Williamson
Machine-extrapolated volition has been utilized correctly and all of that.
- Will MacAskill
Yeah. Exactly, yeah. Um, then that would be okay. However, we should appreciate that we today are very, very far away from having the right moral views. I mean, it would be this amazing coincidence if, through all of history, people had abominable moral views, supporting slave owning or patriarchy or atrocious views towards criminals or people of different sexual orientations or people of other nations, but we in the early 21st century, um-
- Chris Williamson
Nailed it.
- Will MacAskill
... in Western countries, yeah-
- Chris Williamson
Nailed it.
- Will MacAskill
... we've figured it all out, yep, um, would be very surprising indeed. And so what we want to ensure is that we don't end moral progress kind of too soon. And if anyone kind of came to power in the world and were like, "Yes, I'm going to lock in my values now. I'm going to ensure that the things I want persist forever," I think that would be very bad. I think there'd be a loss of almost all value we could achieve, because we need to keep morally progressing into the future.
- Chris Williamson
That's what I was going to say. Is there an argument here to be made that optionality, or whatever the word would be, that the regular change of a particular moral structure, should be something which is optimized for? That even if you were to potentially get rid of a moral structure that was more optimal and switch to one which is less optimal, the fact that you've baked in the ability to switch helps you to mitigate some of the dangers of having complete lock-in for the rest of time overall?
- Will MacAskill
That's exactly right. We want to have a world where we can change our moral views over time, and in particular where we can change them in light of good argument or, you know, empathy or just, like, moral considerations. And there might be many points of lock-in in the future. So the design of the first world government would be such a point of lock-in. The first space settlement as well. Like, you can imagine there's a race dynamic and everyone's just trying to get out into space to claim those resources as fast as possible. Um, I actually think that a couple of examples of lock-in in the past were, uh, the colonial era, where you got this, like, single worldview suddenly just spreading all over the world, a kind of Western Christian worldview. Um, earlier times were the first world religions, where again, you've got this kind of bubbling of ideas, then it compresses into, you know, the sanctified holy book and then persists for thousands of years. Or also, actually, just the rise of Homo sapiens as well. There used to be many Homo species. Uh, we used to have a diversity of human species, and then there was only one, because one was a little bit more powerful and, predictably, that means that the competition is destroyed.
- 23:49 – 34:35
Will’s Problem with ‘Happy Birthday’
- Will MacAskill
And so-
- Chris Williamson
What's your problem with the Happy Birthday song?
- Will MacAskill
(laughs) Um, thank you for asking me about that. That's not even in the book. What We Owe The Future does not discuss Happy Birthday. So I use the example of Happy Birthday as an example of bad lock-in, to illustrate the fact that even something becoming universal does not mean that it's necessarily, like, the right solution, as it were, like, the best state of affairs. So Happy Birthday is by far and away the most sung song in the world. It's the song that's used to recognize someone's birthday in at least many of the major languages. And it's terrible. It's a bit slow, so it's a little bit like a dirge. And the emphasis is not on the "you", which you'd expect; it's on the "to". Why on the preposition? It doesn't make sense. Uh, but then it also has an octave leap, um, like Happy Birthday. It's like... And no one can sing it, because people are singing at different ranges. So this is meant to be a communal song, you know, your family gets together, so you want a song with a really pretty small melodic range. But instead it has this interval, and it means that everyone just suddenly changes key at that point. (laughs) And then, like, half your family are singing in one key and half your family are singing in this other key, and it's just a cacophony. And there's no reason at all that it couldn't be much better. You can go on YouTube for people creating new versions of Happy Birthday that sound much better. Um, and why did that happen? Well, there didn't used to be a happy birthday song. In fact, the original lyrics were very different. It's called Good Morning to All, I think. But in, I think, the early days of radio, gramophones and so on, that was what I call a moment of plasticity. So little moments in history when things could go one way, they could go another way, and we can really have an influence over what direction happens. But then after that moment of plasticity, there's a kind of ossification. And so at that moment of plasticity, perhaps any of these songs could have become the one that gained the most popularity. But once Happy Birthday is the most popular song, once it's known that that's what you do to recognize someone's birthday, well, then you're kind of locked in. It's very hard to switch from that. 'Cause if I start singing some different melody, then everyone's like, "What is this?" And in the case of Happy Birthday, you know, there could be some government diktat that says, "Okay, we'll all stop singing Happy Birthday 'cause that doesn't make any sense. We're gonna sing this different melody instead." And perhaps that would work. It's not a sufficiently important issue that I think that would happen. Um, some things like that have happened in the past. So, uh, I think it was Sweden that used to drive on the left side of the road, but its neighbors drove on the right. And so they had one day where they just switched. They were like, "Okay, we're also gonna drive on the right side of the road as well." I think it was 1967. And they had this huge kind of government campaign about it. They had, like, songs about it.
Um, a big song competition, the winner of which was called "Keep To The Right, Svensson". Um, and it was successful. They actually managed to switch from being locked into driving on the left side of the road to driving on the right side of the road. In the case of Happy Birthday-
- Chris Williamson
(coughs)
- Will MacAskill
... I don't think that's going to happen. Um, but yeah, Happy Birthday illustrates how the fact that this song became so widely known, almost culturally universal, does not at all suggest that it's the very best thing, that it's the best way we could have sung happy birthday. And I think Happy Birthday has an important lesson about future dystopia, which is that moral norms (laughs) and moral memes can take over the entire world without necessarily being the best ones. And I think we're living at a moment of plasticity now with respect to what are the moral beliefs of our time, and that may not occur in the future. If we end up with this, like, single world culture, whether that's through conquest or just through the kind of merging of different ideas, and suddenly just everyone believes that X is right and Y is wrong, then what's the pressure to change from that? Um, I don't think there would be much pressure. And if those views are wrong, (laughs) if it's like the melody of Happy Birthday and not the melody of some better song, then that could be very bad, and that would be a bad thing that persists for a very long time.
- Chris Williamson
What it shows is the power of culture to be able to enforce norms. A lot of the time when you think about the future and potentially bad outcomes, you think about the 1984 dictatorial, bureaucratic, evil world government in tall buildings telling people what they're supposed to do. But one of the most powerful enforcement mechanisms is social approval and disapproval, and just grandfathered-in expectations about what you're supposed to do. And one of the biggest problems you come up against is when the people that are in charge, the bureaucratic organization or the government or whatever, maybe even try to change something for the better and that runs counter to the flow-
- Will MacAskill
Mm-hmm.
- Chris Williamson
... that you've got going through the culture; that's when you get uprisings and revolutions. Sometimes kind of, like, idiotic ones. But the point is that culture is so important. We saw this, I use this example all the time, we saw this with the word woke. So think about the fact that woke was originally used, uh, in rap songs. Then it kind of got weaponized, or utilized and adopted, by people on the left that wanted to identify as someone that was kind of socially aware and cared about social justice issues. And then very quickly, no one needed to mandate that woke was going to be the sort of thing that you didn't want to be associated with. But all of the comedians and satirists and people online managed to culturally enforce a norm where woke became such a toxic term that you didn't need to tell people not to use it. No one wanted to be associated with it because it was just such an uncool word. Well, what's cool? Show me the cool mandate or the cool policy. Doesn't exist-
- Will MacAskill
Yeah.
- Chris Williamson
... simply enforced by norms.
- Will MacAskill
Yeah, I mean, a huge amount of what humans do is determined by these cultural attitudes of just what's high status, what's cool and what isn't. And, you know, we can see this. So take conspicuous consumption, the fact that people like to show off how rich they are. Across many different cultures, that is used as a way to show, like, how successful you've been. There are different ways of doing that that are cool in different societies over time. So it could be buying fast cars, having expensive watches or, if we go into the past, having very nice fabrics or things like that. It could be owning slaves. So in the Roman Empire, slaves were the status symbol, and some Roman senators had thousands of slaves. Um, it could be philanthropy. So this is at least true to some extent in the United States, that engaging in philanthropic activity is a way of demonstrating conspicuous consumption. And which of these do we have? I think it's largely a cultural issue. Um, very largely a cultural issue. And that really matters, because (laughs) whether the demonstration of conspicuous consumption, which is, I think, again, a human universal, is done in a particular culture through philanthropy, through buying fast cars, or through slave owning makes a very big difference to the well-being of the world. And I certainly know which I'd prefer. And I think, yeah, social scientists are only really starting to appreciate the importance of culture in the last few decades. It's the sort of thing that hasn't gotten enough attention because, well, it's kind of ephemeral. Um, it's like you can't quantify it or measure it as much as perhaps other things, like laws or economic-
- Chris Williamson
Economics.
- Will MacAskill
... matters. Exactly. But over the course of writing this book, more and more I got convinced that culture is an enormous force. It's generally culture that influences political change rather than the other way around. If you get political change without cultural change, then that often doesn't go well. And in the book, in What We Owe The Future, I focus in particular on the abolition of slavery, which, before writing this book, I would have thought of as just clearly an economic matter, something that was kind of inevitable as our technology improved, slavery just no longer being a viable means of production. But I think I was wrong, actually. I think that the primary driver of the abolition of slavery throughout the world was a cultural change, and that was actually based on people considering moral arguments and making changes on the basis of moral arguments.
- Chris Williamson
(coughs)
- Will MacAskill
Um, and I think in the future, we could have, uh, you know, equally large changes that could or could not occur based on what moral arguments are present.
- Chris Williamson
I've got a fix for the Happy Birthday problem, by the way, uh, which is-
- Will MacAskill
Hit me.
- Chris Williamson
So, uh, I'm not strong enough in my mental capacities to change the actual tune, but you can safeguard yourself against not being able to hit the octave by starting the song one key lower than you think you need to.
- Will MacAskill
Mm-hmm.
- Chris Williamson
Everybody should do this. Everybody starts belting out Happy Birthday pretty close to the upper bound-
- Will MacAskill
Yeah.
- Chris Williamson
... of where they can go in terms of melody. No, no, no, no, no. Bring it back.
- Will MacAskill
(clears throat)
- Chris Williamson
Give yourself some headroom. That's what you need.
- Will MacAskill
Yeah. Okay.
- Chris Williamson
And then when it comes to that, when it comes to that key change-
- Will MacAskill
Everything goes baritone.
- Chris Williamson
... you can nail it. Yeah, yeah, exactly. It's like you've got this sort of beautiful, warm sound.
- Will MacAskill
Okay, well... You heard it here first, life hacks for everyone.
- Chris Williamson
(laughs) That's it, life-
- 34:35 – 42:50
Preventing Human Extinction
- Chris Williamson
It's really interesting to think about what cultures and stuff lock in over a longer term, but presumably this means that we need to safeguard civilization from a bunch of suboptimal futures. And you've got three different ones: extinction, global civilizational collapse, and stagnation. So starting with extinction: do you think we're gonna go extinct? Like, what's gonna happen?
- Will MacAskill
Yeah, I think probably not. Um, so in general, you know, it's hard to kill everybody, thankfully. (laughs) There are eight billion people in a very diverse array of environments with diverse societies, and thankfully, there are very few people in the world that really want to kill everyone. Um, so in the scenarios where that happens, I think something's gone really pretty badly wrong. But the risk is not zero at all. So, if we just consider pandemics: what's the risk of an engineered pandemic? That is, a pandemic that's not from natural causes, but that is the result of design in a lab, where we already have the technology to improve the destructive power of viruses now, and that's just getting better and better every year. And so, you know, it won't be very long before the power to create viruses that could kill hundreds of millions, billions of people, maybe even more, is quite widely accessible. What extinction risk would I put on that? Maybe something like 0.5% this century. And much higher that there will be some engineered pandemic that kills very large numbers of people; maybe that's like 20% or something by the end of the century. Uh, and that's just far too high. (laughs) Like, far, far too high. Uh, because there are things we can do. So I mentioned far-UVC lighting. There's also early detection. So we could just be monitoring wastewater all around the world, scanning it for DNA, ignoring human DNA, and seeing, is there anything new in here? Is there something we should be worried about? And if so, then we can act immediately. There's also just more advanced personal protective equipment: masks that are, you know, super protective, not just the cloth masks you get, but full head coverings that would ensure that if you were a key worker, you would be guaranteed to avoid infection. That's something we could be working on now as well. So yeah. How we respond to this is contingent. It's up to us. We can choose to get that risk way down to zero, where it ought to be.
- Chris Williamson
What's your opinion on Nick Bostrom's Vulnerable World Hypothesis?
- Will MacAskill
Uh, so it is a hypothesis. To explain: the hypothesis is that there could be some technology in our future that gives the power to destroy the world to basically everyone in the world. And if so, then it would seem very likely that the world would end pretty soon. And he gives the analogy: imagine if it was as easy to create, let's say, a doomsday virus as it is to just put sand in a microwave. Then it just seems like we wouldn't last very long, because there are just so many actors each making their own independent choices that someone at some point would do so. Uh, I think it's very unlikely, to be honest, that the future looks like that. The main reason is that we just ban technology all the time. (laughs) So there are many technologies that we don't like. Take human cloning or something: we could clone humans now if we wanted to, and we choose not to on ethical grounds, because it's taboo, and that's kind of globally enforced. Um, in his essay on the Vulnerable World Hypothesis, Nick asks: if we were in this vulnerable world, would that mean that the only solution would be some very powerful surveillance state? I think, like, no. Obviously, that would be a really bad outcome too, and what we can do instead is just, like, have strong international norms about what technologies we do allow to develop and which we don't. One reason for optimism is that humanity is at least somewhat good at recognizing risks and taking action on the basis of them, and actually, in general, being quite cautious with respect to new technology.
- Chris Williamson
Mm-hmm.
- Will MacAskill
And then secondly, technology is often used for defensive measures as well as offensive. So, in general, I think it's possible that the Vulnerable World Hypothesis will turn out to be true, but I think I'm a little more optimistic than perhaps Nick might be.
- Chris Williamson
Given the, uh, risk or potential future of an extreme surveillance state, which would be one potential solution to try and constrain the degrees of freedom-
- Will MacAskill
Mm-hmm.
- Chris Williamson
... that people can do fuckery with whatever it is that's-
- Will MacAskill
Yep.
- Chris Williamson
... that they've got. Um, can you see a potential human future where an effectively long-termist civilization is basically incompatible with democracy?
- Will MacAskill
I mean, you could... So in What We Owe the Future, I talk a lot about why considering both the values side of things and, like, the risk side of things is so important. Because, you know, if you're only focused on the extinction side of the spectrum, then you might think, "Okay, we need some undemocratic civilization that can monitor everyone's behavior so that no one can pose a risk to the future of civilization." And to be clear, Nick Bostrom
... and Toby Ord, they don't believe this, but it is a kind of straw man view that you could come away with. But then you've got this authoritarian state that I think has lost most of the value that we could have had. (laughs) It's not just about making sure that the future is long; it's also about making sure that it's good. Um, and so, you know, is the ideal governance of the distant future democracy? I don't know. Maybe we can do something much better. (laughs) Democracy would have been unheard of for most cultures throughout history. Uh, you know, perhaps there's something we haven't even thought of that, on reflection, we would think is an even better mode of governance. Um, but I think I'd be very worried about something that's more authoritarian, precisely for the reason that I think we could easily lose most value as a result. Where, you know, the great and actually quite fragile thing about the liberal democracy that we have in places like the US and the UK is just that you've got a great vibrancy of debate and discussion, and therefore are able to kind of drive forward moral progress. People are able to, like, have moral campaigns. People are able to criticize the views of people who are in power. And when you think and reflect on human psychology, that's, like, a surprising thing. (laughs) Um, and the fact that it's actually quite rare in history should make you appreciate that it's really something I think should be treasured, and we should try and protect it. Um, and so yeah, anything that's like, "Oh, we need really strong government in order to reduce risks of extinction even further," I'm generally like, "Look, can we get 90% of the risk reduction by other means?" And I think that often you can.
- Chris Williamson
If you're concerned
- 42:50 – 48:25
Is Earth Over-Populated?
- Chris Williamson
about extinction, presumably more people on the planet would spread the risk more, would make complete extinction more difficult because the virus or the AI or the asteroid or the super volcano or whatever simply has got more work to do. And for every human that you add, there is a potential chance that they may survive and there may be a few of them and then maybe they could repopulate. What's your view on whether or not the world has too many or too few people in it?
- Will MacAskill
Uh, it's a great question. Um, and there are considerations on either side of the ledger. So, it's a very common idea that there are too many people: resource depletion and climate change, and you shouldn't have kids because they will contribute to climate change. And, you know, it's true. I contribute to climate change through my life. If I were to have kids, they would as well. Um, but that's only looking at one aspect of what people do, because people do good things too: they innovate, they effect moral change, they contribute to infrastructure, pay taxes that benefit all of society. Um, I'd also say that if someone has a sufficiently good life, then just living is (snaps fingers) good for them as well, and that's a sort of moral consideration we should take into account. So for those reasons, I actually think that on balance we should have more people rather than fewer. The benefits from an additional person, in particular via both technological and moral innovation, as well as the benefits to them, kind of outweigh the negative effects, especially given that you can counteract those negative effects. So, if you're having a child, you can offset their carbon emissions, and actually you can do so for, you know, a few hundred dollars a year. It's a very small fraction of the cost of having a child. Um, but then, putting aside climate change, how does having more people in the world impact extinction risk? It's interesting; there are lots of considerations on either side. So, you're totally right that it spreads the risk. We've got more people in more diverse environments, and that makes us safer. Um, and it's actually a little unclear to me whether extinction risk in the world today is higher or lower than it was 1,000 years ago. Now, 1,000 years ago, we couldn't have blown ourselves up with nuclear weapons. But extinction from nuclear weapons, at least with current arsenal sizes, not with potentially much larger future arsenal sizes, is actually pretty hard, and really pretty unlikely, I think. And that's partly because there are just so many people in the world today, and we have technology that can protect ourselves. Whereas 1,000 years ago, well, there was, you know, a risk of asteroid strikes, and it's not clear that the world would have been able to come back from that. Um, however, I think that for most of the extinction risks we face, the difference between eight billion people and 10 billion people is gonna be pretty small. Like, we already inhabit basically all the inhabitable areas of Earth, and that's the much bigger consideration compared to sheer population size. Um, the biggest consideration, I think, relates to stagnation, where it's relatively plausible to me that technological progress will slow down over the coming century and the centuries to come. Basically, if AI doesn't speed it up, then I think there's a good chance it slows down. And that's because we're just not able to keep adding more and more researchers to work on R&D.
So, you know, further technological progress just gets harder and harder the more we do of it. Um, but in the past, we've solved that by just adding more and more people doing R&D and, like, trying to do technological innovation. That's both by having a larger population, so we have eight billion people alive today, whereas, I think, 200 years ago we had a billion people, but also by increasing the proportion of people devoted to R&D. And we know that population is gonna, like, peak, maybe by about 2050, maybe as late as 2100, we don't really know, and then afterwards decline. And there's only so far you can go by increasing the proportion of your population devoted to R&D. And that does suggest that if we stagnate, we could be stuck in a very risky period, when, let's say, we have very advanced bioweapons but not the technology to defend against them. I think that would be a bad thing from the perspective of civilization. It would increase the risk of us going extinct. And in that respect, there being more people would be helpful: it would give us a longer lead time, it would, you know, help further technological progress. Um, okay, I've spoken for quite a long time, but that being said, I don't think this is, like, a huge issue either way.
- 48:25 – 59:17
Risks of Technological Stagnation
- Chris Williamson
Can you dig in a little bit more to the risk of technological stagnation? You know, uh, why is it that there's kind of an embedded growth obligation within technological progress?
- Will MacAskill
Uh, yeah. So, that was pretty quick. So, many people who think about the long term, or who focus on the future, are often pretty bullish on economic growth, and the argument for this is like, "Oh, well, it compounds over time." You know, even just increasing your growth rate by 1% over 70 years means you've made people 70 years from now twice as rich as they otherwise would be. There's this huge difference, and it compounds over time. That's actually not the reason why I am concerned about technological stagnation, because, as I suggested earlier, I just don't think you can have economic growth that compounds for very long periods of time. You get to the situation where you've got 10 to the power of 86 civilizations' worth of output, and that just doesn't seem plausible. So instead, at some point, we're going to just plateau, and that means that if you speed up economic growth now, well, then you get to the plateau a little bit earlier. And that's kind of good in the intervening years, but it makes no difference over the long term. So, one way of putting this is that I think that, in general, tech progress is kind of inevitable. It probably will keep happening, maybe just at a faster or a slower rate. However, stagnation would be very different, and that's where it's not just that growth slows, but we just stop growing altogether, or the economy even starts to shrink, where we're just, like, not inventing new things that are improving people's quality of life. And that, I think, could be quite bad if we're at this period of high extinction risk. So, as an intuition pump, suppose that technology had completely stagnated in the 1920s. So, we reach the 1920s, and then after that there's no more innovation ever. Would that have been good? Would that have been sustainable? And the answer is no, I think. If we'd stayed at 1920s levels of technology, the only way we could have supported civilization is by burning fossil fuels, and burning them indefinitely until we'd burned all of them. That would have obviously caused extreme levels of climate change and an absolute catastrophe. And then we would have started to regress, because we would have run out of fossil fuels and would no longer have been able to power civilization. It was only further technological development that gave us clean sources of energy like nuclear power and solar panels. And I think we could enter similar unsustainable states in the near future, where, again, bioweapons are the main one. Imagine: now we're at 2050 levels of technology. We have advanced bioweapons, the sorts of things that in principle could kill everyone on the planet, but not the technology to defend against them. And now imagine we're at that level of technology for a thousand years. I really don't rate our chances in that world. The risk, even if it was low, even if it was just 1% per century, well, over a thousand years, 10,000 years, a hundred thousand years, the risk will add up, and over the long run, we're almost certainly doomed.
That consideration suggests that we need to, at least in a measured way, navigate our way through the time of perils, the time of heightened existential risk, so that we can get out the other side and be sufficiently technologically advanced that we aren't facing risks of catastrophe every year that add up over time, and instead reach a position of what my colleague Toby Ord calls "existential security", where actually we've gotten risk to a very low level indeed.
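To make the "risks add up" point concrete, a minimal sketch in Python, assuming an independent 1% chance of catastrophe per century (the illustrative figure quoted above):

```python
# Survival probability under a constant, independent 1%-per-century risk.
risk_per_century = 0.01

for centuries in (10, 100, 1000):  # 1,000 / 10,000 / 100,000 years
    survival = (1 - risk_per_century) ** centuries
    print(f"{centuries * 100:>7,} years: {survival * 100:.4g}% chance of survival")

# ~90% over 1,000 years, ~37% over 10,000 years, ~0.004% over 100,000 years:
# even a small per-century risk compounds toward near-certain catastrophe.
```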
- Chris Williamson
Yeah, you want to get to sort of x-risk mastery in one form or another. One of the things that I always thought about when I considered long-termism, especially after Toby's book, was: why aren't the smart people in x-risk campaigning for unbelievably slow technological development? Let's say that the urn ball analogy works and that there are dangers with every new technology that you develop. Why not take 10,000 years to add in another line of computer code to the AI that we're doing? Why not? You know, if what we've said is true and there's this basically limitless, endless duration for the potential of humanity, and yeah, we need to get off Earth within 2 billion years or whatever, or the oceans are going to boil, but we've got time.
- Will MacAskill
(laughs)
- Chris Williamson
Why not mandate or somehow enforce an insane slowing of technology? But it sounds like one of the reasons that you can't do that is because you need to be able to continue the conveyor belt of technological progress in order to protect ourselves from some of the negative externalities of previous technologies whose existence we've already locked in. Is that right?
- Will MacAskill
(laughs) Yeah, I mean, you make it sound like there's, like, this conveyor belt or something.
- Chris Williamson
Is that, is that not kind of how it is?
- Will MacAskill
Like... Well, we've already started. I mean, we now have, you know, close to 10,000 nuclear warheads ready to launch. We're running that risk every single year. Um, and so hopefully there's a state in the future that does not have such high risks, and then we can just, like, stay in that state of technological advancement. I should say that, yeah, I'm pro-tech growth; not everyone who endorses long-termism is, by any means. Other people actually would want technological progress to go more slowly and more sustainably. Uh, one thing we all agree on is that we want certain technologies to be slowed and others to move faster, so defensive-
- Chris Williamson
What would be some examples of those?
- Will MacAskill
Well, uh, again, we haven't talked much about AI, but things are often easiest to talk about in the bio-risk case, where this far-UVC lighting that, if it works, safely sterilizes a room, that's a defensive technology. Let's have that as fast as possible. Technology that allows you to redesign viruses to make them more deadly? Let's just delay that. (laughs) How about we do that in the future and not just now? And that's an idea that Nick Bostrom calls differential technological development. So basically, I think almost everyone would endorse that paradigm. But then if we're talking about tech or economic growth as a whole, should we go faster, should we go slower? I lean on the faster side. Other people would lean on the slower side. Um, some people think that with AI in particular, we should really just be trying to slow that down enormously if we could. Perhaps even just say, like, "Look, there are certain sorts of AI technology that we shouldn't allow," in the same way that globally we don't allow human cloning. But the main thing is just, you know, when we're taking action, we need to consider the tractability of what we're trying to do. And I think it's extremely hard to slow down tech progress or economic growth as a whole, for the world as a whole. So let's say I become an activist. I dedicate my life to doing this, and I convince the UK to stop growing. That doesn't make a difference in the long term, because all the other countries in the world are going to keep growing. Okay, let's say I'm a superhero activist, and I've actually forgotten how many countries there are in the world, but I manage to convince every country but one to just stop growing. Well, the last country keeps growing. Before long, it's just become the entire world economy, because if you've got compound growth, even if you're a small country growing at 2% per year when all of the other countries are stagnant, within a couple of hundred years you will be the world economy, and the activism of all those other countries will have been absolutely for naught.
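A minimal sketch of that compounding point in Python, under toy assumptions (the 2% rate is the figure quoted above; the starting share of 1% of world output is an assumption for illustration):

```python
# One country grows at 2%/year while the rest of the world stagnates.
growing = 0.01      # assumed initial share: 1% of world output
stagnant = 0.99     # everyone else, frozen by hypothesis
growth_rate = 0.02  # the 2%/year figure quoted above

years = 0
while growing < stagnant:
    growing *= 1 + growth_rate
    years += 1

# Prints ~233 years: within "a couple of hundred years" the lone growing
# country out-produces the rest of the world combined.
print(f"After {years} years, the growing country dominates world output.")
```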
- CWChris Williamson
Yeah, this is-
- WMWill MacAskill
And that's how I feel about the kind of degrowth movement in general, which comes from a very different, um, perspective, kind of environmentalism. Whether or not it's a good idea, um, and, you know, I tend to think that the sentiment is not great, but whether or not it's a good idea, it's also just ultimately futile, because it would need to be a global effort. And I think given the, you know, just next-level difficulty we're talking about in trying to do that, there are just better things we could be doing, such as accelerating the good tech, delaying the bad tech.
- CWChris Williamson
It's a combination of a lack of a God's eye perspective and ability to deploy stuff with some sort of Malthusian trap and a tragedy of the commons for the future. It's like all of that kind of mixed up together-
- WMWill MacAskill
Mm-hmm.
- CWChris Williamson
... to create this sort of terrible, terrible potential.
- WMWill MacAskill
Yeah, exactly. And in general, you know, one thing I really always want to clarify with long-termism and work on existential risk and the stuff that we do with effective altruism in general, and also what I'm talking about in What We Owe the Future, is that I'm p- proposing action on the margin. So it's like, take the world as it is, have a perspective on the world as a whole, and how are resources being allocated? Like, in what way are they misallocated? Should we change them a bit? So when I'm advocating for long-termism, I'm not saying all resources should go to positively impacting the long term. What I am saying is that at the moment, um, you know, 0.01% of the world's resources are focused on representing and trying to defend the interests of future generations. And maybe it should be higher than that, maybe 0.1%. That would be nice. Maybe even as high as 1%. And so similarly, if we're thinking, "Oh, how fast should the world grow or not?" The action-relevant question is, like, what should I do? Should I try and speed it up or slow it down on the margin? That's the action-relevant question, not, "Oh, if I could control the actions of everyone in the world, what should I do?" Because I don't, I don't. All we can ever do is make this little difference on the margin.
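For scale, a back-of-envelope version of those percentages in dollar terms. The 0.01%/0.1%/1% figures are the ones used above; the roughly $100 trillion per year of gross world product is an assumption added purely for illustration:

```python
# Rough dollar values for shares of world resources, assuming gross world
# product of ~$100 trillion/year (an assumed round number, not from the episode).
gwp = 100e12

for share in (0.0001, 0.001, 0.01):  # 0.01%, 0.1%, 1%
    print(f"{share:.2%} of world resources ≈ ${gwp * share / 1e9:,.0f}B per year")
# 0.01% ≈ $10B/yr, 0.1% ≈ $100B/yr, 1% ≈ $1,000B/yr aimed at future generations.
```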
- 59:17 – 1:07:19
The Current Focus on Climate Change
- CWChris Williamson
What do you think about the volume of attention that's being paid to climate change?
- WMWill MacAskill
Uh, over- my overall view is enormously positive about it. So, you know, when you look at, uh, different moral traditions over time, concern for the future is, like, remarkably rare. Um, surprisingly so. Like, you know, for the book, I really went looking. I was like, "Oh, I want to find, you know, Christian thinkers talking about the distant future and, like, what we owe future generations, and, uh, Hindu thinkers and Buddhist thinkers and Confucians." Um, and it, you know, it's not like I did the deepest dive, but it was kind of surprisingly hard. Um, there is act- actually more thought in kind of, um, indigenous, um, philosophy, such as the Iroquois. Um, but even in kind of secular, um, post-Enlightenment thought, it's, like, surprisingly rare. And then over the course of the, you know, 20th century, and then certainly the last few decades, we've had this enormous upsurge from the environmentalist, um, uh, movement that really is standing up for future generations. And they've seen kind of one part of this, which is focus on, you know, stewarding resources, especially irrevocable losses like species loss, and the particular problem of climate change. And I really feel like, oh wow, this is just this amazing and, like, again, kind of contingent thing, that there has been this groundswell of concern for, um... how our actions could be impacting, yeah, not just the present day, but also, um, the world we leave for our kids and our grandkids. And then the thing I just want to say on top of that is, like, okay, yeah, this is this great moral insight. Um, and if that moral insight makes you concerned about climate change, you have a bunch of other things you should be concerned about too. (laughs)
- CWChris Williamson
But dude, that's-
- WMWill MacAskill
(laughs)
- CWChris Williamson
I mean, that really is the main takeaway from reading Toby's book. And he's got that, uh, table of the, uh, chance within the next century of something happening. And a, a supernova explosion is one in a billion or something like that, and-
- WMWill MacAskill
Yeah. Yeah.
- CWChris Williamson
... a supervolcano is one in 10,000 or something. And you start to move your way down, and you get to climate change, which I think is either one in 10,000 or one in 1,000 over the next 100 years. And then you get to engineered pandemics and unknown unknowns and AI safety, and it's one in 10, and I think-
- WMWill MacAskill
Mm-hmm.
- CWChris Williamson
... the overall risk is maybe one in six. Uh, so when I read that, it does make me... I, I understand your point, right, that anything that encourages people to think about the future of the planet and of humanity generally is smart, but I worry that there is a little bit of a, a value lock-in that's going on here, where anything that detracts from a focus on climate change is seen as almost like heresy.
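For a rough sense of how per-risk estimates like the ones recalled above could stack up, here is a sketch that naively treats the risks as independent. The independence assumption is a simplification, the 1-in-30 engineered-pandemics figure is taken from The Precipice rather than this conversation, and Ord's headline 1-in-6 is a holistic judgment, not a product like this:

```python
# Naive combination of per-century existential risk estimates, treating the
# risks as independent (a simplifying assumption; Ord does not do this).
risks = {
    "supernova / stellar explosion": 1 / 1_000_000_000,
    "supervolcanic eruption":        1 / 10_000,
    "climate change":                1 / 1_000,
    "engineered pandemics":          1 / 30,   # figure from The Precipice
    "unaligned AI":                  1 / 10,
}

p_survive_all = 1.0
for p in risks.values():
    p_survive_all *= 1 - p   # survive each risk in turn

total = 1 - p_survive_all
print(f"Combined century risk ≈ {total:.3f} (about 1 in {1 / total:.0f})")
# Prints ≈ 0.131, about 1 in 8. The gap to Ord's 1 in 6 comes from risks
# not listed here (nuclear war, other environmental damage, unforeseen risks).
```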
- WMWill MacAskill
Mm-hmm.
- CWChris Williamson
And that all of our future existential risk conversations have been completely captured by a conversation about climate change. Dude, five or seven years ago, the only people talking about AI safety were getting laughed out of a room.
- WMWill MacAskill
(laughs) Absolutely.
- CWChris Williamson
Like, conferences.
- WMWill MacAskill
Ut- ut- utterly fringe. I mean, I was there. I was, like, in the seminar rooms for the early drafts of Nick Bostrom's Superintelligence. And I was this young guy trying to figure out, um, you know, what should I be buying into, and I was like, "This is..." I mean, I was into it, I was curious about it, and I was helping, and we were having conversations. But it was also, it felt very, very fringe.
- CWChris Williamson
Laughed out of the room, like.
- WMWill MacAskill
And now it's kind of, yeah, now it's much more mainstream.
- CWChris Williamson
That, that's my concern.
- WMWill MacAskill
Uh-
- CWChris Williamson
And that was, that was the takeaway from Toby's book. And also one of the reasons why, I know that you guys do testing on messaging and stuff, and I really, really think that that's one of the most important areas. Look at me bro-sciencing my way into stuff that you guys deconstruct in a very, very fine-tuned way on a daily basis. For me, I'm so compelled by the ideas that came from Nick's work and Toby's work-
- WMWill MacAskill
Yep.
- CWChris Williamson
... and your stuff and Anders and, you know, like, pick your favorite existential risk philosopher.
- WMWill MacAskill
Yep.
- CWChris Williamson
And it blows my mind that that hasn't made even a fragment of the impact of a Swedish girl shouting at adults on a stage.
- WMWill MacAskill
(laughs)
- CWChris Williamson
And it, it-
- WMWill MacAskill
Fa-
- CWChris Williamson
... I find myself being less drawn to, and almost triggered sometimes by, the, uh, environmental movement, because of how much attention is paid to it and how little attention is paid to other x-risks that I think should take priority.
- WMWill MacAskill
Uh, for sure. So, a general thought: if someone's saying, "My cause is x, and it's the only thing that matters, and everything should be evaluated in terms of how it impacts my cause," that's just, like, systematically, um, not a good way of thinking. And social media does not help with this. (laughs) And it happens. One thing, uh, that writing the book, writing What We Owe The Future, has made me appreciate is just that, like, moral change takes time. So, if we look at climate change... I mean, yeah, it's interesting. I hear about climate change, and I feel kind of inspired by it, where I'm like, "Okay, cool. Give us a few decades. (laughs) And maybe we're gonna be there." Because concern about the climate, you know, actually goes back a really long way. So, the first quantitative estimate of the impact of emitting CO2 goes back to a, um, climatologist called Svante Arrhenius in 1896. And his estimate was actually pretty good. Um, it was, like, a little on the high side. And then the term "greenhouse effect" came in the early 20th century. Frank Capra, um, the director of It's A Wonderful Life, had a documentary about climate change in the 1950s. Um, so the level of concern was there. And then, you know, scientific consensus came in the '60s and '70s, and then it's, like, the '90s and noughties that the, um, activism around climate change starts to really happen and build up. And so when I think about AI and bio-risk and so on, I'm kinda like, it's like we're in the 1950s with climate change. (laughs)
- CWChris Williamson
Okay.
- 1:07:19 – 1:22:04
Extinction Vs Collapse
- CWChris Williamson
about before, so you've got extinction, but you've also got unrecoverable global collapse. What's the difference between those two things?
- WMWill MacAskill
Yeah.
- CWChris Williamson
And how would a global collapse happen?
- WMWill MacAskill
Okay, so, um, I said extinction seems pretty hard. I mean, I still gave it 0.5%. That's a lot when you're talking about, you know-
- CWChris Williamson
That's only 0.5% over the next 100 years. It was 20% for the rest-
- WMWill MacAskill
That's over the next-
- CWChris Williamson
(laughs)
- WMWill MacAskill
Oh, yeah. I mean, no, 20% was the risk of some major cata- like, hundreds of millions of dead-
- CWChris Williamson
Okay, okay, okay.
- WMWill MacAskill
... kind of level pandemic.
- CWChris Williamson
Got you.
- WMWill MacAskill
Um, yeah, I mean, terrifi- like-
- CWChris Williamson
Still not good.
- WMWill MacAskill
You could decide whether to trust me or not, but utterly terrifying, I think. Um, yes, so I mentioned, like, killing literally everyone. Um, okay, maybe that's, you know, low, but still a very significant likelihood. Um, a catastrophe that just set us back to pre-industrial levels of technology, so killed 99% or 99.9% of the world's population, you might think that might be much more likely. And it could occur from pandemics, um, could occur from an all-out nuclear war that leads to nuclear winter, um, you know, perhaps could occur via other means. Um, would we come back? Because if not, then the unrecoverable collapse of civilization would plausibly be just as bad, or almost as bad, as outright extinction. You know, we would go back to, like, a farming society again, or even a hunter-gatherer society, and limp along. An asteroid would, you know, eventually wipe us out. It would just be a matter of time, essentially. And, you know, there hadn't been that much work, um, before What We Owe the Future on this question of, how likely is a civilizational collapse? And if there was one, like, would we actually come back? And so I really wanted to do a, a deep dive into this and really try to, yeah, assess the question of, like, "Well, would we come back? And if not, why not?" And I actually came out pretty optimistic. Um, certainly over the course of doing research, uh, for What We Owe the Future, I came out being a lot more positive, thinking it's, like, well over 90%, I think, that civilization would bounce back if we moved back to pre-industrial levels of technology. Um, and that's for a few reasons. So one is that, uh, if you look at local catastrophes, they very rarely lead to collapse. Um, I'm actually not sure I even know any examples, um, of collapse in the relevant sense. Take the fall of the Roman Empire. That was a collapse of a civilization, but firstly, it was only local. There's never been a kind of global collapse of civilization. And secondly, it's not like people disappeared from the region. It's just that, like, technology went backwards for a while, and, um, you know, other indicators of advanced civilization kind of went back for a while. Whereas we would be thinking about that happening on a global scale and going to kind of pre-industrial levels. And even if you take something like the Black Death, um, in Europe, which killed somewhere between, like, 25% and 60% of the population of Europe, there wasn't civilizational collapse. It was, like, enormously tragic, um, with such a loss of life, but civilization, kind of European civilization, kept going. And in fact, there was the Industrial Revolution just a few centuries later. Um, and yeah, in the book, I discuss many other ways in which, locally, um, societies have, like, taken these enormous knocks and then kind of bounced back. So I give the example of Hiroshima as well, where, again, prior to the research, I'd had this image in my mind of Hiroshima, even now, as just this, like, smoking ruin, whereas that's not true at all. Within 13 years, the population was back to, um, what it was before the atomic bomb was dropped on it. Um, now it's this, like, flourishing city. So that's kind of one reason.
The second is just how much, uh, technology there is, um, that could be imitated, or information in libraries that people could use in order to, like, you know, make technological advances again. Where, you know, the early innovators, they were doing this from scratch. There was nothing to copy. Whereas if you're like, "Oh, there's this thing. It seems to, like, burn oil in order to, like, make a motor go around. Maybe we can copy this," it becomes, like, much easier, especially when there's material in libraries too. And then the final consideration is just that, um, if you try and go through a list of, like, what are the things that could stop us from getting to today's level of technology again, you kind of come up short. I think the one that could be decisive is fossil fuels, um, where we've already kind of used up easily accessible oil, and over the course of a few hundred years, we would use up easily accessible coal. Um, but at least for the time being, uh, we have enough in the way of fossil fuels that even after a catastrophe that sent us back to the Stone Age, we would come back, and if we were industrializing again, we'd have enough to get to today's level of, uh, technology, at least.
- CWChris Williamson
What's your idea about coal mines and what we should do with them?
- WMWill MacAskill
(laughs) Yeah. So, um, in the book, I talk about, uh, kind of clean tech in particular as this, like, just very robustly good thing we can do. Um, but the reasons for that aren't always the most intuitive. (laughs) So, you know, there are many reasons, I think, for wanting to keep coal in the ground. Climate change is one. Another is the enormous air pollution and health costs from burning fossil fuels. Um, but one is just, we might need it in the future. Um, if there's a catastrophe that sends civilization back to agricultural or pre-agricultural levels of technology, and we need to re-industrialize, well, we got to where we are by burning prodigious amounts of coal, um, and we might need that again. Uh, and yet we're burning through it.
- CWChris Williamson
(clears throat)
- WMWill MacAskill
And so, yeah, I think the best thing to do is just invest in clean tech. So, um, that's not just solar panels, but also alternative fuels, uh, such as what's called super-hot-rock geothermal, where you drill just really far into the ground and, um, harness the heat from, uh, closer to the mantle. Um, but one idea that I looked into is just, can you just buy coal mines and shut them down? Um, that was the kind of, like, no-brainer (laughs) take. Um, and, like, could you do this at scale? Could you do this as a way of carbon offsetting, where a large group of people get together, they all contribute to a fund that, like, pays for the coal mine to be retired? Um, there are people looking into this. Um, uh, and I commissioned some research to look into it. It seems pretty hard, unfortunately, mainly for regulatory reasons, um, where, uh... I mean, there are just very powerful fossil fuel lobbyists, and they don't like you buying coal mines (laughs) and shutting them down. So, there are kind of use-it-or-lose-it laws, where, um, uh, if you try and buy a coal mine with the purpose of shutting it down, the government just voids your contract, because they say if you buy a coal mine, you have to use the coal. Um, and even if you can get around that, that drives up the price a lot more. Um, people perhaps just shift to other sources of coal. So unfortunately, I'm, like, a little more negative on that particular strategy than I used to be. But, uh, for a while, I was just really taken with it. I just really loved this idea of, like, "We own this coal mine now. We're going to turn it into this museum, um, a museum for obsolete technology." And it'll have, like... You can go around, like a theme park, (laughs) um, in, like, uh, the little carts that you see in Indiana Jones, going along the tracks. Um, I had the whole vision. But, uh, maybe one day I'll still do it, but-
- CWChris Williamson
Well, we've got-
- WMWill MacAskill
... it might not be the most impactful thing.
- CWChris Williamson
We've got those seed banks, right? There's... Is it in Iceland, I think, or in-
- WMWill MacAskill
In Svalbard-
- CWChris Williamson
That's it.
- WMWill MacAskill
... um, in Norway. Yeah.
- CWChris Williamson
And that's got a bunch of, you know, seeds from every plant on the planet. And presumably there must be equivalent digital backups of every book, probably distributed across the world. And they'll be in underground bunkers or on the side of a mountain or something like that, so that if there was some sort of huge collapse, or any kind of existential risk that had a bit of kinetic energy to it, they'd be kept safe. And what that does is it shortcuts the, um, pain and the investment that you need in order to be able to actually find out what to do. You can just go back and read what you need to do.
- WMWill MacAskill
Yeah.
- CWChris Williamson
And then you get to rediscover the technologies. And I, I've never heard anyone else talk about it, the fact that one of the advantages we had was that all of the coal and the oil was relatively close to the surface. And you pick the-
- WMWill MacAskill
Mm-hmm.
- CWChris Williamson
By design, you pick the low-hanging fruit first. But that doesn't future-proof you very well against a collapse and recovery, because if you do collapse and you need to recover, presumably you're no longer going to be able to get the low-hanging fruit, only the higher-hanging fruit-
- WMWill MacAskill
Yeah.
- 1:22:04 – 1:31:42
Our Goal for the Future of Humanity
- CWChris Williamson
and the potential duration that we're moving toward for our genetic inheritance and civilizational future, the risks that we've got, the way that we can mitigate them, how we can push back against them, what's the actual goal? What, what should our goal for the future of humanity be? What are we, what are we even optimizing for?
- WMWill MacAskill
What are we building for? So, I think the key thing I want to say is that we don't know yet, and that's okay. So imagine you're a teenager, and you're wondering, like, "Oh, what's the goal of my life?" And you just want to be as happy as possible, let's say. Now, you might just not know yet, you know? And so what do you want to do? Well, okay, as a teenager you want to not die, 'cause if you die, then, (laughs) um, uh, you know, you're not going to have a flourishing life after that point. Um, so similarly, we wanna make sure we don't go extinct. But then you just want to have lots of possible futures open to you, and be able to have time to, like, figure stuff out, try things out, see what actually is gonna, um, give you the most flourishing life. And that's okay. So you can have a plan that is about getting yourself to a position such that you can figure out what to do for the rest of your life, and I think the same is true for humanity as a whole. We should try to get ourselves to a state where we are not facing existential catastrophe, where we do have many possible futures open to us, where we're able to reflect and deliberate, um, and then make further moral progress, so that we can then collectively figure out, from a kind of much more enlightened, reasoned perspective, what we should do with the, you know, potentially vast future that's ahead of us. And, uh, I call this idea of kind of exploring and trying out, um, different things a morally exploratory society in the book. The kind of limit case of that I call the long reflection, which is, um, where it's like, okay, we've got to a state where, uh, we've kind of solved the most pressing needs. Do we want to immediately rush to just settling the stars with, um, whatever our favorite moral views are? I'm like, "No. We've got time." As we talked about at the very start of this podcast, we've got an awful lot of time, and that means that, before we engage in any activities that lock in a particular worldview, such as space settlement or the formation of a world government, we can really take the time to ensure that we've gotten, you know, to the end of moral progress, that we've really figured out all that we can.
- CWChris Williamson
Are you of the mind that we should move more slowly with moral progress and our considerations for what we should do once we've got ourselves to technological maturity than in technological progress en route to get there?
- WMWill MacAskill
Uh, so I think with technological progress, how fast we should go is kind of tough, um, where... you know, there are these reasons I think why, at the moment, going faster technologically is, like, a little better. It's not kind of that big a deal. I do think that if you can make moral progress faster, you just want to go as fast as you can, (laughs) because you make everything better in the future. It's just that I think that real moral progress might take time. Um, and this might be true for technological progress too. Maybe it takes, um... you know, I've said that we can't keep going as fast as we're currently going, but assuming we slow down, perhaps the whole project takes, you know, millions of years to complete. I don't know. But the same might be true for morality as well. And with technology, there's always an incentive to build better technology. It gives you more power. Um, it means you can do more things, whatever your values are. Uh, so there are, like, always strong incentives to do that. With moral progress, that's not true. Um, if I have all the power and I don't care about having a better moral point of view, there's no law of nature, and there are no real competitive dynamics, that force me to get to a better moral perspective. And so, what I'm really doing in kind of sketching out this idea of the long reflection is to really say: we need to keep working on, you know, getting better morals, getting better, um, values. Uh, we don't want to just, like, jump into the first thing that we think is good, you know, early 21st-century Western liberal-ish kind of moralities. Like, no, we can take our time before engaging in kind of any of these big projects.
Episode duration: 1:34:19