Dwarkesh Podcast
Will MacAskill - Longtermism, Effective Altruism, History, & Technology
EVERY SPOKEN WORD
115 min read · 22,830 words
- 0:00 – 1:18
Intro
- WMWill MacAskill
... you know, we should not think we're at the end of moral progress, and we should not think, "Oh, we should lock in the kind of Western values we have now." Instead, we should think, "We want to ensure that we spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than merely whichever happened to win out." Perhaps if the Industrial Revolution had happened in India rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming. And then academia, I think, has just developed a culture where you don't tackle such problems. (laughs) Partly that's because they fall through the cracks of different disciplines, and partly because they just seem too big, or too grand, or too speculative. The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason, and empathy, and debate, and good-hearted moral inquiry to guide which values we end up with.
- DPDwarkesh Patel
(Instrumental music) Okay. Today, I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.
- WMWill MacAskill
Thanks so much for having me on.
- 1:18 – 8:42
Effective Altruism and Western values
- DPDwarkesh Patel
So my first question is, what is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?
- WMWill MacAskill
Uh, yeah, I think it probably is kind of contingent. Maybe not on the order of "this would never have happened," but at least on the order of decades. Evidence for why Effective Altruism is somewhat contingent is just that similar ideas have been promoted many times during history and not taken off. We can go all the way back to ancient China: the Mohists defended a kind of impartial view of morality and took very strategic actions to try and help all people, in particular providing defensive assistance to cities under siege. Then of course there were the early utilitarians. Effective altruism is broader than utilitarianism, but has some similarities. And then even Peter Singer in the '70s had been promoting the idea that we should be giving most of our income to help the very poor, and hadn't had a lot of traction until the early 2010s, after GiveWell launched, after Giving What We Can launched. What explains the rise of it? I mean, I think it was a good idea waiting to happen at some point. I think the internet helped to gather together a lot of like-minded people in a way that wasn't possible otherwise, and I think there were some particularly lucky events, like Elie meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.
- DPDwarkesh Patel
Hmm. Now, if it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values probably aren't that good, that they're probably mediocre or worse? Because ex ante you would expect to end up with the median of all the values we could have had at this point, and obviously we'd be biased in favor of whatever values we were brought up in?
- WMWill MacAskill
Absolutely. I think taking history seriously and appreciating the contingency of values, appreciating that if the Nazis had won the World War, we would all be thinking, "Wow, I'm so glad that moral progress happened the way it did and we don't have Jewish people around anymore. What huge moral progress we had then." That's a terrifying thought, and I think it should make us take seriously the fact that we're very far away from the moral truth right now. So one of the lessons I draw in the book is, you know, we should not think we're at the end of moral progress, and we should not think, "Oh, we should lock in the kind of Western values we have now." Instead, we should think, "We want to ensure that we spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than merely whichever happened to win out."
- DPDwarkesh Patel
Yeah, so that makes a lot of sense, but I guess I'm asking a slightly separate question, which is: not only are there possible values that could be better than ours, but should we expect our values... I mean, we have this sense that we've made moral progress, that things are better than they were before, or better than in most possible other worlds, in 2100 or 2022, I mean. Should we not even expect that to be the case? Like, should our prior just be that, yeah, these are kind of meh values?
- WMWill MacAskill
I think our prior should be that our values are, you know, as good as one would have expected on average. And then you can make an assessment: are the values of the world today going particularly well? There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming, which I think is a moral atrocity. Having said that, my all-things-considered view is actually that we're doing better than average. Like, if civilization were an early draw, then things would look worse in terms of our moral beliefs and attitudes. Where I think the abolition of slavery, the feminist movement, liberalism itself, democracy, these are all things that we relatively easily could have lost, and are huge gains.
- DPDwarkesh Patel
Hmm. But then if that's true, does that make the prospect of a long reflection kind of dangerous? Because if moral progress is sort of a random walk and we've ended up with a lucky lottery draw, then maybe you're risking regression to the mean if you just have a thousand years of reflection.
- WMWill MacAskill
I think that moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe, and one of them is just what's right, morally speaking: what do our best arguments support? That is a force, and I think it's a somewhat weak force, unfortunately. And the idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason, and empathy, and debate, and good-hearted moral inquiry to guide which values we end up with.
- DPDwarkesh Patel
Okay. So in the book you make this interesting analogy where humans at this point in history are like teenagers. But another common impression people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. So do you think it makes sense to extend the analogy this way and suggest that maybe we should be, you know, Burkean small-l long-termists and reject these inside-view, esoteric threats?
- WMWill MacAskill
Like, I think the Burkean arguments for taking history seriously, and for the moral views that have stood the test of time, those are important arguments to engage with. My view kind of goes the opposite way, actually, which is that we are cultural creatures, and by our nature we're very inclined to agree with what other people think, to agree with tradition, even if we don't understand the underlying mechanisms. I think that works well in a low-change environment. So in the environment we evolved in, things didn't change very much. We were hunter-gatherers in small bands for hundreds of thousands of years, millions of years if you include other Homo species. Whereas now we're in this period of enormous change (laughs), where the economy is doubling every 20 years and new technologies arrive every single year. That is unprecedented, and I think it actually means we should, much more than would have made sense historically, just be trying to figure things out from first principles.
- DPDwarkesh Patel
Hmm. Interesting. But at current margins, do you think that's still the case? Like, if a lot of EA and long-termist thought is first-principles thought, do you think more history would be better than the marginal first-principles thinker?
- WMWill MacAskill
I think two things. So if it's about an understanding of history, then yeah, I actually would love EA to have a better historical understanding, mainly just as an on-the-margin thing. You know, I think the most important subjects if you want to do good in the world in an EA way are philosophy and economics. But we've got those, like, in abundance.
- DPDwarkesh Patel
(laughs)
- WMWill MacAskill
Um, whereas there's very little in the EA community in terms of historical knowledge, and I certainly feel like I've learnt a huge amount over the last few years understanding that better. Should there be even more first-principles thinking? Yeah, probably. I mean, I think that kind of first-principles thinking, or what you might call first-principles thinking, paid off pretty well in the course of the coronavirus pandemic, where from as early as January 2020 my Facebook wall was completely saturated with people freaking out, or at least taking it very, very seriously, in a way that the existing institutions weren't. And they weren't because they were just in this mode of, "Oh, business as usual. Don't panic." They weren't properly updating to a new environment and new evidence.
- 8:42 – 12:57
The contingency of technology
- DPDwarkesh Patel
Hmm. Now, in the book you point out several examples of societies that went through hardship. I mean, hardship is putting it mildly, but you know: Hiroshima after the bomb, Vietnam after the bombings, and Europe after the Black Death. And they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And does it imply a sort of Solow model of growth where, even if bad things happen, you can kind of just rebound and it really didn't matter?
- WMWill MacAskill
Uh, yeah, in economic terms. I mean, I think that's the big difference between economic or technological progress and moral progress, where in the long run at least I think economic or technological progress is very non-contingent. I mean, it's actually fascinating, some historical contingencies you can see in technology. The Egyptians had an early version of the steam engine. Semaphore was only developed very late, yet could have been invented thousands of years in the past. Similarly with Kay's flying shuttle. But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are just so strong that I think we get there in the end in a very wide array of circumstances. And in particular, just imagine there are 1,000 different societies and none are growing but one is. Then in the long run that one becomes the whole economy.
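The thousand-societies point can be made concrete with a quick calculation. This is a minimal sketch of my own, not a model from the conversation; the 2% growth rate and the equal starting outputs are assumptions chosen purely for illustration:

```python
# Toy illustration: 1,000 societies, 999 with static output and one growing
# at 2% per year. Even a tiny compounding edge eventually makes the growing
# society almost the entire world economy.

def share_of_world_economy(years, n_static=999, static_output=1.0, growth=0.02):
    """Fraction of total world output produced by the single growing society."""
    growing = (1 + growth) ** years          # compounding output of the one growing society
    total = growing + n_static * static_output
    return growing / total

for years in (0, 100, 500, 1000):
    print(years, share_of_world_economy(years))
```

At year zero the growing society is 0.1% of the world economy; by year 1,000 it is essentially all of it, which is the structural reason MacAskill gives for growth being much less contingent than moral change.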
- DPDwarkesh Patel
Uh, yeah, it seems that particular example you gave, of the Egyptians having some ancient form of a steam engine, maybe that points towards there being more contingency, because maybe the steam engine comes to many societies but it only gets turned into an industrial revolution in one.
- WMWill MacAskill
In that particular case there's a big debate about whether the quality of metalwork at the time actually made it possible to build a proper steam engine. Um, sorry, I was mentioning those examples to say that historically you actually do get some amazing examples of contingency prior to the Industrial Revolution. I think it's still contingency only on the order of, you know, centuries to thousands of years. And then in the post-Industrial Revolution world, I think there's much less contingency. There are some, but I think it's much harder to see technologies that wouldn't have happened, you know, within decades, if they hadn't been developed when they were.
- DPDwarkesh Patel
Okay. So I guess maybe your general model here is that there are these general-purpose changes in the state of technology, and those are very contingent, and it would be very important to try to engineer one of those. But other than that, it's gonna get done by some guy creating a startup anyway?
- WMWill MacAskill
Uh, no, I mean, I think more generally. So even in the case of the steam engine or semaphore that I was pointing to, which historically seem maybe contingent, I think in the long run they get developed. Where, you know, if the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point, or would similar technologies have been developed that were vital in the Industrial Revolution? And I'm like, yes, because there are very strong incentives for doing so. If you've just got a whole bunch of cultures and they're all on a random walk, and one hits upon, "Hey, we're gonna do industry," like a culture that's into making textiles and doing that in a more automated way, as was true of England in the 18th century, then that economy just takes over the world. And so that's the structural reason, I think, why economic growth is much, much less contingent than moral progress.
- DPDwarkesh Patel
Mm. Okay, so usually when people think of somebody like Norman Borlaug and the Green Revolution, it's like, oh, if you could have done something like that, you'd be the greatest person of the 20th century. Obviously he's still a very good man and everything, but would that not be your view? Like, you think the Green Revolution would have happened anyway?
- WMWill MacAskill
Uh, yeah. So Norman Borlaug is sometimes credited with saving a billion lives. I think he was huge, an enormously important and good force for the world. But I think it's not the case that, had Norman Borlaug not existed, a billion people would have died. Rather, similar developments would have happened shortly afterwards. Perhaps he saved tens of millions of lives, and that's a lot of lives for a person to save. But it's not as many as simply saying, "Oh, this tech was developed, this tech was used, a billion people who would otherwise have been at risk of starvation used his technology." And in fact, even at the time, or not long afterwards, there were similar kind of agricultural
- 12:57 – 18:55
Who changes history?
- WMWill MacAskill
developments.
- DPDwarkesh Patel
Yeah. Okay. So then, counterfactually, what group of people, what kind of profession or career choice, tends to lead to the highest counterfactual impact? Is it moral philosophers, or...
- WMWill MacAskill
Uh, not quite moral philosophers, although perhaps sometimes. I think, you know, there are some examples. So, just sticking with science and technology: if you look at Einstein, his theory of special relativity would have been developed very shortly afterwards. However, his theory of general relativity, I think, was plausibly decades in advance. So you do sometimes get these surprising leaps. But I think we're still only talking about decades rather than millennia. And so who really does make a very long-term difference? Yeah, I think moral philosophers could be one. Like, I think Marx and Engels made an enormous, very long-run difference. So did religious leaders: I think Muhammad, Jesus, and Confucius made an enormous and contingent long-run difference. And moral activists as well: abolitionist campaigners, the Quakers, and, you know, other groups too.
- DPDwarkesh Patel
So if you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Because, I mean, communism lasted less than a century, right? Maybe its longer-run consequences are huge, but...
- WMWill MacAskill
It's all in expectation. So, as things in fact turned out, probably Marx will not be very influential over the long-term future. But that could have gone another way. It's not such a wildly different history in which, rather than liberalism emerging dominant in the 20th century, it was communism. And if it had, it's totally on the table for me that that persists for an extremely long time, where the better technology gets, the better able a ruling ideology is to cement itself and persist for a very long time. And so you can get a set of knock-on effects where, okay, communism wins the war of ideas in the 20th century. Let's say, in the limit, it forms a world government, a world state based around those ideas. Then via anti-aging technology, or genetic enhancement technology, or cloning, or artificial intelligence, it's then able to build a society that literally persists forever in accordance with that ideology.
- DPDwarkesh Patel
Yeah. The death of dictators is especially interesting when you're thinking about contingency, because when Mao dies or when Stalin dies, there are huge changes in the regime, which makes you think the actual individual there was very important, and who they happened to be was contingent and persistent, or at least important in some interesting ways.
- WMWill MacAskill
For sure. If you've got a dictatorship, then you've got a single person ruling the whole of a society, and that means it's just heavily contingent what the views, values, beliefs, and personality of that person are.
- DPDwarkesh Patel
Yeah. So, going back to stagnation: in the book, you're very concerned about fertility, because it seems your model of how scientific and technological progress happens is number of people times average researcher productivity. And if research productivity is declining, and the number of people isn't growing that fast, then that's concerning.
- WMWill MacAskill
It's, yeah, number of people times the fraction of the population devoted to R&D.
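The toy model being discussed can be written down directly. This is my own minimal formalization, not one from the book: MacAskill's two factors (population and R&D fraction), with the declining researcher productivity Patel mentions included as a third factor, and all the numbers below invented for illustration:

```python
# Toy idea-production model: ideas per year = population
# x fraction of population doing R&D x average researcher productivity.

def idea_output(population, rnd_fraction, productivity):
    """Ideas produced per year under the simple multiplicative toy model."""
    return population * rnd_fraction * productivity

# While population grows, it can offset declining per-researcher productivity:
growing_pop = idea_output(8e9, 0.001, productivity=0.5)
earlier_era = idea_output(2e9, 0.001, productivity=1.0)
print(growing_pop > earlier_era)

# With a flat population and falling productivity, idea output falls too:
print(idea_output(8e9, 0.001, 0.25) < idea_output(8e9, 0.001, 0.5))
```

This makes clear why a stagnant population is concerning under the model: once the population and R&D-fraction terms stop growing, declining productivity is the only term left moving.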
- DPDwarkesh Patel
Yeah, thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history, you know, like Venice, Athens, Bell Labs, or even something like FTX, right? Like, there's 20 developers making this multi-billion-dollar company. Do these examples suggest that maybe the organization and congregation of researchers matters more than the actual total amount?
- WMWill MacAskill
Uh, so I actually think the model works reasonably well. Throughout history, you're starting from a very low baseline, a very low technological level compared to today. And most people aren't even trying to innovate, or if they are, it might be in things that we wouldn't now call science and technology; it might be theology. So one argument for why Baghdad lost its scientific golden age is that the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation, in the kind of 10th, 11th century AD. Similarly, one argument for why Britain had a scientific and industrial revolution but Germany didn't is that all of the intellectual talent in Germany was focused on making amazing music, and that doesn't compound in the way that making textiles cheaper does. And if you look at Sparta versus Athens, for example: what was the difference between them? I think it's just that they had different cultures, such that in Athens intellectual inquiry was rewarded. And because they're starting from a much lower base, even just hundreds or thousands of people trying to do this thing that looks vaguely like science, or vaguely like what we now think of as intellectual inquiry, has these enormous kinds of impacts.
- DPDwarkesh Patel
I see. But then if you take an example like Bell Labs, right? So, late 20th century, the low-hanging fruit is mostly gone, but then you have this one small organization that produces six Nobel Prizes, I think. So is that a sort of coincidence and lucky break?
- WMWill MacAskill
Yeah, I wouldn't say that at all. And I should acknowledge that the model where what you're working with is just the size of the population times what fraction of the population you're putting towards R&D, that's a toy model; it's maybe the simplest model you could have. And so Bell Labs, I think, is punching above its weight. You obviously can create amazing things from a certain environment where not only are you getting the very most productive people, but you're putting them in an environment where they're 10 times more productive than they would otherwise be. However, what I would say is that when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society, or
- 18:55 – 26:51
Longtermist institutional reform
- WMWill MacAskill
the sheer size of a population.
- DPDwarkesh Patel
I wanna talk about your paper on long-termist institutional reform. One of the things you advocate in this paper is that we should have one of the houses of the legislature be dedicated to long-termist priorities. Can you name some specific performance metrics you would use to judge, or incentivize, the group of people who make up this body?
- WMWill MacAskill
Uh, sure. Yeah. I mean, the thing I'll caveat with long-termist institutions is that I'm actually pretty pessimistic about them. You know, I have this paper exploring it, but there's just this fundamental issue that if you're trying to represent, or even give consideration to, future people, you have to face the fact that they're not around and they can't lobby for themselves, and so you're gonna have co-option by people in the present. However, you could have this assembly of people who have some sort of legal power. How would you constitute that? My best guess is you just have a random selection from the population. How would you ensure the incentives are aligned? Well, there are things you can try. Like, okay, in 30 years' time, their performance will be assessed by a panel of people who look back and say, "Okay, were the policies that were being recommended here good or not?" And perhaps the people who were part of this assembly have their pensions paid (laughs) on the basis of that assessment. And then, secondly, for the people 30 years on, both their policies and their assessment of the previous futures assembly get assessed by another assembly 30 years after that, and so on. And there is some math and economic analysis showing that under certain conditions this checks out: you have this backwards chaining, where people in 1,000 years' time are evaluating the people in 970 years' time, who are evaluating the people in 910 years' time. And can you get that to work? Maybe in theory. I'm (laughs), again, a little bit more skeptical in practice, but I would love some country to try it and see what happens.
The other thing I should say is that there's some evidence you can get people to take the interests of future generations more seriously just by telling them, "This is your role." There was one study that got people to put on ceremonial robes and act as trustees of the future, and they really did make different policy recommendations than when they were just acting on the basis of their own beliefs or self-interest.
- DPDwarkesh Patel
Yeah. But if you are on that board that is judging these people from 30 years before, is there a metric you would care about most? The expected future GDP growth, or, I don't know, something you think would be most informative about how good those decisions were?
- WMWill MacAskill
Uh, yeah, I mean, there are things you could do. It could be GDP of the country. You could agree on a set of metrics: the homelessness rate, perhaps some expert measure of technological progress. I think you would absolutely want there to be expert assessment of the risk of catastrophe as well. We don't have this at the moment, but you could imagine having a panel of superforecasters who are predicting the chance of a war between great powers occurring in the next 10 years, and that gets aggregated into a war index. I think that would be a lot more (laughs) important an index than the stock market index, and we don't have it. But you could imagine that being fed in as well, because you wouldn't want something which is only incentivizing economic growth at the expense of tail risks.
- DPDwarkesh Patel
Would that be your objection to a scheme like Robin Hanson's, about just maximizing expected future GDP using prediction markets and making decisions that way?
- WMWill MacAskill
Uh, yeah, I mean, I think maximizing future GDP is more an idea I associate with Tyler Cowen.
- DPDwarkesh Patel
It could be any metric, but yeah.
- WMWill MacAskill
Okay, yeah. Then there's Robin Hanson's idea of futarchy, where you've got "vote on values, bet on beliefs": people vote on what collection of goods they want, where GDP might be one of them, but also the unemployment rate, or whatever. Beyond that, it's just pure prediction markets. Again, it's something I'd love to see tried, and I think it's an idea in a vein of speculative political philosophy, reasoning about how a society could be extraordinarily different and really differently structured, that is incredibly neglected. Do I think it'll work in practice? Probably not; most of these ideas wouldn't work in practice. You do have issues when it comes to prediction markets, where they can be gamed or they're simply not liquid enough. It's been notable that since he developed those ideas and really worked on prediction markets, there hasn't been a lot of success with prediction markets, though there has been a fair amount more success with forecasting. Now, perhaps you can solve those things: you could have laws about what things can be voted on or predicted in the grand prediction market, and government subsidies to ensure there's enough liquidity. But overall, I think it's pretty promising. I'd love to see it tried out on, like, a city level or something, and see how it goes.
- DPDwarkesh Patel
Let's take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. You could look at the environmental movement for an example of this, where there are environmental review boards that try to assess the environmental impact of new projects and can reject proposals on that basis. The impact here, at least in some states and in some cases, has been that groups with no plausible interest in the environment are able to game these mechanisms, in some cases to prevent projects that would actually help the environment. And especially with something like long-termism, where it would take a long time to assess the actual impact of anything, policymakers would be tasked with evaluating the long-term impacts. Are you worried that it would be a system that would be really easy to game by malicious actors? And what do you think went wrong with the way environmentalism was codified into law?
- WMWill MacAskill
Uh, yeah, I mean, it's absolutely a worry, as in potentially just a devastating worry, where you create something that's trying to represent future people, they're not actually around to lobby for themselves, so it can just be co-opted. And yeah, my understanding of environmental impact statements is similar, and for similar reasons: it's not like the environment can represent itself, it can't say what its interests are. So what is the right answer there? Again, it's super tough. Maybe there are these speculative (laughs) proposals about having a representative body that assesses these things and is judged by people in 30 years' time. That's kind of the best we've got at the moment, but I think right now we just need a lot more thought to see if any of these proposals would actually be robustly good for the long term, rather than just things that are more narrowly focused. So regulation to have liability insurance for dangerous bio labs is not in any way about trying to represent the interests of future generations, but it's very good for the long term. And so at the moment I primarily think that if long-termists are trying to change the government, let's focus on a fairly narrow set of institutional changes that are very good for the long term, even though they're just not in the game of representing the future. That's not to say I'm opposed to all such things, but there are major implementation problems with any of them.
- DPDwarkesh Patel
I see. I guess we don't know how we would do it correctly. But do you at least have an idea of what went wrong with environmentalism? Like, how could it have been codified better? Why was it in some cases not a success?
- WMWill MacAskill
Uh, yeah, honestly, I just don't have a good understanding of that. I don't know if it's intrinsic to the matter, um, or if you could have had some system that, like, wouldn't have been co-opted
- 26:51 – 29:52
Are companies longtermist?
- WMWill MacAskill
in the longer term.
- DPDwarkesh Patel
Okay, so are corporations the most long-termist institutions we have today? Like, th- their incentive, theoretically, is to maximize, uh, future cashflow, um, w- which is, uh, uh, at least, uh, they explicitly and theoretically have a, a, should have, uh, an incentive to try to do the most good they can for their own company, which im- implies that, yeah, if there's an existential risk, then the company can't be around.
- WMWill MacAskill
Yeah, I don't think so. I mean, I think different sorts of institutions have different kind of rates of decay associated with them. So a corporation, even a corporation that is in the kind of top 200 biggest companies, I think has a half-life of only about 10 years. It's actually, like, they're surprisingly short-lived. Whereas if you look at, say, universities, um, well, you know, Oxford and Cambridge are kind of 800 years old. Um, I think it's University of Bologna is even older. These are, like, very long-lived institutions and you do get, like, Corpus Christi College at, um, Oxford was making a decision about, like, should it have some new tradition that would, like, reoccur only every 400 years and it's like, yeah, that's the sort of decision it makes because it's such a long-lived institution. Similarly then, religions, like, can be even longer lived again. And I think that, like, that kind of natural half-life really affects what sort of decisions a com- like, a company versus a university versus a religion, religious institution would make.
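MacAskill's half-life figure implies survival odds that can be computed directly. A minimal sketch of that arithmetic, assuming simple exponential decay (the 10-year half-life is his stated estimate; the function name and horizons are illustrative):

```python
def survival_probability(years: float, half_life: float = 10.0) -> float:
    """Chance an institution is still around after `years`, assuming
    exponential decay: the probability halves once per half-life elapsed."""
    return 0.5 ** (years / half_life)

# A top-200 company with a ~10-year half-life:
print(survival_probability(10))   # one half-life: 0.5
print(survival_probability(50))   # five half-lives: ~0.03
print(survival_probability(400))  # the 400-year horizon a university plans over
```

On this model, roughly 3% of today's top companies would survive 50 years, and effectively none would last the 400 years over which a college like Corpus Christi can plan a recurring tradition.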
- DPDwarkesh Patel
But does that suggest maybe there's... Is there something fragile and dangerous about trying to make your institution last for a long time? I- i- if companies try to do that and they're not able to?
- WMWill MacAskill
Yeah, I mean, companies are composed of people. You know, is it in any... the interest of a company to last for the long time? It's like, well, is it in the interests of the people who constitute the company, like the CEO and the board and, um, the shareholders, for that company to last a long time? And it's like, no, they don't particularly care (laughs) . Um, at least most, you know, some of them do, but, uh, most don't. Where there's other institutions, yeah, I mean, I think it goes both ways where in some cases, like, this is the issue of lock-in that I talk about at length in What We Owe The Future is that you get these moments of plasticity, the formation of a new institution, whether that's the, you know, Christian Church or, uh, the Constitution of the United States, and that, like, locks in a certain set of norms and that can be really good if the set of norms and laws is, are good. Like, I think the kind of US Constitution, I don't know, looking back, it seems kind of miraculous or something. It was like the first, like, uh, the first democratic constitution, um, uh, as I understand it was, like, created over the period of four months. It really seems to have stood the test of time. Or alternatively, it could be, like, extremely dangerous. There were obviously, like, horrible things in that... I mean, we'll stick with the US Constitution. There were horrible things in there and it was the, like, legal right to slavery was proposed as, like, a constitutional amendment. If that had gone in, that would have been, like, a horrible, a horrible kind of piece of lock-in. And so I think it's hard to answer in the abstract because it really depends on, like, what is the thing that's persisting for the long time?
- 29:52 – 35:47
Living in an era of plasticity
- WMWill MacAskill
- DPDwarkesh Patel
Mm-hmm. So you say in the book that you expect our current era to be a moment of plasticity. Wh- wh- why, why do you think that is?
- WMWill MacAskill
Yeah, I think this specific time's a moment of plasticity for two reasons. One is that, so the world is completely... is, like, unified in a way that's very historically unusual. Um, you can communicate with anyone, basically instantaneously, and there's great diversity of moral views. So we can have arguments, we can fi- you know-... uh, like people coming on your podcast and, like, debate, like, what's morally correct. It's plausible to me that one of, like, many different kind of sets of moral views, like, might kind of win out or become, like, the most popular, um, ultimately. And then secondly, so we're at this period where things really can change. Um, but it's a moment of plasticity because it also at least plausibly could come to an end, where I think there are various ways that in the coming decades or centuries the moral change that we're used to could end. Uh, so if there was a single, um, global culture or world government, um, again like before that, you know, if there was like global communist state or globalist Nat- global Nazi state, um, or other sort of world government that preferred ideological conformity, then combined with technology, I think it becomes kind of unclear. Like, why that would, why would that end over the long term? And I think the key technology here is artificial intelligence, where the point in time, which may be sooner than we think, for all we know, um, where the rulers of the world are, uh, digital rather than biological, that could persist, you know, once you've got that plus kind of global hegemony of a single ideology, then there's not much reason it seems to me for, um, that set of values to, like, change over time. You've got immortal leaders, um, and no competition. And what are the other kind of sources of value change over time? I think they can be accounted for too.
- DPDwarkesh Patel
But i- isn't the fact that we are in a time, uh, of interconnectedness that won't last if we settle space, isn't that a reason for thinking that lock-in is not especially likely? If, if your overlords are many, many, uh, millions of light years away, then, you know, h- how well can they control you?
- WMWill MacAskill
Uh, well, I think the worry that I have is that, um, the control will happen before the point of space settlement. So I think it's totally right that if, you know, one day we took to space and there's many different, um, uh, settlements of different solar systems and they, you know, are pursuing different visions of the good, then I think, like, you know, you've main- you're probably gonna maintain diversity for the very long time. I think it's, like, just given the kind of physics of the matter, I think, like, once a solar system has been settled, then it's very hard for other solar sy- other civilizations to, like, come along and, um, conquer you. At least if we're, like, at a level of technological maturity where, um, you know, there aren't, like, new groundbreaking technologies to be discovered. Um, but I'm worried that the control will happen earlier. Like, I'm worried the control might happen this century within our lifetimes. I don't say that's very likely, but I think it's, like, seriously on the table, 10% or something.
- DPDwarkesh Patel
Mm-hmm. Uh, yeah, so going back to, uh, the long term, with the long term as a movement, there's many instructive foundations that were set up about a century ago, like, you know, Rockefeller Fa- uh, Foundation, Carnegie, uh, Ford Foundations, and they don't seem to be, uh, especially creative or impactful, um, especially today. L- li- what, what do you think went wrong? Wh- wh- why was it there, I- if not value drift, I, I guess just some decay of competence and, uh, leadership and insight?
- WMWill MacAskill
Yeah, I don't have super strong views about those particular examples. Um, but two natural thoughts. One is that for organizations that want to persist a long time and keep having influence for the long time, historically they've tended to specify their goals in far too narrow terms. So one fun example is Benjamin Franklin. He invested, um, uh, 1,000 pounds, uh, for each of the cities of Philadelphia and Boston, um, to pay out after 100 years and then 200 years for different fractions of the su- of the amount invested, but he specified it very p- specifically. It was to, like, help blacksmith apprentices and so on. And it's like, "Oh man, this doesn't make much sense, like, once you're in the year 2000." Um, whereas he could have said something much more general, like, "For the prosperity of people in Philadelphia, for the prosperity of people in Boston." And then it would have had, like, at least plausibly more impact. The second is just maybe a regression to the mean argument, where, um, you know, you have some new foundation and it's doing, like, an extraordinary amount of good, um, as I think the Rockefeller Foundation did. Just over time, if something is exceptional in some dimension, it's probably gonna get more close to average on that dimension, um, just as a matter of, like, you're changing the people who are involved. If you've picked some, if there are some people who are, like, exceptionally competent and farsighted, the next people just statistically are probably gonna be less so.
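The gap between Franklin's 100-year and 200-year payouts comes down to compound interest. A rough sketch, assuming a steady 5% annual return (the rate is purely illustrative, not Franklin's realized return):

```python
def compound(principal: float, annual_rate: float, years: int) -> float:
    """Value of a lump sum after `years` of annual compounding."""
    return principal * (1 + annual_rate) ** years

# Franklin's 1,000 pounds per city, at an assumed 5% per year:
after_100 = compound(1000, 0.05, 100)  # roughly 130,000 pounds
after_200 = compound(1000, 0.05, 200)  # roughly 17 million pounds
print(round(after_100), round(after_200))
```

The second century multiplies the fund about 130-fold again, which is why the choice of goal, narrow versus broad, matters so much more at long horizons.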
- DPDwarkesh Patel
So going back to that dead hand problem, where if you sp- uh, if you specify your mission too narrowly then, yeah, it, it doesn't make sense in the future. Is, is there a trade-off where if you just... if you're too broad, then again you have the ability of future actors, maybe they're malicious or maybe they're just, like, not as smart or as, uh, as creative as you are, to take the movement in ways that you would not approve of? Uh, so if it just like do good for Philadelphia, but then, yeah, it just turns into something that Ben Franklin would not have thought is good for Philadelphia.
- WMWill MacAskill
Uh, yeah, I mean, it depends crucially on what your values and views are, um, where if Benjamin Franklin... and I don't think this was true, but if he was like, "No, I just really care about blacksmith's apprentices and nothing else matters," then he was correct to specify it in another way. But I think as a matter of fact, certainly my own values, but I think more generally, they tend to be quite a bit more broad than that. And then, um, uh, secondly, like, in general, I expect people in the future to be, like, smarter and more capable. Like, that's certainly the trend over time. In which case, like, if, you know, we're sharing similar broad goals and they're implementing it in a different way, then, um, I think probably
- 35:47 – 40:13
How good can the future be?
- WMWill MacAskill
they're right and I'm wrong.
- DPDwarkesh Patel
Let's talk about how good we should expect the future to be. Um, ha- have you come across Robin Hanson's argument that in the future we'll all just be, uh, subsistence-level ems because there'll be a lot of competition and then you- you'll just try to, like, minimize compute per digital person, which will just be a miserable, like, barely living, uh, ba- barely worth living experience, uh, for, for every entity?
- WMWill MacAskill
Uh, yeah, I'm familiar with the argument, but we should distinguish the idea that ems are at subsistence level from the idea that we would have bad lives. So, um, subsistence means that, uh, given their... yeah, you get a kind of balance of income per capita and population growth... um, such that, uh, if they were any poorer then, um, uh, deaths would be, kind of, outwi- outweighing, kind of, additional births. That actually doesn't tell you about their wellbeing, so you could be (laughs) very poor as an emulated being. However, you'd just be in bliss all of the time. Um, that is, like, perfectly consistent with the Malthusian theory, and so it might seem (laughs) still not like... It might still seem fairly far away from the best possible future. Uh, that future still could be, like, very good. Like, those ems, while at subsistence, still could have lives like thousands of times better than ours.
- DPDwarkesh Patel
Speaking of being poor and happy, there, there was a very interesting section in the chapter where you mentioned this study you had commissioned where, um, it, it... You were trying to find out if people in, like, the developing world had lives worth living. And it turns out that 19% of Indians would not want to relive their life every moment, uh, but, uh, I think it was 31% of Americans said that they would not. Uh, yeah, so wh- why are Indians (laughs) seemingly much happier at less than a tenth of the GDP per capita?
- WMWill MacAskill
Uh, yeah, I think the numbers are lower than that, from memory at least. Um, I think it was more like... And it depends exactly on the question asked, but, uh, from memory, it's something more like 9% of Indians, like, wouldn't want to live their lives again if they had the option, and, uh, like 13% of Americans or something. But you are right that, um, uh, on this metric of how many people are happy to have lived, um, how many people think that they are not happy to have lived, um, the Indians we sa- we, um, surveyed, uh, were more, kind of, optimistic about their lives, like, happier with their lives than people in the US were. Um, honestly, I just don't wanna generalize too far from that because we were sampling comparatively poorer Americans, comparatively well off Indians, so perhaps it's just a sample effect. There are also, like, weird interactions with, um, Hinduism and the belief in reincarnation that I think, like, could, um, you know, could just, like, mess up the kind of generalizability of this as well. On one hand, yeah. So I basically don't wanna, like, draw any strong conclusions from that. But it is pretty striking as a piece of information given that normally what you find when you look at people's wellbeing is that richer countries are, you know, considerably happier than poorer countries, on average at least.
- DPDwarkesh Patel
Yeah, I, I guess, but y- you do generalize in a sense that y- you use it as, uh, evidence that, uh, th- that most lives are worth living, that most lives today are worth living, right?
- WMWill MacAskill
Yeah, exactly. So I put together various, um, bits of evidence where, um, apro- very approximately, like 10% of people in the United States, 10% of people in India, um, seem to think that their lives are net negative. You know, they wouldn't... They think it... They contain more suffering than happiness. They wouldn't want to be reborn and live the same life if they could. Um, and if you look at, like, other studies as well, um, like, there's another sc- study that just looks at people in United States or United States and other generally rich countries, and asks them about, uh, how much of their conscious life they'd be willing... they would want to skip if they could, where by skipping it just means like you blink and then you come to the end of whatever activity you're engaging with. So perhaps I hate this podcast so much that I would just rather be unconscious than be talking to you, in which case I would have the option of skipping, obviously not to. Um, but I'd have the option of skipping and, uh, you know, it would be 30 minutes later and it would all be done. If you look at that and then also ask people about, like, the trade-offs they would be will- willing to make as a measure of intensity of how much they're enjoying or how much they're not enjoying a certain experience, you get the conclusion that, like, yeah, from memory again, a little over 10% of people, uh, were... On, on balance regarded their life as, um... That, that day, in fact, that w- that was being surveyed, as worse than if they'd been unconscious the
- 40:13 – 46:31
Contra Tyler Cowen on what’s most important
- WMWill MacAskill
entire day.
- DPDwarkesh Patel
Ju- jumping topics here a little bit. On the 80,000 Hours podcast, you s- said that you expect scientists who are explicitly trying to maximize their impact, that, uh, trying to do so might have an adverse impact because, uh, y- yeah, they might be ignoring the foundational, uh, research that w- wouldn't be obvious in this way of thinking, but might be more important. Do you think this could be a general problem with longtermism, that if you're, like, really trying to, like, find the most important things that are important long term, you, you might be missing things that... Yeah, would- wouldn't it be obvious thinking this way?
- WMWill MacAskill
Uh, yeah, I think it's a risk. So, um, among the ways that people could argue, you know, against my general set of views, one way that I find com-... So, you know, I argue that in general we should be doing, like, fairly specific and targeted things like, uh, trying to make AI safe, um, trying to help govern, um, the rise of AI, um, trying to reduce, like, worst case pandemics that could kill us all, trying to prevent a Third World War, trying to ensure that good values, um, are promoted and avoid value lock-in. Um, but s- what, what some people could argue and people like Tyler Cowen, Patrick Cal- Calison- Collison, um, I think, like, take this line is, man, it's just very hard to predict the kind of future, the, like, future impact of your actions, and it's kind of a mug's game to even try. So instead, what you should do is just look at, like, what things have had... Done loads of good kind of consistently in the past and, um, try to just do the same things. And then they in particular might argue that that means technological progress. It might mean boosting economic growth. Yeah, I guess, like, I just dispute that, um, but it's not something I feel like I can give a completely knockdown argument to 'cause it's about the, you know... When will we find out who's right? Like, maybe in, you know, a thousand years time or something. (laughs) Um, but I just... One piece of evidence, um, is just, like, the success, uh, of forecasters in general. Um, again, the fact that like... I mean, this also is true for Tyler Cowen, but like, you know, people in effective altruism, um, were just realizing that, uh, the coronavirus pandemic was gonna be a really big deal from a, like, very early stage, was worrying about pandemics, like, far in advance. Um, I think there are some things that are just, like, actually quite predictable. So, Moore's Law has held up for over 70 years. 
Um, I think the idea that, like, AI systems are gonna get much, much larger, um, the leading models are gonna get more and more...... um, powerful. That's like on trend. Similarly, the idea that like we will be, soon be able to develop viruses of like unprecedented destructive power. Again, that's just like, I think that's not actually that controversial a claim. And so even though I think that, um... Yeah, for loads of things, it's just very hard to predict and they're gonna be like tons of surprises. But there are some things, and I think especially when it comes to like fairly long-standing technological trends, where we really can make pretty reasonable predictions, at least about like the range of possibilities that are s- really on the table.
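The Moore's law point is a simple fixed-doubling trend, and its implications are easy to compute. A small sketch, assuming the conventional doubling period of roughly two years (the function name and horizons are illustrative):

```python
def trend_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth multiple implied by a fixed doubling period, Moore's-law style."""
    return 2.0 ** (years / doubling_period)

# Doubling every ~2 years compounds enormously over decades:
print(f"{trend_factor(20):,.0f}x over 20 years")  # 2^10 = 1,024x
print(f"{trend_factor(50):,.0f}x over 50 years")  # 2^25 = 33,554,432x
```

This is the sense in which long-standing technological trends constrain the range of futures really on the table: even rough extrapolation of the exponent dominates everything else.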
- DPDwarkesh Patel
But, uh, the, the... It kinda sounds like you're saying, uh, the things we know are important now are important now. Th- the... If, if something d- did turn out like a thousand years looking back to be the, uh, very important, yeah, it, it wouldn't be salient to us now.
- WMWill MacAskill
What I was saying with, um, me versus Patrick Collison and Tyler Cowen, like who is correct? Well, in some sense you can... We will only get that information (laughs) in like a thousand years time, um, because we're talking about which strategy is gonna have a bigger impact on the long term. We might get like suggestive evidence earlier. So if we're... If kind of me and others, um, engaging in long-termism are making kind of specific, measurable forecasts about what is gonna happen with AI or advances in biotechnology and then are able to take action such that we are relatively clearly reducing certain risks, I think that's like pretty good evidence in favor of, um, our strategy. Whereas if in contrast they're doing like, well, all sorts of stuff, like not really trying to make, have like firm predictions about what's gonna happen, but then things just pop out of that where we think, "Oh, that was like really good from a long-term per- future perspective." You know, after, let's say we measure this kind of in 10 years time, well that would be good evidence for their view.
- DPDwarkesh Patel
What you were saying earlier about the contingency in technology implies that even given their worldview, th- uh, maybe you should think that technological... So e- even if you're trying to maximize what in the past has had the most impact, if what's had the most impact in the past is changing values, then y- yeah, th- then economic growth might not... Might be the most important thing or like trying to change the rate of economic growth.
- WMWill MacAskill
Yeah, I mean, I really do take seriously the argument of like, look at how people have acted in the past, f- especially for people who are trying to make a long-lasting impact. What things did they do that made sense and whatnot? So towards the end of the 19th century, John Stuart Mill, um, and, uh, the other early utilitarians had this like long-termist little wave where they started t- taking the interests of, um, future generations very seriously. And their main concern was that Britain would run out of coal and theref- therefore future generations would be impoverished. And it's pretty striking because they had, um, a very bad understanding of how the economy works. Um, they, you know, hadn't... Didn't predict that well, we would be able to transition away from coal, um, because of continued innovation. And secondly, they had like enormously wrong views about how much coal and fossil fuels there were in the world. And so that particular action just didn't make any sense given like what we know now. In fact, that particular action of trying to keep coal in the ground, given, um, Britain at the time where, to be clear, we're talking about much lower amounts of coal, so small amounts of coal. So the climate change effect is like not... Is kind of negligible at that level. You know, it actually probably would've been harmful. But we could look at other things that John Stuart Mill could have done, such as like promoting better values. He like campaigned for, um, uh, women's suffrage. He was the first British MP, I think, in fact, even the first politician in the world, um, to promote women's suffrage. That seems to me (laughs) pretty good. That seems to have stood the test of time. And you know, that's one historical data point, but like potentially we can draw a, a kind of more general lesson there.
- 46:31 – 52:29
AI and the centralization of power
- WMWill MacAskill
- DPDwarkesh Patel
Hmm. Do you think the ability of y- global policymakers to come to a consensus is on net a good or, or a bad thing? I mean, on, on the positive, maybe it helps prevent some dangerous tech from taking off, but yeah, on the negative side, it prevented human challenge trials, maybe it causes some sort of lock-in in the future. On, o- on net, like what, w- what do you think about that trend?
- WMWill MacAskill
Yeah. The question of global integration, you're absolutely right, it has two... It's, it's double-sided where on the one hand it can help us reduce, you know, global catastrophic risks. So the fact that the world was able to come, come together and ban chlorofluorocarbons was, you know, a... One of the great events of the last, uh, 50 years allowing the hole in the ozone layer to repair itself. Um, but on the other hand, like if it just means we all converge towards some kind of monoculture and we lose out on diversity, well that's like... Uh, yeah, that's like potentially pretty bad. We could actually lose out on like most possible va- most possible value that way. Um, and I think the solution is like you do the b- good bits and don't have the bad bits. So, um, you know, in a liberal constit-... You know, you can have like... You can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables like a flourishing diversity of, um, moral thought, um, and different ways of life. And so similarly in the world, you could have very strong, you know, regulation and treaties in... Just that deal with certain global public goods like mitigation of climate change, um, prevention of development of like, um, the next generation of weapons of mass destruction, without thereby having like some very strong arm like global government that implements, um, you know, a particular vision of the world. Which way are we going? At the moment it seems to me like we've been going in like a pretty good and like not too worrying direction, but I think that could change.
- DPDwarkesh Patel
Yeah. I- i- it seems the historical trend is when you have a federated, uh, political body th- that, uh, w- where even if constitutionally the, the, the central power is constrained, that over time the, uh, th- like they, they just tend to gain more power. Like y- you can look at the US, you can look at the European Union, but yeah, th- that, that seems to be the trend.
- WMWill MacAskill
Uh, yeah, and I think that's like...... again, depending on like the culture that's embodied there, it's, um, potentially a worry. It might not be if the culture itself is like liberal and promoting of moral diversity and moral change and moral progress. Um, but that needn't be the case.
- DPDwarkesh Patel
Y- now your theory of moral change implies that after a small group starts advocating for a specific idea, it may take like a century or more before that idea reaches common purchase. To the extent that you think this is a very important century, um, I- I know you have disagreements about that, uh, with- with others, but yeah, to the extent that's true, does that mean that maybe there just isn't enough time for long-termism to gain, uh, gain power that way, by changing moral values?
- WMWill MacAskill
Yeah, I mean, there are lots of people I know and respect very well who think that artificial general intelligence will, um, very, very likely lead to, you know, singularity level, um, technological progress, um, so extremely rapid kind of rates of technological progress, and that that will happen more likely than not within the next 10, 20 years. Um, and if so, then you're right. Value changes are, um, something that pays off kind of slowly over time. I mean, in the world today... So I talk about moral change taking centuries. That's definitely more true historically. I think we can have much faster change. So, you know, the growth of the effective altruism movement is something I know well, um, where that's growing at something like 30% per year. Compound returns mean that like, actually it's not long. You know, that's not growth, that's not change that happens on the order of kind of centuries. I think if you look at other moral movements like, um, the gay rights movement, very fast, um, very fast moral change by historical standards. So I think yes, if you're thinking, look, we've got 10 years till the end of history, then, um, probably don't just very broadly try and promote better values. But I think we should have at least a very significant probability mass on the idea that we will not hit some like historical end this century, and in those worlds then, you know, promoting better values, um, could pay off like very well.
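The compounding behind that 30% figure can be made concrete. A minimal sketch (the growth rate is MacAskill's estimate from the interview; the target multiples are illustrative):

```python
import math

def years_to_multiply(target_multiple: float, annual_growth: float = 0.30) -> float:
    """Years for something growing at `annual_growth` per year to reach
    `target_multiple` times its current size."""
    return math.log(target_multiple) / math.log(1 + annual_growth)

# At 30% per year, big changes arrive on decade, not century, timescales:
print(round(years_to_multiply(10), 1))   # ~8.8 years to grow 10x
print(round(years_to_multiply(100), 1))  # ~17.6 years to grow 100x
```

This is the sense in which a movement growing at that rate does not need centuries to matter.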
- DPDwarkesh Patel
Have you heard of Slime Mold Time Mold's potato diet?
- WMWill MacAskill
I have indeed heard of Slime Mold Time Mold's potato diet, and I was tempted, I was tempted as a gimmick to try it. Um, but they're onto something because, um, as I'm sure you know, the potato is close to a superfood and you could survive indefinitely on just buttery mashed potatoes if you occasionally supplement with something like, uh, lentils or oats.
- DPDwarkesh Patel
Mm-hmm. Yeah, okay, interesting. Uh, a qu- a question about your career. W- why are you, uh, still a professor? Does it, does it still allow you to do the things that you would have otherwise have been doing, like converting more SBFs and, uh, making, um, moral philosophy arguments for EA or... Yeah, c- c- curious about that.
- WMWill MacAskill
Uh, yeah, I mean, it's fairly open to me, um, what I should do, but my best guess, and, you know, I do spend significant amounts of time, um, co-founding organizations or being on the board of those organizations I've helped to set up. Um, and more recently, um, yeah, working very closely with the Future Fund, you know, it's, um, SBF's new foundation, um, and helping them do as much good as possible. That being said, if there's like a single best guess for what I ought to do kind of longer term, and certainly that plays to my strengths better, you know, it's like developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's, um, understandable and gets more people to, you know, get off- off their seats and try to do a lot of good, and especially for the long term. Um, and I think I've had like a lot of impact that way. And from those purp- from that perspective, um, uh, having an Oxford professorship
- 52:29 – 57:02
The problems with academia
- WMWill MacAskill
is pretty helpful.
- DPDwarkesh Patel
Sure, yeah. Uh, by the way, why- why- why do you think that there's... You mentioned in the book and elsewhere that there's a scarcity of people thinking about these big picture questions about, yeah, how- how contingent is history? How... Are people happy generally? Like, wh- wh- are these just questions that are too hard for other people to... Or they just don't care enough? Like, what's going on? Why aren't there more people talking about this?
- WMWill MacAskill
I just think there's many, many issues that are enormously important but are just not incentivized basically anywhere in the world, where, um, companies don't incentivize work on them because they're like too big picture. So some of these are like, yeah, is the future good rather than bad? If there was a global civilizational collapse, would we recover? You know, how likely is a centuries-long stagnation? There's almost no work done on any of these topics. And, yeah, companies aren't incentivized to tackle things so grand in scale. And then academia, I think it's just developed a culture where you don't tackle such problems in academia (laughs) . Partly that's because they fall through cracks of different disciplines, and partly because they just seem too big or too grand or too speculative. Um, whereas academia is much more in the mode in general of like making these kind of incremental in- gains in our understanding. But it didn't always used to be that way. Like, if you look back before the kind of institutionalization of, um, academic research, philosophers would have all... You know, you weren't a real philosopher unless you had some grand unifying theory of like not just ethics and political philosophy, but also metaphysics, and logic, and epistemology, and, um, th- probably the natural sciences too, like economics. And, uh, you know, I think... I'm not saying that like all of academic inquiry should be like that, but, um, should there be at least some people whose role is to like really think about the big picture? And I think, yes.
- DPDwarkesh Patel
Mm-hmm. Will I be able to send my kids to MacAskill University? Uh, what- what's the status on that project?
- WMWill MacAskill
Um, I'm really pretty interested in the idea of having... Yeah, creating a new- n- new university. There is a project that is, um, I've been in discussion about with another person who's like fairly excited about making it happen. Will it go ahead? I mean, time will see. Time will tell. Um, but yeah, I just think you can both do education like far, far better than currently exists. I also think you can probably do research far, far better than currently exists. Um, it's extremely hard to kind of break in, especially like creating something that's very prestigious, because the leading universities are almost all kind of hundreds of years old. But, like, maybe it's possible and I think it would, could generate enormous amounts of value, um, if we were able to pull it off.
- DPDwarkesh Patel
Yeah, yeah. Okay, uh, excellent. All right. So the book is What We Owe The Future, and I understand pre-orders help a lot, right? So yeah, it was such an interesting read because, I mean, how often does somebody write a book about even the questions they consider to be the most important, even if they're not the most important questions? Um, yeah, it's this kind of big-picture thinking, but you're also looking at, uh, very specific questions and issues that come up. It was just a super interesting read.
- WMWill MacAskill
Great. Well, thank you so much.
- DPDwarkesh Patel
A- anywhere else they can find you or any other information that they might need to know?
- WMWill MacAskill
Uh, yeah, sure. So What We Owe The Future is out on August 16th in the U.S. and 1st of September in the United Kingdom. If you want to follow me on Twitter, I'm @WillMacAskill. If you want to try and use your time or money to do good, um, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income, 10% or more, to the charities that do the most good, and it has a list of recommended charities. 80,000 Hours, if you want to use your career to do good, is a place to go for advice on what careers really have the biggest impact, and they provide one-on-one coaching too. And so, uh, yeah, if you're feeling inspired, if you think, "Look, I actually really want to do good in the world, I care about future people and I want to help make their lives go better," then as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the sources you can go to and, uh, get involved.
- DPDwarkesh Patel
Awesome. Thanks so much for coming on the podcast. This is a lot of fun.
- WMWill MacAskill
Thanks so much. Yeah, I loved it. Cool.
- DPDwarkesh Patel
Thanks for watching. I hope you enjoyed that episode. If you did and you want to support the podcast, the most helpful thing you can do is share it on social media and with your friends. Other than that, please like and subscribe on YouTube and leave good reviews on podcast platforms. Cheers! I'll see you next time. (music)
Episode duration: 57:02
Transcript of episode SMEfl5maB8k