Dwarkesh Podcast

Fin Moorhouse - Longtermism, Space, & Entrepreneurship

Fin Moorhouse is a Research Scholar and assistant to Toby Ord at Oxford University's Future of Humanity Institute. He is the cohost of the Hear This Idea podcast.

Learn about Fin, the blog prize, and Hear This Idea: https://www.finmoorhouse.com/
Podcast website: https://www.dwarkeshpatel.com/p/fin-moorhouse
Apple Podcasts: https://apple.co/3cCQQbD
Spotify: https://spoti.fi/3vi2XS5
Follow me: https://twitter.com/dwarkesh_sp
Follow Fin: https://twitter.com/finmoorhouse

Timestamps:
0:00:00 Preview
0:01:04 Introduction
0:03:39 EA Prizes & Criticism
0:10:41 Longtermism
0:13:46 Improving Mental Models
0:21:44 EA & Profit vs Nonprofit Entrepreneurship
0:31:40 Backtesting EA
0:36:48 EA Billionaires
0:39:26 EA Decisions & Many Worlds Interpretation
0:51:40 EA Talent Search
0:53:32 EA & Encouraging Youth
1:00:11 Long Reflection
1:04:50 Long Term Coordination
1:22:00 On Podcasting
1:24:34 Audiobooks Imitating Conversation
1:27:58 Underappreciated Podcasting Skills
1:39:02 Space Governance
1:43:03 Space Safety & 1st Principles
1:47:38 Von Neumann Probes
1:51:06 Space Race & First Strike
1:52:39 Space Colonization & AI
1:57:30 Building a Startup
2:00:02 What is EA Underrating?
2:11:01 EA Career Steps
2:16:10 Closing Remarks

Fin Moorhouse (guest) · Dwarkesh Patel (host)
Jul 27, 2022 · 2h 20m

EVERY SPOKEN WORD

  1. 0:00–1:04

    Preview

    1. FM

      ... the scope for, like, what we could achieve is, like, really extraordinarily large. (laughs) Like, maybe kind of larger than most people kind of typically entertain. And having this kind of property of being, like, anti-fragile with respect to being wrong, like, really celebrating and endorsing changing your mind in a kind of loud and public way. But you can also do a thing which is try to make a lot of money and just, you know, make a useful product and then use the success of that first thing to then just think squarely, like, "How do I just do the most good?" So you have, like, $10 million in the bank and you make another $10 million, does your life get twice as good? (laughs) Obviously not, right? If, on the other hand, you just care about, like, making the world go well, (laughs) then the world's an extremely big place and so you basically don't run into these diminishing returns, like, at all. There's this question of, like, if the many worlds view is true, what, if anything, could that mean with respect to questions about, like, what should we do or what's important?

    2. DP

      (instrumental music)

  2. 1:04–3:39

    Introduction

    1. DP

      Today I have the pleasure of interviewing Fin Moorhouse, who is a research scholar at Oxford University's Future of Humanity Institute, and he's also an assistant to Toby Ord and also the host of the Hear This Idea podcast. Um, Fin, I know you've got a ton of other projects under your belt, so do, do you wanna talk about all the different things you're working on and how you got into EA and this kind of research?

    2. FM

      I think you nailed the broad strokes there. I think, yeah, I've kind of failed to specialize in any particular thing and so I found myself just dabbling in projects that seem interesting to me, trying to help get some projects off the ground and just doing research on, you know, things which seem maybe underrated. I probably won't bore you with the, the list of things. And then, yeah, how'd I get into EA? Actually also a fairly boring story, unfortunately. I really loved philosophy. I really loved kind of pestering people by asking them all these questions, you know, "Why are you not... Why are you still eating meat?" Read kind of Peter Singer and Will MacAskill and I realized I just wasn't actually living these things a- these things out myself. I think there's some just, like, force of consistency (laughs) that pushed me into really getting involved. And I think the second piece was just the people. Um, I was lucky enough to have this student group where I went to university and I think there's some dynamic of realizing that this isn't just a kind of free floating set of ideas, but there's also just, like, a community of people I re- really get on with and have all these, like, incredibly interesting kind of personalities and interests, so, um, th- those two things, I think.

    3. DP

      Yeah. And then so w- what was the process like of... (laughs) I know a lot of people who are vaguely interested in EA, but not a lot of them then, uh, very quickly transition to, you know, working on research with top EA researchers.

    4. FM

      Mm-hmm.

    5. DP

      So, uh, yeah, just walk me through how you ended up where you are.

    6. FM

      Yeah. I think I got lucky with the timing of the pandemic, which is not something I suppose many people can say. I did my degree, I was quite unsure about what I wanted to do. There was some option of taking some kind of close to default path of maybe something like, you know, consulting or whatever, and then I was kind of, I guess, forced into this natural break where I had time to step back and I, you know, I guess I was lucky enough that I could afford to kind of spend a few months just like figuring out what I wanted to do with my life. And that space was enough to, like, maybe start, like, reading more about these ideas, also to try kind of teaching myself skills I hadn't really tried yet, so try to, you know, learn to code for a lot of this time and so on. And then I just thought, "Well, I might as well wing it. There are some things I could apply to. I don't really rate my chances, but the cost of applying to these things is so low it seems worth it." And then, um, yeah, I guess I got very lucky and (laughs) here I am.

  3. 3:39–10:41

    EA Prizes & Criticism

    2. DP

      Awesome. Okay. So let's talk about one of these things you're working on, which is that you've set up, um, and are going to be helping judge these prizes for, uh, EA writing. One is you're giving out, uh, five prizes of $100,000 each for, um, blogs that discuss effective altruism-related ideas; another is, uh, five prizes of $20,000 each for criticism of EA ideas.

    3. FM

      Mm-hmm.

    4. DP

      So, uh, uh, talk more about these prizes, wh- why is, why, uh, why is now an important time to be, uh, talking about and criticizing EA?

    5. FM

      That is a good question. Um, honestly I'm reluctant to frame this as, like, me personally, like (laughs) -

    6. DP

      Okay, for sure. (laughs)

    7. FM

      ... but yeah, I certainly have helped set up these, um, initiatives. Yeah.

    8. DP

      Well, so, so, uh, I heard, I, I heard on the inside that actually you've been, uh, you've been pulling a lot of the weight on these projects.

    9. FM

      Certainly, yeah. I've, um, found myself, uh, with the time to kind of like get these things over the line, which I'm, yeah, I'm pretty happy with. So yeah, the criticism thing, let's start with that. I want to say something like: in general, being receptive to criticism is just, like, obviously really important, and if, as a movement, you want to succeed, where succeed means not just, like, achieve things in the world, but also end up having beliefs as close to correct as you can get, then having this kind of property of being, like, anti-fragile with respect to being wrong, like really celebrating and endorsing changing your mind in a kind of loud and public way, that seems really important. And so, I don't know, this is just, like, a kind of prima facie obvious case of wanting to incentivize criticism, but you might also ask, like, "Why now?" There's a few things going on there. One is I think the effective altruism movement overall has reached this place where it's actually beginning to do, like, a lot of really incredible things; there's a lot of, like, funders now kind of excited to find fairly ambitious, scalable projects. And so it seems like, if there's a kind of inflection point, you want to get the criticism out the door and you want to respond to it, like, earlier rather than later, because you wanna set the path in the right direction rather than adjust course, which is more expensive, later on. Will MacAskill made this point a few months ago. You can also point to this dynamic in some other social movements, where the kind of really exciting beliefs have this, like, period of plasticity in the early days, then they kind of ossify, and you end up with this, like, set of beliefs that's kind of, like, trendy, um, or socially rewarded to hold... In some sense, you feel like you need to hold certain beliefs in order to kind of get credit from, you know, certain people, and the cost to, like, publicly questioning some practices or beliefs becomes too high, and that is just, like, a failure mode, and it seems-

    10. DP

      Mm-hmm.

    11. FM

      ... like one of the more salient failure modes for a movement like this. So it just seems really important to, like, be quite, quite proactive about celebrating this dynamic where you notice you're doing something wrong and then you change tack, and then maybe that means shutting something down, right? You say, our project seems really exciting, you get some, like, feedback from the world, the feedback looks more negative than you expected, and so you stop doing the project. And in some important sense, that is, like, a success; like, you did the correct thing.

    12. DP

      (laughs)

    13. FM

      And it's important to celebrate that. So I think these are some of the things that go through my, through my head, just like framing criticism in like this kind of positive way. Yeah, seems pretty important.

    14. DP

      Right, right. I mean, analogously, it's said that, uh, losses are as important as profits in terms of motivating, uh, economic incentives, and it seems to work similarly here.

    15. FM

      Yeah.

    16. DP

      In a Slack we were talking in, you mentioned that, uh, m- maybe one of the reasons it's important now is: if a prize of $20,000 can help somebody help us figure out how to... Or not me, I don't have the money, but like, help SBF figure out how to, uh, how to better allocate, like, $10 million, that- that's a steal. It's really impressive that effective altruism is a movement that is willing to fund criticism of itself. I, I don't know, is there any other example of a movement in history that's been so interested in criticizing itself and becoming anti-fragile in this way?

    17. FM

      I guess one thing I want to say is, like, the proof is in the pudding here. It's one thing to kind of make noises to the effect that you're, like, interested in being criticized, and I'm sure lots of movements make noises like that; another thing is to, like, really follow through on them, and, you know, EA is a fairly young movement, so I guess time will tell whether it really does that well. I'm very hopeful. I also want to say that this, like, particular prize is, like, you know, one kind of part of a much, um, a much bigger thing, hopefully. That's a great question. I actually don't know if I have good answers, but that's not to say that there are none. I'm sure there are. Like, political liberalism as a strand of thought in political philosophy comes to mind as maybe an example. One other random thing I want to point out or mention: you mentioned profits and just, like, doing the maths and what's the, like, EV of, like, investing in just red teaming an idea, like, shooting an idea down. I think thinking about the difference between the for-profit and non-profit space is quite an interesting analogy here. Um, you have this very obvious feedback mechanism in for-profit land, which is: you have an idea, and no matter how excited you are about the idea, you can very quickly learn whether the world is, uh, as excited, which is to say you can just fail. And that's, like, a tight, useful feedback loop to figure out whether what you're doing is, um, worth doing. Those feedback loops don't, by default, exist if you don't expect to get anything back when you're doing these projects. And so that's, like, a reason to want to implement those things, like, artificially. Like, one way you can do this is with, like, charity evaluators, which in some sense impose a kind of market-like mechanism, where, like, now you have an incentive to actually be achieving the thing that you're, like, ostensibly setting out to achieve, because there's this third actor, or party, that's kind of assessing whether you're getting it. But I think that framing, I mean, we could try saying more about it, but that's, like, a really useful framing, I think, to me anyway.

    18. DP

      Mm-hmm. And, uh, yeah, one other reason this seems important to me is: if you have a movement that's only about 10 years old, like this, you know, we have, like, uh, strains of ideas that are thousands of years old that have had significant improvements made to them-

    19. FM

      Mm-hmm.

    20. DP

      ... uh, improvements that were missing before. So just on that alone, it seems to me there's reason to expect some mistakes, either at a sort of theoretical level or in the applications. That- that does seem like, I do have-

    21. FM

      Yeah.

    22. DP

      ... a strong prior that there are such mistakes that could be identified in a reasonable amount of time.

    23. FM

      Yeah. I guess one framing that I like as well is not just thinking about, here's a set of claims we have, we want to, like, figure out what's wrong; but some really good criticism can look like, "Look, you just missed this distinction, which is, like, a really important distinction to make," or, "You missed this, like, addition to the kind of naïve, like, conceptual framework you're using, and it's really important to make that addition." A lot of people are, like, skeptical about progress in kind of non-empirical fields, or, like, philosophy for instance. It's like, "Oh, we've been thinking about these questions for thousands of years but we're still kind of unsure," and I think that misses, like, a really important kind of progress, which is something you might call, like, conceptual engineering or something, which is finding these, like, really useful distinctions and then, like, building structures on top of them. And so it's not like you're making claims which are necessarily true or false, but there are-

    24. DP

      Mm-hmm.

    25. FM

      ... other kinds of useful criticism, which include just, like, making all kinds of models more useful.

  4. 10:41–13:46

    Longtermism

    2. DP

      Speaking of, uh, just m- making progress on questions like these, one thing that's really surprising to me, and maybe this is just like my ignorance of the philosophical history here, it's super surprising to me that for a movement like longtermism, at least in its modern form, it- it took thousands of years of philosophy before somebody had the idea that, "Oh, like, the future could be really big, therefore the future matters a lot." Um, and so maybe you could say like, oh, you know, there's been lots of movements in history that have emphasized... I mean, existential risk maybe wasn't a prominent thing to think about before nuclear weapons, but that- that have emphasized that civilizational collapse is a very prominent factor that, uh, might be very bad for many centuries, so we should try to make sure our society is stable or something. But, uh, do you have some sense of... You have a philosophy background, so do you have some sense, what is the philosophical background here, and to- to the extent that these are relatively new ideas, how- how did it take so long?

    3. FM

      Yeah, that's, like, such a good question, I think. One name that comes to mind straight away is this historian called, uh, Tom Moynihan, who... So he wrote this book about something like the history of how people think, um, about existential risk, and then more recently he's been doing work on the question you asked, which is, like, what took people so long to reach what now seems like a fairly natural thought? I think part of what's going on here, and it's somewhat related to what I mentioned in the last question, is it's really easy to underrate just how much kind of conceptual apparatus we have going on that's, like, a bit like the water we swim in now, and so it's hard to notice. So, one example that comes to mind is thinking about probability as this thing we can talk formally about. This is, like, a shockingly new, uh, thought. Also, the idea that human history might end, and furthermore that that might be within our control, that is, to decide or to prevent that happening (laughs) prematurely. These are all, like, really surprisingly new thoughts, and I think it just, like, requires a lot of imagination and effort to put yourself into the shoes of people living earlier on who just didn't have the kind of... Yeah, like I said, the kind of tools for thinking that make these ideas pop out much more naturally. And of course, as soon as those tools are in place, then the conclusions fall out pretty quickly. But it's not easy, and I appreciate that actually wasn't a very good answer, um, just because it's such a hard question. (laughs)

    4. DP

      (laughs) Um, yeah, so, you know, what's interesting is that more recently, um, maybe I'm unaware of the full context of the argument here, but, uh, I think I've heard Holden Karnofsky write somewhere that he thinks there's more value in thinking about the issues that EA has already identified rather than identifying some sort of unknown risk, like what AI alignment might have been, like, 10, 20 years ago. Given this historical experience, that you can have some very fundamental tools for thinking about the world missing and consequently miss some very important moral implications, does that imply that we should expect something as big or bigger than AI alignment, in terms of the space it occupies in our priorities, coming up? Or just generally new tools for thinking, like, you know, expected value thinking, for example?

  5. 13:46–21:44

    Improving Mental Models

    2. FM

      Yeah. That's a good question. Um, I think one thing I wanna say there is it seems pretty likely that the most important, like, kind of useful concepts for finding important things are also gonna be the lowest-hanging fruit. And I don't know, I think it's, like, very roughly correct that we did in fact, like, over the course of building out kind of conceptual frameworks, pick the, like, the most important ideas first, and now we're kind of, like, refining things and adding maybe somewhat more peripheral things. Um, that's at least... If that, like, trend is roughly gonna hold, then that's a reason for, um, not expecting to find, like, some kind of earth-shattering new concept from left field. Although I think that's, like, a very weak and vague argument, to be honest. Um-

    3. DP

      Um, also, I guess, I guess it depends on what you think your time span is. Like, if your time span is the entire sp- span of time that humans have been thinking about things, then maybe you would think that actually it's kind of strange that it took, like, 3,000 years or before, maybe even longer, uh, I guess it depends on when you define the start point, it took, you know, 3,000 years for people to realize, "Hey, we should think in terms of probabilities and in terms of-"

    4. FM

      (laughs)

    5. DP

      ... "expected impact." So e-

    6. FM

      Yeah.

    7. DP

      ... in that sense, maybe it's, like, I don't know, it took, like, 3,000 years of thinking to get to this very basic, uh, very basic idea. What, what seems to us like a very, uh-

    8. FM

      Yeah. Yeah, yeah, yeah.

    9. DP

      ... important and basic idea.

    10. FM

      I feel like maybe I have, I wanna say, two things. Uh, if you imagined lining up, like, every person who ever lived just, like, in a row and then you kind of, like, walked along that line and saw how much progress people have made across the line, so you're going across people rather than across time, then I think, like, progress in how people think about stuff looks a lot more, like, linear, and in fact started earlier than maybe you might think by just looking at, like, progress over time. And if it was faster early on than... If you're kind of following the very long run trend, then maybe you should expect, like, um, not to find these kind of, again, totally left field ideas soon. But I think a second thing, which is maybe more important, is like, I also buy this idea that in some sense, um, progress about thinking, in thinking about what's, like, most important is really kind of boundless. Like, David Deutsch talks about this kind of thought a lot. When you come up with new ideas, that just generates new problems, new questions, actually some more ideas. Um, that's very well and good. I think there's some sense in which, you know, one priority now could just be framed as giving us time (laughs) to, like, make that progress. And even if you thought that, like, we have this kind of boundless capacity to come up with a bunch of new important ideas, it's pretty obvious that that's, like, a prerequisite. And therefore, in some sense, that's like a robust argument for thinking that, like, um, trying not to kind of throw humanity off course and, um, preventing, mitigating some of these catastrophic risks is always just gonna shake out as, like, a pretty important thing to do. Maybe one of the most important things.

    11. DP

      Yeah. I, I think that's reasonable. Um, but, but then there's a question of, like, even if you think that existential risk is the most important thing, um, to what extent have we discovered all the... A- again, that, like, x-risk, uh, argument. And, by the way, earlier what you said about, uh, you know, trying to extrapolate what we might know from the limits of physical laws, um, if that can kind of constrain what we think might be possible, I think that's an interesting idea. But I wonder, like, partly, one argument is just that we don't even know how to define those physical constraints. Like, before you had a theory of computation, it wouldn't even make sense to say, like, "Oh, this much matter can sustain so much, uh, FLOPS, uh, floating point operations per second." And then second is, like, even if you know that number, it still doesn't tell you, like, what you could do with it. You know what I think is, um, an interesting thing that, uh, Holden Karnofsky talks about: he has this article called This Can't Go On, where he makes the argument that, listen, if you just have compounding economic growth, at some point you'll get to the point where, uh, you know, you'll have many, many, many times Earth's economy per atom in the affectable universe. And so it's hard to see how you could keep having economic growth beyond that point. But that itself seems like, um, I don't know, if that's true, then there has to be, like, a physical law that says the maximum, uh, GDP per atom is this, right?

    12. FM

      Mm-hmm.

    13. DP

      Like, if there's no such constant, then you should be able to surpass it. I guess it still leaves a lot undetermined: even if you could know such a number, you don't know, like, how interesting or what kinds of things could be done at that point.

    14. FM

      Yeah, I guess first one is, you know, even if you think that, like, preventing these kind of very large scale risks that might, like, curtail human potential, even if you think that's just incredibly important, you might miss some of those risks because you're- you're just unable to articulate them or really, like, conceptualize them. I feel like I just want to say at some point, uh, we have a pretty good understanding of kind of roughly what looks- what looks most important. Like, for instance, if you kind of, I don't know, get stranded on a camping trip and you're like, "We need to just survive long enough to-

    15. DP

      (laughs)

    16. FM

      ... to make it out." And it's like, "Okay, what do we look out for? I don't really know what the wildlife is here because I haven't been here before, but probably it's going to look a bit like this. I can at least imagine, you know, the risk of dying of thirst, even though I've never died of thirst before." And then it's like, "What- what if we haven't cons- like, even begun to tho- think of, like, the other..."

    17. DP

      (laughs)

    18. FM

      And it's like, "Yeah, maybe." But it's kind of there's just some, like, you know, table-thumping practical reason for, uh, focusing on the things which are most salient and, like, definitely spending some time thinking about things we haven't thought of yet. But, um, it's not like that list is just, like, completely endless. And there's a kind of, I guess, a reason for that. And then you said the second thing, which I don't actually know if I have, like, a ton of interesting things to say about, although maybe you could try, like, kind of zooming in on what- what you're interested in there.

    19. DP

      Uh, now I come to think of it, I don't think the second thing has, uh, big implications for this argument. And, um, yeah, we have, like, 20 other topics that are just as interesting-

    20. FM

      (laughs)

    21. DP

      ... uh, that we- we- we haven't moved onto. But yeah, but j- j- just as a, um, I don't know, uh, as a closing note, the- the analogy is- is very interesting to me, the camping trip, you're trying to, like, see- do what needs to be done to survive. I don't know. Okay, so to extend an analogy, it might be like, I don't know, somebody like Eliezer discovers, oh, that berry that we were all about to eat because we feel like that's the only way to get sustenance here while we're, you know, uh, just a- almost starving. Um-

    22. FM

      Yep.

    23. DP

      Don't eat that berry because that berry is poisonous. Um, and- and then- then you- maybe somebody could point out, okay, so given the fact that we've discovered one poisonous food in this environment, should we expect there to be other poisonous foods-

    24. FM

      Mm-hmm.

    25. DP

      ... uh, that we don't know about? Uh, but I- I- I don't know. I- I don't know if there's anything more to say on that topic. Uh-

    26. FM

      I mean, one thing, well, like, one, I guess, kind of angle you could put on this is you can ask this question, like, we have precedent for a lot of things. Like, we know now that, uh, detonating nuclear weapons does not ignite the atmosphere, which was a worry that some people had.

    27. DP

      Hmm.

    28. FM

      Um, so we at least have some kind of bounds on how bad certain things can be. And so if you ask this question, like, what is worth worrying about most in terms of what kinds of risks might, um, reach this level of potentially posing an existential risk? Well, it's going to be the kinds of things we haven't done yet, that we haven't, like, got some experience with. And so you can ask this question, like, what things are there in the space of, like, kind of big-seeming but totally novel, precedent-free changes or events? And it actually does seem like you can try generating that list and getting at answers. This is why maybe, or at least one reason why, AI sticks out, because it fulfills these criteria of being potentially pretty big and transformative and also the kind of thing (laughs) we don't have any experience with yet. But again, it's not as if that list is, like, in some sense endless. Like, there are only so many things we can do in the space of, uh, decades, right?

    29. DP

      Okay. Yeah. So moving on to another topic,

  6. 21:44–31:40

    EA & Profit vs Nonprofit Entrepreneurship

    1. DP

      we were talking about, uh, for-profit entrepreneurship as, uh, uh, as a potentially impactful thing you can do.

    2. FM

      Mm-hmm.

    3. DP

      Sorry, is it... May- maybe not in this conversation, but like we- we separately, we had, um-

    4. FM

      We would at one point.

    5. DP

      Yeah, yeah. Um, yeah, so, okay, so to clarify, this is not just for-profit in order to, uh, do earning to give, so you become a billionaire and you give your wealth away. To what extent can you identify opportunities where you can just build a co- profitable company that solves, uh, an important problem area or, you know, makes people's, uh, lives better? Um, o- one example of this is Wave. It's a company, for example, that-

    6. FM

      Mm-hmm.

    7. DP

      ... uh, helps with, uh, you know, transferring money and banking services in Africa. Um, probably has boosted people's, uh, wellbeing in all kinds of different ways. Um, so to what extent can we expect just a f- a bunch of for-profit, uh, opportunities for making people's lives better?

    8. FM

      Yeah, that's a great question. And there is really a sense in which some of the more, like, innovative, big for-profit companies just are, like, doing an incredibly useful thing for the world. They're, like, providing a service that wouldn't otherwise exist, and people are obviously using it, because they are a successful for-profit company. Yeah, so I guess the question is something like, you know, you're stepping back, you're asking, "How can I, like, have a ton of impact with what I do?" The question is, are we, like, underrating just starting a company? So I feel like I want to throw out a bunch of kind of disconnected (laughs) observations, and we'll see if they, like, tie together. There is a reason why you might in general expect a non-profit route to do well. And this is, like, obviously very naïve and simple, but where there is a for-profit opportunity, you should just expect people to kind of take it. Like, this is why we don't see $20 bills lying on the sidewalk. But the natural incentives for, uh, in some sense taking opportunities to, like, help people where there isn't, um, a profit opportunity, they're gonna be weaker. And so if you're thinking about the, like, difference you make, compared to whether you do something or whether you don't do it, in general you might expect that to be bigger where you're doing something non-profit. And in particular, this is where there isn't a market for a good thing. So it might be because the things you're helping, like, aren't humans; it might be because they, like, live in the future, so they can't (laughs), um, pay for something. It could also be because maybe you want to, um, get a really impactful technology off the ground. In those cases, you get a kind of free-rider dynamic, I think, where there's less reason to, like... Where you can't protect the IP and patent something, there's less reason to be the first mover. And so this is, like, maybe it's not for-profit, but starting, or helping kind of get, a technology off the ground which could eventually be a space for a bunch of for-profit companies to make a lot of money, that seems really exciting. Also, creating markets where there aren't markets seems really exciting. So for instance, setting up, like, AMCs, advanced market commitments, um, or prizes, or just... Yeah, creating incentives where there aren't any, so you get the, like, efficiency and competition kind of gains that you get from a for-profit space. That seems great. But that's not really answering your question, 'cause the question is, like, what about actual for-profit companies? (laughs) I don't know what I have to say here, like, in terms of whether they're being underrated. Um, yeah, actually, I'm just curious what you think. (laughs)

    9. DP

      Okay, so I, I, I think I have, like, four different reactions to, um, what you said.

    10. FM

      (laughs)

    11. DP

      I've been remembering the number four, just in case I'm at three and I'm like, "I think I had another thing to say." Okay, so, um, yeah. So I, I, I had a draft of an essay about this that I didn't end up publishing, but w- that led to a lot of interesting discussions between, uh, us. So that, that's why, uh, we might have, uh... I don't know, in case the audience feels like they're interrupting a conversation that was already-

    12. FM

      (laughs)

    13. DP

      ... uh, preceded the beginning here. Uh, so one is that, um, to what extent should we expect this market to be efficient? So, uh, one thing you can think is, listen, the number of potential startup ideas is so vast and the number of great founders is so small that-

    14. FM

      Mm-hmm.

    15. DP

      ... you can have a situation where... Right. That, like, somebody like Elon Musk will come along and, like, pluck up maybe the hundred-billion-dollar ideas. But if you have, like, a company like, uh, Wave, I'm sure they're doing really well, but, uh, you know, it's not obvious how it becomes the next Google or something. And I guess more importantly, if it requires a lot of context; for example, you talked about, like, um, neglected groups. Um, I guess this doesn't solve for animals and, um, future people, but if you have something in global health, where a neglected group is, for example, people living in Africa, right? The people who could be building companies don't necessarily have, uh, experience with the problems that these neglected groups have. So it's possible, maybe even likely, that you could come upon an idea if you were specifically looking at how to help, for example, you know, people suffering from poverty in the poorest parts of the world. You could, like, identify a problem that people who are programmers in Silicon Valley just wouldn't know about. Uh, okay, so a bunch of other ideas regarding, uh, the other things you said. One is, okay, maybe a lot of progress depends on fundamental new technologies, and companies come in at the point where the technology is already available and somebody needs to really implement and put all these ideas together. Yeah, two things on that, and we don't need to go down a rabbit hole on this. One is the argument that actually the, the invention itself, uh, n- not the invention, the innovation itself is a very important aspect, and potentially a bottleneck aspect, of getting an invention off the ground and scaled. Another is, if you can build a $100 billion company or a trillion-dollar company, or maybe not even, just, like, a billion-dollar company, you have the resources to actually invest in R&D. I mean, think of a company like Google, right? Like, how many billions of dollars have they basically poured down the drain on, like, harebrained schemes? Um, uh, you can have, like, reactions to DeepMind with regards to AI alignment, but, I mean, just, like, other kinds of research things they've done seem to be really interesting and, uh, really useful. Um, and, um, yeah, all the other FAANG companies have, like, a program like this, like Microsoft Research, or, I don't know what Amazon's thing is.

    16. FM

      Mm-hmm.

    17. DP

      And then another thing you can point out is with regards to setting up a market that would make other kinds of ideas possible, um, and other kinds of businesses possible, uh, i- i- in some sense, you could maybe make the argument that that's, that maybe some of the biggest companies, that's exactly what they've done, right? If you think of, like, Uber, um, uh, it's not a market for companies. Or maybe, maybe Amazon is a much better example here, where, um, you know, like, y- uh, theoretically, you had an incentive before, like, if a pandemic happens, I'm gonna manufacture a lot of masks, right? But Amazon provides, uh, makes the market so much more liquid so that you can, uh, you can just start manufacturing masks and now immediately put them up on Amazon. So it seems in these ways, actually, maybe starting a company is a really, uh, is an effective way to deal with those kinds of problems.

    18. FM

      Yeah, man, we've gone so async here. I should have just, like, said one thing and then... (laughs)

    19. DP

      (laughs)

    20. FM

      Um, yes, I'm sorry for throwing those things at you. Um, there's a lot there.

    21. DP

      (laughs)

    22. FM

      Those are... As far as I can remember, those are all great points. (laughs)

    23. DP

      (laughs)

    24. FM

      Um, yeah. I think my, like, high-level thought is I'm not sure how much we disagree, but I guess one thing I want to say is, again, thinking about, like, in general, what should you expect the real biggest opportunities for just having a kind of impact to typically be? You know, one thing you might think is, if you can optimize for two things separately, that is, optimize for the first thing and then use that to optimize for the second thing, versus trying to optimize for some, like, combination of the two at the same time, uh, you might expect to do better if you do the first thing. So for instance, you can do a thing which looks a bit like trying to do good in the world and also, like, make a lot of money, um, like social enterprise, and often that goes very well. But you can also do a thing which is: try to make a lot of money and just, you know, make a useful product that is not directly aimed at, you know, improving humanity's (laughs) prospects or anything, but it's just kind of great, and then use the success of that first thing to then just think squarely, like, how do I just do the most good, um, without worrying about whether there's some kind of profit mechanism? I think often that strategy is gonna pan out well. There's a thought about the tails coming apart, if you've heard this thought, that at the extreme of, like, scalability in terms of opportunity to make a lot of profit, and at the extreme of doing, like, a huge amount of good, you might expect there to be, like, not such a strong correlation. Again, one reason in particular that you might think that is because you might think the, like, future really matters, (laughs) like, humanity's future, and, um... sorry to be, like, a stuck record, but there's not really, like, a natural market there, because these people haven't been born yet. That is, like, a rambly way of saying that, okay, that's not always gonna be true, but I basically just agree. Yeah, I would wanna resist the framing of doing good which just leaves out, like, starting some successful for-profit company. Like, there are just a ton of really excellent examples of where that's just been a huge success and, yeah, should be celebrated. Um, so yeah, I don't think I disagree with the spirit. Um, maybe we disagree somewhat on, like, how much we should relatively emphasize these different things, but, um, it doesn't seem like a kind of very deep disagreement.

    25. DP

      Yeah, yeah. Uh, may- maybe I've been spending too much time with Bryan Caplan or something. (laughs)

    26. FM

      (laughs)

    27. DP

      Um, and, uh, also, by the way, the tails coming apart, I think, is a very interesting way to think about this. Uh, Scott Alexander has a good article on this, and, like, one thing he points out is, like, yeah, generally you expect different types of strength to correlate, but the guy who has the strongest grip strength in the world is probably not the guy who has the biggest squat in the world, right? Yeah. Okay. So that, I think that's an interesting place to

  7. 31:40–36:48

    Backtesting EA

    1. DP

      leave that idea. Okay. Yeah. Another thing I wanted to, uh, talk to you about was backtesting EA. So if you have these basic ideas of, we wanna look at problems that are important, neglected, and tractable, and apply them throughout history, so, like, a thousand years back, uh, 2,000 years back, 100 years back, is there a context in which applying these ideas, um, would maybe lead to a perverse outcome, an unexpected outcome? Um, and are there examples in history where you could have, like, easily made things much better, maybe even much better than conventional morality or, like, present-day, uh, ideas would have made them?

    2. FM

      So we'll react to the first part of the question which-

    3. DP

      Yeah, let's do that. Yeah.

    4. FM

      ... as I understand it, is something like: if some kind of effective altruism-like movement had existed, or these ideas were in the water, like, significantly earlier, might they have misfired sometimes, or might they have succeeded? And in fact, how do we think about that at all? I guess one thing I want to say is that very often the correct decision ex ante is a decision which might do really well in, like, some possible outcomes, but which you might still expect to fail, right? The kind of mainline outcome is that it doesn't really pan out, but it's a moon shot, and if it goes well, it goes really well. This is, I guess, similar to certain kinds of investing, where if that's the case, then even if you follow the exact, like, correct strategy, you should expect to look back on the decisions you made and, uh, see a bunch of failures.

    5. DP

      Sure.

    6. FM

      Um, where failure is, you know, you just have very little impact. And I think it's important to kind of resist the temptation to, like, really negatively update on whether that was the correct strategy just because it didn't pan out. And so, I don't know, if something like EA-type thinking was in the water and was, like, thought through very well, yep, I think it would go wrong a bunch of times, and that shouldn't be kind of terrible news. When I say go wrong, I mean, like, not pan out, rather than do harm. If it did harm, (laughs) okay, that's, like, a different thing. Um, I think one thing this points to, by the way, is, like, you could choose to take a strategy which looks something like minimax regret. Right? So you have a bunch of options, and you ask about the kind of roughly worst-case outcome, um, or just the kind of, you know, default, eh, outcome on each option. And one strategy is to just, like, choose the option with the least bad kind of meh case, and if you take that strategy, you should expect to look back on the decisions you made and, like, not see as many failures. So that's one point in favor of it. Another strategy is just, like, do the best thing in expectation. (laughs) Like, if I made these decisions constantly, what in the long run just ends up, like, making the world best? And this looks a lot like just taking the highest-EV option. Maybe you don't want it to, like, uh, run the risk of causing harm, so, you know, that's okay to include. And, you know, I happen to think that that kind of second strategy is very often gonna be a lot better, and it's really important not to be misguided by this feature of the minimax regret strategy where you look back and kind of feel a bit better about yourself in many cases. If that makes sense.
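
      A minimal sketch, with hypothetical numbers, of the two decision rules contrasted here: picking the option whose "meh case" is least bad (a maximin-flavored version of the minimax-regret idea as described above) versus picking the highest expected-value option. The option names and figures are made up for illustration.

      ```python
      # Hypothetical options: each maps to a list of (probability, impact) outcomes.
      options = {
          "modest_project": [(0.9, 1.0), (0.1, 0.5)],   # reliably does a little good
          "moonshot":       [(0.9, 0.0), (0.1, 100.0)], # usually fails, rarely huge
      }

      def expected_value(outcomes):
          return sum(p * impact for p, impact in outcomes)

      def worst_case(outcomes):
          return min(impact for _, impact in outcomes)

      # The worst-case rule prefers the modest project (0.5 vs 0.0), so looking back
      # you see fewer "failures"; the EV rule prefers the moonshot (EV 10.0 vs 0.95),
      # even though the mainline outcome is that it doesn't pan out.
      print(max(options, key=lambda o: worst_case(options[o])))      # modest_project
      print(max(options, key=lambda o: expected_value(options[o])))  # moonshot
      ```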

    7. DP

      Yeah. That, that's super interesting. I mean, to analogize this to backtesting models for the stock market, one thing that tends to happen is that a strategy of just, like, trying to maximize returns on each given trade results very quickly in you going bankrupt, because, like, sooner or later there will be a trade where you lose all your money. And so then there's something called the Kelly criterion, where you reserve a big portion of your money and you only bet with a certain part of it, which sounds more similar to the minimax regret thing here. Um, unless your expected value includes the possibility that... I mean, in this context, like, losing all your money is, like, an existential risk, right? So, uh, maybe you, like, bake into the cake, in the definition of expected value, the odds of, like, losing all your money? Um, I don't know.

    8. FM

      Yeah, yeah, yeah, yeah. That's a great, that's a really great point. Like, I guess in some cases you wanna take something which looks a bit more like the Kelly bet, um, but if you act at the margin, like, relatively small margins compared to the kind of pot of resources you have, then I think it often makes sense to just take the do-the-best-thing bet and not worry too much about the size of the Kelly bet. But, um, yeah. That's a... it's a great point, and, like, I guess a naïve version of doing this is just kind of losing your bankroll very quickly because you've, like, taken two (laughs) enormous bets and forgotten that, um, they might not pan out. Yeah. So I, I appreciate that.

    9. DP

      Oh, oh. What, what did you mean by acting at the margin?

    10. FM

      So if you think that, that there's a kind of, a pool of resources from which you're drawing-

    11. DP

      Mm-hmm.

    12. FM

      ... which is something like maybe philanthropic funding for the kind of work that you're interested in doing-

    13. DP

      Mm-hmm.

    14. FM

      ... and you're only a relatively marginal actor, um, then that's unlike being, like, an individual investor, where you're more sensitive to the risk of just running out of money. And, um, when you're more like an individual investor, then you want to, like, pay attention to what the size of the Kelly bet is. If you're acting at a margin, then maybe that is less of, like, a big consideration. Although it is obviously still a, you know, very important point. (laughs)
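
      For reference, a minimal sketch of the Kelly criterion mentioned in this exchange, for a simple binary bet; the probabilities and payouts are hypothetical.

      ```python
      def kelly_fraction(p: float, b: float) -> float:
          """Kelly-optimal fraction of bankroll to stake on a bet that wins with
          probability p and pays b-to-1 on a win: f* = p - (1 - p) / b."""
          return p - (1 - p) / b

      # E.g. a 60% chance of winning an even-money (1-to-1) bet:
      print(kelly_fraction(0.6, 1.0))  # 0.2 -> stake 20% of the bankroll

      # Staking everything each round maximizes single-round expected value but
      # makes eventual ruin almost certain; Kelly caps each stake to maximize
      # long-run growth, which is the "reserve a big portion of your money" idea.
      ```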

  8. 36:48–39:26

    EA Billionaires


    2. DP

      Well, I- and then, um, I, I... By the way, I don't know if you saw my recent blog post about why I think there will be more EA billionaires.

    3. FM

      Mm-hmm. Yes, I did.

    4. DP

      Okay. Yeah, yeah.

    5. FM

      Yeah.

    6. DP

      Uh, I don't know what your reaction to any of the ideas there is, but, like, the claim I make is that we should expect the total funds, uh, dedicated to EA to grow quite, quite a lot.

    7. FM

      Um, yeah. I think... I really liked it, by the way. (laughs) I thought it was great. One thing it made me think of is that there's quite an important difference between trying to maximize returns for yourself and trying to get the most returns, just, like, for the world, which is to say, just doing the most good. One consideration, which we've just talked about, is the risk of just, like, losing your bankroll, which is where, like, Kelly betting becomes relevant. Um, another consideration is that, as an individual, just, like, trying to do the best for yourself, you have, like, pretty steeply diminishing returns from money, or just, like, how well your life goes with that extra money. Right? So like, if you have, like, 10 million in the bank and you make another 10 million, does your life get twice as good? (laughs) Obviously not, right? And as such, you should be kind of risk-averse when you're thinking about the possibility of, like, making a load of money. If, on the other hand, you just care about, like, making the world go well, (laughs) um, then the world's an extremely big place, and so you basically don't run into these diminishing returns, like, at all.

    8. DP

      (laughs)

    9. FM

      And for that reason, like, if you're making money, at least in part, to, in some sense, give it away or otherwise just, like, have a positive effect in some impartial sense, then you're gonna be less risk-averse, which means maybe, um, you fail more often, but it also means that people who succeed, like, succeed really hard. Um...
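
      A toy illustration of this asymmetry, assuming logarithmic utility of personal wealth; log utility is a standard textbook model of risk aversion, not something claimed in the conversation.

      ```python
      import math

      # Personal welfare with diminishing returns: assume log utility of wealth.
      def personal_utility(wealth: float) -> float:
          return math.log(wealth)

      # Going from $10M to $20M adds far less personal utility than $1M to $10M:
      print(personal_utility(20e6) - personal_utility(10e6))  # ~0.69
      print(personal_utility(10e6) - personal_utility(1e6))   # ~2.30

      # An impartial giver faces a world that can absorb vastly more money than
      # any one life can, so impact stays roughly linear in money at this scale:
      # less risk aversion, more failures, and occasionally enormous successes.
      ```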

    10. DP

      Yeah.

    11. FM

      So I don't know if that's... In some sense, I'm just recycling (laughs) what you said, but, um, I think it's like a, yeah, a really kind of neat observation.

    12. DP

      Well, and then another interesting thing is that, uh, not only is that true, but then you're also, you're also in a movement where everybody else is... has a similar idea. And not only is that true, but also the movement is full of people who are young, techy, smart-

    13. FM

      Mm-hmm.

    14. DP

      ... and as you said, risk-neutral. So b- b- basically, people who are going to be way overrepresented (laughs) in the, in the ranks of future billionaires. And they're all hanging out, and they have this idea that, you know, we can become rich together and then make the world better by doing so. Uh, you would expect that this would be the- exactly the kind of situation that would lead to people teaming up and starting billion-dollar companies. All right. Uh, yeah. So a bunch of other topics in effective altruism that I wanted to ask you

  9. 39:26–51:40

    EA Decisions & Many Worlds Interpretation

    1. DP

      about. So one is, should it impact our decisions in any way if the many-worlds interpretation of quantum mechanics is true? I know the argument that, oh, you can just translate amplitudes to probabilities, and if it's just probabilities, then decision theory doesn't change. My problem with this is, I, I've gotten, like, very lucky in the last few months.

    2. FM

      (laughs)

    3. DP

      Now, I, I think it, like, changes my perception of that if I realize actually most mes... And okay, may- may... I know there's, like, problems with saying mes, uh, to what extent they're fungible.

    4. FM

      Mm-hmm.

    5. DP

      In most branches of the, of the multiverse, uh, like, I, I'm, like, significantly worse off. That, that makes it worse than, "Oh, I just got lucky, um, but, like, now I'm here."

    6. FM

      (laughs)

    7. DP

      And another thing is, if you think of existential risk, um, and you think that e- even if, like, existential risk is very likely, in some branch of the multiverse, humanity survives, I don't know, that- that seems better in the end than, "Oh, the probability was really low, but, like, it just resolved to we didn't survive."

    8. FM

      Mm-hmm.

    9. DP

      D- does that make sense?

    10. FM

      Okay. All right. There's, there's a lot there. I guess rather than doing a terrible job at trying to explain what this many-worlds thing is about, maybe it's worth just kind of pointing people towards, you know, just googling it. I, I should also add this enormous caveat that I don't really know what I'm talking about (laughs) . This is just kind of an outsider who's taken this kind of... I don't know. This just, this stuff seems interesting. Yeah. Okay. So there's this, there's this question of, like, what... If the many-worlds view is true, what, if anything, could that mean, uh, with respect to questions about, like, what should we do or what's important? And one thing I wanna say is, just, like, without zooming into anything, it just seems like a huge deal.

    11. DP

      (laughs)

    12. FM

      Like, every second of every day, I'm, in some sense, like, just kind of dissolving into this, like, cloud of mes, like, just a kind of unimaginably large number of, of mes. And each of those mes is kind of, in some sense, dissolving into more cloud. (laughs) Um, this is just, like, wild. Also seems somewhat likely, um, to be true, as, like, far as I can tell. Okay. So, like, what does this mean? Yeah. You point out that you can talk about having a measure over worlds, and... There's actually a problem of how you get, like, probabilities, or how you make sense of probabilities, on the many-worlds view. And there's a kind of neat way of doing that which, like, makes use of questions about how you should make decisions. That is, you should just kind of weigh future yous according to, in some sense, how likely they are. But it's really the reverse, so, like, explaining what it means for them to be more likely in terms of how it's rational to weigh them. (laughs) Um, and then I think there's, like, a ton of very vague things I can try saying, (laughs) so maybe I'll just try doing, like, a brain dump of things. You might think that, like, many-worlds being true could push you towards being more risk-neutral in certain cases, if you weren't before. Um, because in certain cases, you're, like, translating from some chance of this thing happening or not into some fraction of, you know, worlds where this thing does happen and another fraction where it doesn't. That said, I, I don't think it's worth reading too much into that, because I think a lot of the, like, important uncertainties about the worlds are still, like, subjective uncertainties about how most worlds will, in fact, turn out. But it's kind of interesting and notable that you can, like, convert between overall uncertainty about how things turn out to, like, more certainty about the fraction of ways things turn out. (laughs) I think another, like, interesting feature of this is that the question of, like, how you should act is no longer a question of, like, how should you benefit this person who is you in the future, who's one person? It's more like, how do you benefit this, like, cloud of people who are all successive yous, that's just kind of, like, diffusing into the future? And I think you point out that you could just, like, basically salvage basically all decision-making, even if that's true. But the, like, picture of what's going on changes, and in particular, I think, just intuitively, it feels to me like the gap between acting in a self-interested way and, like, acting in an impartial way, where you're, like, helping other people, kind of closes a little in a, in a way. Like, you're, you're already benefiting many people by doing the thing that's kind of rational to benefit you, um, which isn't so far from benefiting people who aren't, like, continuous with you in the special way. So I kind of like that as a, as a thing.

    13. DP

      Huh.

    14. FM

      Um...

    15. DP

      That's interesting.

    16. FM

      Yeah, and then... D- okay, there is also this, like, slightly more out there thought, which is... Here's a thing you could say. If many worlds is true, then there is at least a sense in which there are very, very many more people in the future, uh, compared to the past, uh, like, just unimaginably many more. And even, like, the next second from now, there are many more people. So you might think that should, like, make us have a really steep negative discount rate on the future, which is to say we should, like, value future times much more than present times. And, like, in a way which would just kind of... it wouldn't, like, modify how we should act. It just, like, explodes how we should think about this. This definitely doesn't seem right. (laughs) Maybe one way to think about this is that if this thought was true or, like, was kind of directionally true, then that might also be a reason for being extremely surprised that we're both speaking at, like, an earlier time rather than a later time. Because if you think of it as, like-

    17. DP

      Mm-hmm.

    18. FM

      ... randomly drawn from all the people who ever lived, it's like absolutely mind-blowing that we get drawn from, like, today rather than tomorrow.

    19. DP

      Yeah, yeah, yeah.

    20. FM

      Even though there are, like, 10 to the something times more people tomorrow than today. Um, so it, it's probably wrong, and wrong for-

    21. DP

      Hmm.

    22. FM

      ... reasons I don't have a very good handle on, because (laughs) I just, like, don't know what I'm talking about. I mean, I can kind of try parroting the reasons, but, like, it's something I'm, you know, I'm interested in trying to really crack those reasons a bit more.
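
      One way to make the branch-counting worry above concrete, as rough arithmetic; the per-second branching factor k and horizon T are made-up symbols for illustration:

      ```latex
      % Branch counting versus measure (illustrative arithmetic).
      % Suppose each second multiplies the number of branches, and hence
      % observer-moments, by some k > 1, so observer-moments at second s
      % number proportional to k^s. Self-sampling over raw counts puts the
      % chance of finding yourself at or before second t, out of a horizon
      % of T seconds, at roughly
      P(\text{at time} \le t) \;\approx\; \frac{\sum_{s \le t} k^{s}}{\sum_{s \le T} k^{s}} \;\approx\; k^{\,t - T},
      % which is astronomically small for any early t: finding ourselves
      % "now" would be a miracle. Weighting branches by Born measure rather
      % than by count dissolves the worry, since total measure is conserved
      % (it always sums to 1), so later times carry no extra weight.
      ```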

    23. DP

      Th- th- th- that's really interesting. Uh, I, I, I, I, I didn't think about that argument, uh, for, uh, this election argument. I think one resolution I've heard about this is that you can think of the proportion of, you know, Hilbert space or, like, the proportion of all the, the, the, the, like, the universe's wave function, that could be the pro- uh, like, the probability rather than each different branch. Uh, you know what I just realized? This, this selection argument you can m- you made, maybe that's an argument against Bostrom's, um, Bostrom's idea of we're living in a simulation. Because basically his argument is that there would be many more simulations than there are real copies of you.

    24. FM

      Mm-hmm.

    25. DP

      Therefore, you're probably in a simulation. The, the, like, the thing about saying that, across all the simulations plus you, your prior should be equally distributed among them, seems similar to saying your prior should be distributed, like, equally along each possible, like, branch of, uh, the wave function-

    26. FM

      Mm-hmm.

    27. DP

      ... should be, your prior across them should be the same.

    28. FM

      Mm-hmm.

    29. DP

      Whereas, I think, um, in the context of the w- wave function, you were arguing that maybe you shouldn't think about it that way. You should think about, like, maybe a proportion of the total, uh, wave function, the total Hilbert space. Um-

    30. FM

      Yeah, yeah, yeah.
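
      A toy formalization of the parallel being drawn here, with made-up weights; this is a sketch of the two priors, not anything argued in the episode:

      ```latex
      % Count-based versus measure-based priors, in both arguments.
      % Simulation argument with a uniform prior over N simulated copies
      % plus one real copy:
      P(\text{real}) \;=\; \frac{1}{N + 1} \;\to\; 0 \quad \text{as } N \to \infty.
      % Measure-based alternative: give each hypothesis a weight m that need
      % not scale with the number of copies,
      P(\text{real}) \;=\; \frac{m_{\text{real}}}{m_{\text{real}} + m_{\text{sim}}},
      % which can stay bounded away from 0 however many simulations run --
      % just as weighting branches by their share |\alpha|^2 of the wave
      % function, rather than counting them, keeps many-worlds probabilities
      % well-behaved.
      ```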

  10. 51:40–53:32

    EA Talent Search

    1. DP

      about. Uh, talent search. What is the, what is EA doing about identifying, let's say, more people like you basically, right? But maybe even, like, people like you who are not in, like, places-

    2. FM

      Yeah.

    3. DP

      ... where... Th- they're not next to Oxford, for ex- I, I don't know where you actually are from originally, but, um, like, uh, if, if they're from, like, some d- uh, like, I don't know, like, China or India or something.

    4. FM

      Yeah, yeah, yeah, yeah.

    5. DP

      Um, what, what, what is EA doing to recruit more Finns from, uh, from places where they might not otherwise work on EA?

    6. FM

      Yeah, it's a great question. And yeah, to be clear, I just won the lottery (laughs) on things going right to kind of, um, be lucky enough to do what I'm doing now. So, yeah, in some sense, the question is how do you, like, print more winning lottery tickets (laughs) and indeed, like, find those people who really deserve them, but who are currently not being identified? A lot of this comes from... I just... I read that book Talent, um, by Tyler Cowen and Daniel Gross recently.

    7. DP

      Mm-hmm.

    8. FM

      And yeah, there's something really powerful about this fact that this, like, business of, you know, finding really, like, smart, driven people and connecting them with opportunities to, like, do the things they really want to do, this is, like, really kind of still inefficient (laughs) and there are just still, like, so many people out there who, like, aren't kind of getting those opportunities. I actually don't know if I have much more, like, kind of insight to add there (laughs) other than this is just a big deal. And there's a sense in which it is an important consideration for this, like, project of trying to do the most good. Like, you really want to find people who can, like, put these ideas into practice. And I think there's a special premium on that kind of person now, given that there's, like, a lot of philanthropic kind of funding ready to, like, be deployed. There's also a sense in which this is just, like, a cause in its own right. It's kind of analogous to open borders in that sense, at least in my mind. Hadn't really, like, appreciated it on some kind of visceral level before I read that book.

    9. DP

      And then another thing he talks

  11. 53:32–1:00:11

    EA & Encouraging Youth

    1. DP

      about in the book is you want to get them when they're young. You can really shape somebody's, um, ideas about what's wo- worth doing if you, uh... A- a- and then also their ambition about h- what they can do if you catch them early. Um, and, you know, uh, Tyler Cowen also had an interesting blog post a while back where he pointed out that a lot of people applying to his Emergent Ventures, uh, program, a lot of young people applying, um, are heavily influenced by effective altruism. Which seems very i- Like, it's gonna be a very important factor in, uh, in the, uh, in the, in the long term. I mean, eventually these people will be in positions of power. Yeah, so, uh, and maybe effective altruism is already succeeding to the extent that, uh, a lot of the most, uh, ambitious people in the world are, uh, identified that way. At least, I mean, g- given the selection effect that Tyler Cowen's program has. But yeah, so, um, what- what is it that can be done to get people when they're young?

    2. FM

      (laughs) Uh, great. Um, yeah. I mean, it's a very good question. And I think, like, what you point out there is, is right. There's some... Nick Whitaker has this blog post. So something like the... It's called The Lamplight Model of Talent Curation.

    3. DP

      Mm-hmm.

    4. FM

      Um, and he, like, draws this distinction between casting, um, like, a very wide net that's just kind of very legibly prestigious and then, you know, filtering through thousands of, of applications. Or, in some sense, like, putting out the bat signal that in the first instance just, like, attracts the, like, really promising people and maybe actually drives away people who would be a better fit for something else. Um, so, like, an example is if you were to hypothetically (laughs) write quite a wonky economics blog, like, every day for however many years (laughs) and then run some fellowship program, you're just, like, automatically selecting for people who read that blog, and that's, like, a pretty good kind of starting population to begin with. So I really like that, that kind of thought of just, like, not needing to be, like, incredibly loud and, like, prestigious-sounding, but rather just, like, being quite honest about what this thing is about, so you just attract the people who, who, like, really sought it out, because that's just a good filter. I think another thing that... Again, this is, like, not a very interesting point to make (laughs) , but something I've really realized the value of is, like, having physical, um, hubs. And so there's this model of, you know, running, like, fellowships, for instance, where you just, like, find really promising people, and then... There's just so much to be said for, like, putting those people in the same place and, you know, surrounding them with maybe people who are a bit more, like, senior, and just kind of, like, letting this natural process happen where people just get really excited, uh, that there is this, like, community of people working on stuff that previously you've just been kind of reading about in your bedroom on, like, some blogs. That, like, as a source of motivation, I know, is, like, less tangible than other things, but yeah, just, like, so, so powerful, and, like, probably the... I don't know. One of the reasons I'm, like, working here, maybe.

    5. DP

      Yeah. Um, it, it, it is one aspect of, uh, (laughs) working from home-

    6. FM

      (laughs)

    7. DP

      ... that you don't get that. Um, um, re- regarding the first point, so I think, uh, um, maybe that should update in favor of not doing community outreach and community building. Like, maybe that's negative marginal utility. Because, like, if I think about, for example, um, my local... Well, so there was an effective altruism group at my college that I didn't attend, um, and there's also, like, an effective altruism group for the city as a whole, um, in Austin, that I don't attend. Um, and the reason is just because, I don't know, there's some sort of, um, adverse selection here where the people who are leading organizations like this aren't directly doing the things that effective altruism says they might consider doing, um, and are more interested in the social aspects of altruism. Um, so I, I don't know. I'd be much less impressed with a movement if my first introduction to it was these specific groups that I've personally interacted with, rather than, I don't know, just, like, hearing Will MacAskill on a podcast. Um, uh, by the way, the latter being my first introduction to effective altruism.

    8. FM

      Yeah. Interesting. Um, I feel like I really don't want to, like, underrate the job that community builders are doing. I think, in fact, it's turned out to have been, like, and still is, like, incredibly valuable, especially just looking at the numbers of, like, what you can achieve as, like, a group organizer at your u- university. Like, maybe you could just change the course of, like, more than one person's career over the course of, like, a year of your time. That's, like, pretty incredible. But yeah, I guess part of what's going on is that the difference between, like, going to your, like, local group or, like, engaging with stuff online is that you get to kind of choose the stuff you engage with. And, like, maybe one upshot here is that the, like, s- kind of set of ideas that might get associated with, um, EA is, like, very big and you don't need to buy into all of it or just, like, be passionate about all of it. Like, if this kind of AI stuff just, like, really seems interesting, but maybe other stuff is just, like, more peripheral, then, you know, one... Yeah. Like, this could push towards wanting to have, like, a, just a specific group for people who are just like, "You know, this AI stuff seems cool. Other stuff, not my, like, cup of tea." Um, so yeah. I mean, in the future as, like, things get scaled up, as well as kind of scaling out, I think also maybe having this, like, differentiation and kind of diversification of, like, different groups, I mean, seems pretty good. But just, like, more of everything also seems good. (laughs)

    9. DP

      Yeah, yeah. Uh, I, I'm, I'm probably overfitting on my own experience. And given the fact that I didn't, uh, didn't actively interact with any of those communities, I'm probably not even informed on those experiences. Um, but there was an interesting post on the effective altruism forum that somebody sent me where they were making the case that, um, at their college as well, they got the sense that, uh, the EA community building stuff had a negative impact, because people were kind of turned off by their peers. And also, uh, there's a difference between, like, I don't know, Will MacAskill advising you, obviously virtually, um, to, um, do these kinds of things versus, like, I don't know, some, uh, some sophomore at your university studying philosophy, right?

    10. FM

      (laughs)

    11. DP

      Uh, (laughs) , no offense. (laughs) Um... (laughs)

    12. FM

      Yeah.

    13. DP

      Um-

    14. FM

      Yeah, I do. I do.

    15. DP

      ... you see what I mean?

    16. FM

      Um, I think my guess is that, like, on net, these, these efforts are still just, like, overwhelmingly positive, but, um, yeah. I think it's, like, pretty interesting that people have the experience you describe as well. Yeah. I mean, interesting to think about ways to kind of, like, get around that.

  12. 1:00:11–1:04:50

    Long Reflection

    1. DP

      So long reflection is a... It seems like a bad idea, no?

    3. FM

      I'm so glad you asked. (laughs) Um, yeah. I wanna say, I wanna say no. I think in some sense I've, like, come around to it as an idea. But yeah. Okay, maybe it's worth, like-

    4. DP

      Oh, really? Interesting.

    5. FM

      Maybe it's worth, I guess, like, trying to explain what's going on (laughs) with this-

    6. DP

      Sure, yes.

    7. FM

      ... this idea. Um-

    8. DP

      (laughs)

    9. FM

      So if you, like, were to zoom out, like, really far over time and consider our place now, like, in history, you could, like, ask this question: suppose, in some sense, humanity just became, like, perfectly coordinated, what's the plan? Like, what kind of in general should we be prioritizing, and, like, in what stages? And, um, you might say something like this. It looks like this moment in history, which is to say maybe this century or so, just looks kind of wildly and, like, unsustainably dangerous. Like, what kind of... So many things (laughs) are happening at once, it's really hard to know how things are gonna pan out. But it's, like, possible to imagine things panning out really badly, and badly enough to just, like, more or less end history. Okay, so before we can, like, worry about some kind of longer term considerations, let's just get our act together and make sure we don't mess (laughs) things up. So okay, like, that seems like a pretty good first priority. But then, okay, suppose that you succeed in that and, like, we're in a significantly safer kind of time. Uh, what then? You might notice that the scope for, like, what we could achieve is, like, really extraordinarily large (laughs) . Like, maybe kind of larger than most people kind of, like, typically entertain. Like, we could just do a ton of really exceptional things. But also, there's this kind of feature that maybe in the future, not, uh, not the especially long term future, we might more or less for the first time be able to embark on these, like, really kind of ambitious projects that are, in some important sense, uh, like, really hard to reverse. And that might make you think, okay, at some point it'd be great to, like, you know, achieve that potential that we have. Like, for instance, a kind of lower bound on this is lifting everyone who remains in poverty out of poverty, and then, like, going even further, just making everyone even wealthier, able to do more things that they want to do, making more scientific discoveries, whatever. So we want to do that. But maybe something should come in between these two things, which is, like, figuring out what is actually good. Um, and, okay, why should we think this? I think one thought here is, it's very plausible, and I guess this kind of links to what we were talking about earlier, that the way we think about, you know, like, really positive futures, like, what are the best futures, is just, like, really kind of incomplete. Almost certainly we're just getting a bunch of things wrong, by this kind of pessimistic induction on the past. Like, a bunch of smart people thought really reprehensible things (laughs) , like, a hundred years ago, so we're getting things wrong too. And then the second thought is, I don't know, it seems possible to actually make progress here, in thinking about what's good. There's this kind of interesting point that most, like, work in, I guess you might call it, like, moral philosophy has focused on the negatives. So, you know, avoiding doing things wrong, fixing harms, avoiding bad outcomes. But this idea of, like, studying the positive, or studying, like, what we should do if we can kind of do, like, many different things, this is just, like, super, super early, and so we should expect to be able to make a ton of progress.
And so, okay, again, imagining that the world is, like, perfectly coordinated, would it be a good idea to, like, spend some time, maybe a long period of time, kind of deliberately holding back from embarking on these, like, huge irreversible projects, which maybe involve, like, leaving Earth in certain, you know, scenarios, or otherwise just, like, doing things which are hard to undo? Should we spend some time thinking before then? Yeah, (laughs) sounds good. And then I guess the very obvious response is, okay, that's a pretty huge assumption (laughs) , um, that we can just, like, coordinate around that. And I think the answer is, yep, it is, but think of it as a kind of directional ideal. Should we push towards or away from the idea of, like, taking our time, holding our horses, kind of getting people together who haven't really, like, been part of this, like, conversation and, like, hearing them? Yeah, (laughs) definitely seems worthwhile.

    10. DP

      All right. So, uh, I have another good abstract idea that I want you to entertain. So, uh,

  13. 1:04:50–1:22:00

    Long Term Coordination

    1. DP

      you know, it, it seems, like, kind of wasteful that we have these different companies that are building the same exact product, uh, but, you know, because they're building the same exact product they don't have economies of scale and they don't have coordination. There's just a whole bunch of, uh, loss that comes from that, right? Wouldn't it be better if we could just coordinate and just, like, figure out the best person to produce something together, and then just have them produce it? And then we could also coordinate to figure out, like, what is the right quantity and quality for them to produce. I'm not trying to say this is, like, communism or something. I'm just saying it's ignoring what would be required. Like, in the communism example, you're ignoring, like, what kinds of information get lost, um, and what it would require to do that so-called coordination. Um, in this example, it seems like you're ignoring whatever would be required to prevent somebody from realizing some vision. Like, let's say somebody has a vision for, like, uh, we want to colonize a star system, we want to, like, I don't know, make some new technology, right? That's part of something that the long reflection would curtail. Maybe I'm getting this wrong, but it seems like it would require almost a global panopticon, uh, totalitarian, uh, state to be able to, like, prevent people-

    2. FM

      Yeah.

    3. DP

      ... from escaping the reflection.

    4. FM

      Um, okay. So there's a continuum here, and I basically agree that some kind of panopticon-like thing not only is impossible but actually sounds pretty bad (laughs) . But something where you're just, like, pushing in the direction of being more coordinated on the international level about things that matter seems, like, desirable and possible. Um, and in particular, like, preventing really bad things rather than, like, trying to get people to, like, all do the same thing. Um, so the Biological Weapons Convention just strikes me as an example which is, like, imperfect and underfunded, but, you know, nonetheless kind of (laughs) directionally good. And then maybe an extra point here is that there's, like, a sense in which the long reflection option, or I guess the better framing is, like, aiming for a bit more reflection rather than less, that's, like, the conservative option. That's, like, doing what we've already been doing, um, just a bit longer, rather than some, like, radical option. So I, again, I agree. It's, like, pretty hard to imagine, like, you know, some kind of super long period where everyone's, like, perfectly agreed on, on doing this (laughs) . Um, but yeah, I think framing it as, like, a directional ideal... seems pretty worthwhile. And I guess, I don't know, maybe I'm kind of naively hopeful about the possibility of coordinating better around things like that.

    5. DP

      Uh, there, there's two reasons why this seems like a bad idea to me. One is... Okay, first of all, who is going to be deciding when we've come to a good consensus about... okay, so we've decided, like, this is the way things should go, um, and now we're, like, ready to escape the long reflection and realize our vision for the rest of the lifespan of the universe? Who is going to be doing that? It's the people who are, uh, presumably in charge of the long reflection. Almost by definition, it'll be the people who have an incentive in, uh, preserving whatever power, uh... well, whatever power balances exist at the end of the long reflection. And then the second thing is, like, um, there's a difference between, I think, having a consensus on not using biological weapons or something like that, where you're eliminating a negative, versus... it seems like when we've required society-wide consensus on what we should aim towards achieving, um, the outcome has not been good in history. It seems better, on the positive end, to just leave it open-ended, and then maybe, um, when necessary, say that, like, the, the very bad things, uh, we might wanna restrict together.

    6. FM

      Yeah, yeah, yeah. Okay. I think I kind of just agree with a lot of what you said, so I think the best, like, framing of this is the version when you're preventing something which most people can agree is negative, which is to say some actor unilaterally deciding to, like, do this huge irreversible... or set out on this huge irreversible project. Like, something you said was that the outcome is going to reflect the, like, values of whoever is, like, in charge. Um-

    7. DP

      And then not, not just the values. I mean, it also... Just, like, think about how guilds work, right? It's like, um, whenever we've let decisions about how an industry should progress be made collectively by the people who are currently dominant in that industry, um, uh, you know, guilds or something like that, um, or, like, uh, industrial conspiracies, uh, as well, it seems like the outcome is just, uh, bad. And so, like, my prior would be that at the end of such a situation, our ideas about what we should do would actually be worse than going into the long reflection. Uh, I mean, obviously, uh, it really depends on how it's implemented, right? So I'm not saying that... But, uh, just, like, broadly, given all possible implementations, and maybe the most likely implementation given how governments run now.

    8. FM

      Yeah. Yeah, yeah, yeah. I should say that, like, I am in fact, like, pretty... I just kind of... I don't know. It's more, more enjoyable to, like, give this thing its hearing.

    9. DP

      No, no. I, I enjoy the, uh, the-

    10. FM

      (laughs) Yeah, yeah.

    11. DP

      ... parts where we have disagreements. Yeah.

    12. FM

      So one thought here is if you're worried about the future, like the course of the future being determined by some single actor, I mean, that, that worry is just symmetrical with the worry of letting whoever wins some race first go and do, you know, go and do the thing, the like project where they more or less kind of determine what happens to the rest of humanity. So the option where you, like, kind of deliberately wait and let people, like, have some, uh, like, global conversation, I don't know. It seems like that is less worrying even if the worry is still there. I should also say, I can im- imagine the outcome is not unanimity. In fact, it'd be like pretty wild if it was, right? But you want the outcome to be some kind of like stable, friendly disagreement where now we can kind of like maybe reach some kind of Coasian solution and we just like go and do our own things, and there's like a bunch of projects which kind of go off at once. Um, I don't know. That feels like really great to me compared to whoever gets there first determining how things turn out. But yeah, I mean, it's, it's hard to talk about that stuff, right? Because it's like somewhat speculative, but, um, I think it's just like a useful like north star or something to try pointing towards.

    13. DP

      Um, uh, okay. So may- maybe... But to make it more concrete, I, I wonder about your, uh, expectation that the, uh, the consensus view, uh, would be better than the first mover view. Uh, let's... In, like, today's world, okay, either we have the form of government, uh, and... not just government but also the, the industrial and logistical organization that, I don't know, like, Elon Musk has designed for Mars, if he's the first mover for Mars. Would you prefer that, or that we have the UN, uh, come to a consensus between all the different countries, uh, about, like, how the first Mars colony should be organized? Would the Mars colony run better if, after, like, 10, 20 years of that, they are the ones who decide how the first Mars colony goes? Is the global consensus view going to be better than the first mover view?
