Fin Moorhouse - Longtermism, Space, & Entrepreneurship

Dwarkesh Podcast · Jul 27, 2022 · 2h 20m

Fin Moorhouse (guest), Dwarkesh Patel (host)

Effective altruism: origins, community dynamics, and personal entry paths
Criticism of EA and prize programs for red‑teaming and idea generation
Longtermism, existential risk, and the “long reflection” concept
For‑profit entrepreneurship versus nonprofit routes to impact
Decision theory, many‑worlds interpretation, and anthropic reasoning
Talent identification, youth outreach, and community building in EA
Space governance, von Neumann probes, and long‑run cosmic futures
Podcasting as a medium for spreading complex, under‑shared ideas

In this episode of the Dwarkesh Podcast, Fin Moorhouse joins Dwarkesh Patel to discuss longtermism, criticism of effective altruism, and humanity’s future in space.

Longtermism, criticism, and space: Fin Moorhouse on fixing the future

Fin Moorhouse discusses his path into effective altruism (EA), the importance of being radically open to criticism, and new prize initiatives to incentivize both EA-aligned writing and robust critiques of the movement.

He and Dwarkesh explore longtermism, existential risk, and whether we should expect major new moral concepts, touching on AI risk, many‑worlds quantum mechanics, and the “long reflection” idea before humanity locks in irreversible choices.

They debate the relative impact of for‑profit entrepreneurship versus nonprofit work, how to find and cultivate talent for high‑impact careers, and what EA might be over‑ or under‑emphasizing as it grows.

The conversation finishes with space governance and far‑future scenarios—von Neumann probes, “grabby aliens,” and whether early norms or constitutions could realistically shape humanity’s expansion into the cosmos.

Key Takeaways

Build anti‑fragility to criticism into movements and projects.

Moorhouse argues EA should proactively reward high‑quality criticism and publicly celebrate course‑corrections and shutdowns of failing projects, treating them as successes in epistemic hygiene rather than embarrassments.

Use prizes and “red‑teaming” to substitute for weak nonprofit feedback loops.

Unlike startups, nonprofits often lack clear market signals; targeted criticism prizes and charity evaluators can mimic selection pressure, surfacing flaws in strategy or worldview before large resources are deployed.

Don’t overrate or underrate for‑profit entrepreneurship as a path to impact.

For‑profits can be extremely beneficial (e.g., by building useful products at scale), but efficient markets mean clearly profitable opportunities tend to get taken anyway, so the counterfactual impact of one more founder is often smaller than it looks and neither path dominates by default.

Act on high‑EV causes without demanding certainty, but respect the risk of locking in errors.

They contrast “maximizing expected value” with “minimax regret” and note that when acting at the margin of a large philanthropic pool, it’s often rational to take higher‑variance bets—while still recognizing we may be morally and empirically confused.
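To see how these two decision rules can come apart, here is a toy illustration (the numbers are invented for this example, not taken from the episode). Suppose a safe grant produces 10 units of good in every scenario, while a risky bet produces 15 if it succeeds (90% likely) and 0 if it fails:

$$
\mathbb{E}[\text{safe}] = 10, \qquad \mathbb{E}[\text{risky}] = 0.9 \times 15 + 0.1 \times 0 = 13.5
$$

$$
\text{max regret}(\text{safe}) = 15 - 10 = 5, \qquad \text{max regret}(\text{risky}) = 10 - 0 = 10
$$

Expected value favors the risky bet; minimax regret favors the safe grant. Acting at the margin of a large philanthropic pool, where many such bets can be diversified across, is what makes the higher‑variance rule attractive.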

Invest in finding and empowering unusually high‑potential people early.

EA underuses systematic talent search: building hubs, fellowships, and “lamplight” signals (like niche blogs or podcasts) can attract ambitious, aligned people—especially students—and connect them with high‑impact opportunities.

Treat longtermism as a reason to buy time and wisdom before irreversible moves.

The “long reflection” is framed less as a global panopticon and more as a directional ideal: slow down transformative, hard‑to‑reverse projects (like aggressive space expansion) until we’ve thought much more clearly about what futures we actually endorse.

Recognize that space expansion doesn’t automatically solve existential risk.

Backup colonies help only against independent risks; threats like unaligned AGI or engineered pandemics are highly correlated across locations, so terrestrial coordination and alignment still dominate extinction risk reduction.

Notable Quotes

Having this kind of property of being anti‑fragile with respect to being wrong, like really celebrating and endorsing changing your mind in a kind of loud and public way, that seems really important.

Fin Moorhouse

Where there is a for‑profit opportunity, you should just expect people to kind of take it. That’s why we don’t see $20 bills lying on the sidewalk.

Fin Moorhouse

If you have, like, $10 million in the bank and you make another $10 million, does your life get twice as good? Obviously not… If, on the other hand, you just care about making the world go well, then the world’s an extremely big place and you basically don’t run into these diminishing returns at all.

Fin Moorhouse

It’s super surprising to me that a movement like longtermism… took thousands of years of philosophy before somebody had the idea that, ‘Oh, the future could be really big, therefore the future matters a lot.’

Dwarkesh Patel

If you think you have a 1% chance of influencing $100 million of philanthropic spending, then there is some sense in which a donor might be willing to spend roughly 1% of that amount to find out that information.

Fin Moorhouse
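The arithmetic behind this value‑of‑information claim, using the figures from the quote (a 1% chance of influencing a $100 million pool):

$$
p \times V = 0.01 \times \$100{,}000{,}000 = \$1{,}000{,}000
$$

So spending up to roughly $1 million to resolve the question can be worthwhile. This is a rough upper bound, since it ignores how much better the redirected spending actually turns out to be.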

Questions Answered in This Episode

How should effective altruists balance deference to existing EA ideas with openness to completely new moral concepts or cause areas that might eclipse current priorities?

If space colonization can’t reliably hedge against AI or bio‑risks, does longtermist strategy need to shift more heavily toward alignment and global governance than it currently does?

What concrete institutional designs could make a “long reflection” period even partially realistic without sliding into global authoritarianism?

Are we currently over‑investing in narrow, speculative interventions (e.g., specific AI policies) relative to broader “common sense” improvements like growth, state capacity, and liberal institutions?

In practice, how should an ambitious 20‑year‑old decide between founding a startup, pursuing an EA‑aligned research path, or doing something completely orthogonal to the EA ecosystem?

Transcript Preview

Fin Moorhouse

... the scope for, like, what we could achieve is, like, really extraordinarily large (laughs). Like, maybe kind of larger than most people kind of typically entertain. And having this kind of property of being, like, anti-fragile with respect to being wrong, like, really celebrating and endorsing changing your mind in a kind of loud and public way. But you can also do a thing which is try to make a lot of money and just, you know, make a useful product and then use the success of that first thing to then just think squarely, like, "How do I just do the most good?" So you have, like, $10 million in the bank and you make another $10 million, does your life get twice as good? (laughs) Obviously not, right? If, on the other hand, you just care about, like, making the world go well, (laughs) then the world's an extremely big place and so you basically don't run into these diminishing returns, like, at all. There's this question of, like, if the many worlds view is true, what, if anything, could that mean with respect to questions about, like, what should we do or what's important?

Dwarkesh Patel

(instrumental music) Today I have the pleasure of interviewing Fin Moorhouse, who is a research scholar at the Oxford University's Future of Humanity Institute and he's also an assistant to Toby Ord and also the host of the Hear This Idea podcast. Um, Fin, I know you've got a ton of other projects under your belt, so do, do you wanna talk about all the different things you're working on and how you got into EA and this kind of research?

Fin Moorhouse

I think you nailed the broad strokes there. I think, yeah, I've kind of failed to specialize in any particular thing and so I found myself just dabbling in projects that seem interesting to me, trying to help get some projects off the ground and just doing research on, you know, things which seem maybe underrated. I probably won't bore you with the, the list of things. And then, yeah, how'd I get into EA? Actually also a fairly boring story, unfortunately. I really loved philosophy. I really loved kind of pestering people by asking them all these questions, you know, "Why are you not... Why are you still eating meat?" Read kind of Peter Singer and Will MacAskill and I realized I just wasn't actually living these things a- these things out myself. I think there's some just, like, force of consistency (laughs) that pushed me into really getting involved. And I think the second piece was just the people. Um, I was lucky enough to have this student group where I went to university and I think there's some dynamic of realizing that this isn't just a kind of free floating set of ideas, but there's also just, like, a community of people I re- really get on with and have all these, like, incredibly interesting kind of personalities and interests, so, um, th- those two things, I think.
