Dwarkesh Podcast: Fin Moorhouse - Longtermism, Space, & Entrepreneurship
At a glance
WHAT IT’S REALLY ABOUT
Longtermism, criticism, and space: Fin Moorhouse on fixing the future
- Fin Moorhouse discusses his path into effective altruism (EA), the importance of being radically open to criticism, and new prize initiatives that incentivize both EA-aligned writing and robust critiques of the movement.
- He and Dwarkesh explore longtermism, existential risk, and whether we should expect major new moral concepts, touching on AI risk, many‑worlds quantum mechanics, and the “long reflection” idea before humanity locks in irreversible choices.
- They debate the relative impact of for‑profit entrepreneurship versus nonprofit work, how to find and cultivate talent for high‑impact careers, and what EA might be over‑ or under‑emphasizing as it grows.
- The conversation finishes with space governance and far‑future scenarios—von Neumann probes, “grabby aliens,” and whether early norms or constitutions could realistically shape humanity’s expansion into the cosmos.
IDEAS WORTH REMEMBERING
5 ideas
Build anti‑fragility to criticism into movements and projects.
Moorhouse argues EA should proactively reward high‑quality criticism and publicly celebrate course‑corrections and shutdowns of failing projects, treating them as successes in epistemic hygiene rather than embarrassments.
Use prizes and “red‑teaming” to substitute for weak nonprofit feedback loops.
Unlike startups, nonprofits often lack clear market signals; targeted criticism prizes and charity evaluators can mimic that selection pressure, surfacing flaws in strategy or worldview before large resources are deployed.
Don’t overrate or underrate for‑profit entrepreneurship as a path to impact.
For‑profits can be extremely beneficial (e.g., Wave, Amazon’s marketplace effects) and can fund later direct altruism, but the highest‑leverage opportunities often lie where markets fail—future generations, animals, and public goods R&D.
Act on high‑EV causes without demanding certainty, but respect the risk of locking in errors.
They contrast “maximizing expected value” with “minimax regret” and note that when acting at the margin of a large philanthropic pool, it’s often rational to take higher‑variance bets—while still recognizing we may be morally and empirically confused.
Invest in finding and empowering unusually high‑potential people early.
EA underuses systematic talent search: building hubs, fellowships, and “lamplight” signals (like niche blogs or podcasts) can attract ambitious, aligned people—especially students—and connect them with high‑impact opportunities.
WORDS WORTH SAVING
5 quotes
Having this kind of property of being anti‑fragile with respect to being wrong, like really celebrating and endorsing changing your mind in a kind of loud and public way, that seems really important.
— Fin Moorhouse
Where there is a for‑profit opportunity, you should just expect people to kind of take it. That’s why we don’t see $20 bills lying on the sidewalk.
— Fin Moorhouse
If you have, like, $10 million in the bank and you make another $10 million, does your life get twice as good? Obviously not… If, on the other hand, you just care about making the world go well, then the world’s an extremely big place and you basically don’t run into these diminishing returns at all.
— Fin Moorhouse
It’s super surprising to me that a movement like longtermism… took thousands of years of philosophy before somebody had the idea that, ‘Oh, the future could be really big, therefore the future matters a lot.’
— Dwarkesh Patel
If you think you have a 1% chance of influencing $100 million of philanthropic spending, then there is some sense in which a donor might be willing to spend roughly 1% of that amount to find out that information.
— Fin Moorhouse
High quality AI-generated summary created from speaker-labeled transcript.