
Why Is No One Talking About Existential Risk? | Mara Cortona | Modern Wisdom Podcast 229
Mara Cortona (guest), Chris Williamson (host)
Humanity on a Knife Edge: Technology, X-Risk, and Our Responsibility
Chris Williamson and Mara Cortona discuss existential risk (species-level threats) versus global catastrophic risks (mass death and severe suffering without full extinction), arguing that human-made threats now vastly outweigh natural ones.
They explore why society responds strongly to some risks (like climate change) but largely ignores others (engineered pandemics, misaligned AI), emphasizing broken communication systems, politicization, and our evolved inability to think beyond small social circles and short time horizons.
Cortona argues that technological development, driven and governed by a small technological and policy elite, will matter far more than individual consumer behavior, and that we need a rigorous rethinking of morality around real-world impact and effectiveness.
They conclude that individuals should focus on deep self-examination, consciously designed lives, and aligning personal ambitions with the long-term well-being of the “macro-organism” of humanity, while pressing those with disproportionate power—particularly in tech, policy, and space—to act responsibly.
Key Takeaways
Differentiate existential risks from catastrophic but non-extinction scenarios.
Existential risks threaten the permanent destruction or irreversible crippling of human potential, while global catastrophic risks involve mass death and drastic quality-of-life collapse without necessarily ending the species. ...
Human-made risks now dominate over natural background threats.
Asteroids, supervolcanoes, and natural pandemics exist but have low annual probability; our own technologies—climate change, engineered pathogens, nuclear war, misaligned AI, nanotech—have rapidly amplified risk, making this century unusually dangerous compared with our species’ past.
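To put "low annual probability" in perspective, here is a minimal back-of-the-envelope sketch (our illustration, not a calculation from the episode; the 0.01% annual rate is an assumed placeholder). Even compounded across a full century, a small natural background risk stays under one percent, which is why newly created anthropogenic risks can so quickly come to dominate it.

```python
# Hypothetical illustration: how a small annual probability of a natural
# catastrophe compounds over a century. The 0.01% annual rate is an
# assumed placeholder, not a figure cited in the episode.
annual_risk = 0.0001  # assumed 0.01% chance per year

# Chance of at least one event in 100 years = 1 - chance of zero events.
century_risk = 1 - (1 - annual_risk) ** 100

print(f"annual risk:  {annual_risk:.4%}")   # 0.0100%
print(f"century risk: {century_risk:.4%}")  # ~0.9950%
```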
Our evolved psychology is poorly suited to civilization-scale problems.
We are wired for tribes of ~150 people (Dunbar number), concrete stories, and short-term incentives, not for abstract trillions of future lives or century-scale risk models. ...
Relying on mass persuasion alone is unrealistic; structural and technological solutions are crucial.
Cortona argues we are unlikely to “sway the masses” into the right behaviors at scale—especially when many are in survival mode or misinformed. ...
We need a new, impact-focused moral framework.
Traditional intuitions about goodness focus on visible, local help and warm feedback loops. ...
Global risk governance is fragmented; specialized institutions are needed.
Examples from space and “astropolitics” reveal a commons that is under-regulated and politicized, mirroring wider X-risk governance gaps. ...
Personal self-development should be tied to systemic impact.
Both guests emphasize living a consciously designed life: rigorously examining your motives, choices, and career for their real-world consequences; aligning personal drives (status, sex, money) with prosocial, long-term goals; and recognizing that those with more leverage (e.g., in tech, policy, and space) bear proportionally more responsibility. ...
Notable Quotes
“We’re at the point where natural background risks are far outweighed by the anthropic risks precipitated and accelerated by our own activity.”
— Mara Cortona
“I’d say the number one existential risk is really communication. It underlies our responses to all of the others.”
— Mara Cortona
“We are at the perfect junction between having enough power to severely neuter our future and nowhere near enough wisdom to corral that power.”
— Chris Williamson
“It’s not going to be something where we sway the masses. With virtually every existential risk, it’s going to come down to the actions of a few people in power.”
— Mara Cortona
“It is your duty to be everything that you can… a positive pathogen that spreads to the people around you.”
— Chris Williamson
Questions Answered in This Episode
If our psychology is fundamentally misaligned with long-term, abstract risks, what concrete educational or institutional changes could better align our behavior with civilization-scale threats?
How can we hold the “technological elite” and policymakers accountable for existential-risk decisions when their power and incentives are so concentrated and opaque?
What would a genuinely impact-focused morality look like in everyday life, and how should it reshape personal career choices and philanthropy?
Given that some thinkers question whether the future of humanity is even worth safeguarding due to suffering, how should we ethically weigh unknown future happiness versus unknown future suffering?
What would a credible, well-designed global agency for existential risk governance actually do, and how could it gain legitimacy and authority across competing nations and corporations?
Transcript Preview
The COVID-19 pandemic, the level of uncertainty about it shouldn't have been there that seemed to have been there. I mean, we know that pandemics happen. We know that pandemics at this scale happen. You know, it, it's, it's shocking to us 'cause we haven't experienced something like this in our lifetimes, but we all knew. It w- it was modeled, like, that something like this was going to happen. And we were completely unprepared globally. And not only were we unprepared globally, but the fact that so many people in the world are just caught up in these concerns about whether it's a hoax or these conspiracies. I mean, it, it's really very simple measures that need to be taken to control the spread of this pandemic that we all knew was coming. It's very clear that our communication and collaboration systems are very broken and things are heavily politicized.
I am joined by Mara Cortona. Mara, welcome to the show.
Thank you.
Pleasure to have you on. So, what are we, what are we gonna be talking-
Thanks a lot.
... about today?
Um, I'd love to talk about existential risk, um, and the way that we relate to it as individuals and, and as societies.
Positive conversation for us today, then. Everyone feeling good leaving this (laughs) leaving, leaving this in fear of the world stopping?
Okay. It's something that really troubles me that people aren't concerned about more than they are, both existential risk and global catastrophic risks, um, to the point that it is something that I tend to bring up in casual conversation, which makes me very fun at parties, um, but it is a thing that I, I really think we should be talking about more. So, here we are.
What do we need to know about to start then? What's the, the glossary of words and key terms or whatever it is that we need to be aware of before we can begin?
Sure. Um, well, I, I, initially, I think I'd like to draw a distinction between ex-risk, or existential risk, which is, um, a risk to the entire species as we know it, and, uh, global catastrophic risks, which, um, perhaps wouldn't cause the en- entire extinction of our species, um, but would lead to mass die-offs and a really low quality of life. And those are, um, those types of risks are both more likely to happen and more likely to happen sooner than the major types of ex- ex-risks that are frequently modeled and talked about. So, those are important things to discuss. Um, some of the main, some of the most critical and pressing forms of ex-risk, obviously climate change is the one on everyone's mind, um, though engineered pandemics, um, bio-weapons, and, um, nuclear war are, are right up there. So, it's really all, they're really all anthropo, um, excuse me, anthropic, um, risks. And those are distinct from sort of this background rate of existential risk that's always there, um, from, like, asteroid collisions or, um, perhaps natural pandemics, or, um, super volcanoes or the like. There's always this background risk of those happening, which is fairly low, um, as we have been on this planet to, in some degree for, um, you know, 2,000 centuries and we, we haven't come across anything like that yet. So, we're at the point where, um, those natural background risks are far outweighed by the anthropic risks that, um, are being precipitated and accelerated by our own activity. So, those are some of the main, um, the main terms that I use.