How To Avoid Destroying Humanity - Rob Reid | Modern Wisdom Podcast 346

Rob Reid is an entrepreneur, podcaster and an author. The last 15 months have been a terrifying taster of just what a global crisis is like, except it wasn't lethal enough to be a threat to our long term survival - but just because this one wasn't, doesn't mean that more deadly existential risks aren't out there.

Expect to learn how synthetic biology might be the biggest risk to our survival, what we should have learned from 2020, whether Artificial General Intelligence is an immediate threat, Rob's opinion on my solution for saving civilisation, whether we should totally stop all technological development, if synbio is preventable, how we can avoid civilisation’s destruction through nuclear bombs and much more...

Sponsors:
Get 20% discount on all pillows at https://thehybridpillow.com (use code: MW20)
Get perfect teeth 70% cheaper than other invisible aligners from DW Aligners at http://dwaligners.co.uk/modernwisdom

Extra Stuff:
Check out Rob's Podcast - https://after-on.com
Follow Rob on Twitter - https://twitter.com/rob_reid?lang=en
Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

#existentialrisk #syntheticbiology #pandemic

00:00 Intro
01:19 The Thrill of Existential Risk
07:00 Why is Climate Change our Focus?
10:19 Humanity’s Close Calls
20:31 Democratising the Apocalypse
30:46 The Threat of Covid-19 and Pandemics
54:00 Is the Research Worth the Risk?
1:02:00 Would Moon Labs Reduce Risk?
1:08:18 Helpful Lessons from Covid-19
1:16:45 What if China Leaked Covid-19?
1:22:15 How to Prevent Destroying Humanity
1:37:47 Creating Silo Communities
1:46:54 Lesser-known Existential Risks
1:51:17 Making Existential Risk Sexier
2:01:50 How Can Individuals Help?
2:08:07 What’s Next for Rob?

Listen to all episodes online. Search "Modern Wisdom" on any Podcast App or click here:
Apple Podcasts: https://apple.co/2MNqIgw
Spotify: https://spoti.fi/2LSimPn
Stitcher: https://www.stitcher.com/podcast/modern-wisdom

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

Rob Reid (guest) · Chris Williamson (host)
Jul 15, 2021 · 2h 11m

EVERY SPOKEN WORD

  1. 0:00–1:19

    Intro

    1. RR

      We ignored the warning shots of SARS and MERS and Zika and a whole bunch of other things. COVID is a very, very difficult warning shot to miss. The whole world has been traumatized by this. There will be much greater seriousness applied to pandemic resistance in the future. The question is, will it be adequate attention, and will it be sustained attention, and will it be intelligent attention?

    2. CW

      So, I read something on Reddit the other day that I want to dictate to you here.

    3. RR

      Mm-hmm.

    4. CW

      "The decision to use CFCs, chlorofluorocarbons, instead of-"

    5. RR

      Yep.

    6. CW

      "... BFCs, bromofluorocarbons, was pretty much arbitrary. Had we decided to use BFCs, the ozone layer probably would have been totally destroyed before we even knew what was happening, killing all life."

    7. RR

      No way.

    8. CW

      "BFCs destroy the ozone at over 100 times the rate of CFCs."

    9. RR

      That's amazing. I never heard that before.

    10. CW

      How sick is that?

    11. RR

      I mean, and CFCs were scary, but, um, obviously they moved slowly enough that we were more or less able to fix the problem before we were all dead.

    12. CW

      (laughs) Someone replied and said, "Maybe that was the great filter that all the other civilizations just-"

    13. RR

      (laughs)

    14. CW

      "... chose the wrong coolant medium."

  2. 1:19–7:00

    The Thrill of Existential Risk

    1. CW

    2. RR

      That's funny.

    3. CW

      So, today we're going to be talking about existential risk. My favorite terrifying topic, and also your, one of your areas of expertise.

    4. RR

      Mm-hmm. Definitely. And it's amazing how seductive the topic is to a lot of us. It's, it's like we can't take our eyes away from it. We get fascinated, like what you just read to me. It, this is, it probably says something bad about me psychologically, but my main reaction was like, "How cool."

    5. CW

      (laughs)

    6. RR

      I mean, obviously we dodged the bullet, so that's pretty nice, but like, wow, another existential risk that I didn't even know about. (laughs)

    7. CW

      What do you think it is about that? Because I have the same fascination.

    8. RR

      I, you know, maybe it's something that was, you know, drilled into us when our, you know, distant ancestors were growing up on the savanna. Maybe there's something about being fascinated by things that can annihilate one, oneself, um, that conferred some kind of survival ad- advantage. And, I'm just riffing here, and I'm just, I'm just gonna make this up, but, you know, particularly the head of the clan, the hunter-gatherer clan, whoever the boss was, you know, chieftain, whatever you want to call that person, um, really needed to think about what could kill us all. And the head of the clan probably was a man, and probably fathered far more children than people who were not head of the clan. And so, we all have a lot of head of clan DNA in us. I'm making this up as I'm going along, but I like that, I like that theory. So, we probably do, as a statement of fact, all have a lot of head of clan DNA, 'cause there were thousands of generations, and the heads of the clans were the people who probably had the most progeny, and the heads of the clans really did have to think about not just what could kill me, a saber-toothed tiger on a hunt or whatever, but what could wipe us all out. Really need to think about that. Um, and the successful ones continued to have progeny. So, that's my answer.

    9. CW

      It's like a, we've got the anxiety bias, right? That we're more scared of things than we are hopeful about things, but this is like a macro level version of that.

    10. RR

      Mm-hmm. Macro level anxiety bi- bias.

    11. CW

      Yeah.

    12. RR

      Exactly, yeah.

    13. CW

      We've got it. We've worked it out. Okay, so-

    14. RR

      All right, good.

    15. CW

      ... given the fact that me and you are obsessed with it, and a, a ton of people that are listening will be as well-

    16. RR

      Mm-hmm.

    17. CW

      ... why do you think we're so blind to how close we can come to total civilizational destruction? Generally. It's not at the forefront of what we're talking about every day as much as me and you might wish that it was.

    18. RR

      Well, I think it's because it's so new. There is really no plausible step that I can think of that humanity could have taken before, let's say, the mid 1950s to wipe everybody out. And at that point, it was one thing. So, after Hiroshima and Nagasaki, there's one nuclear power, the United States. It had precisely two bombs. It used them both, so there's no way to destroy the, the Earth, right? Then along comes the hydrogen bomb, and there are very few of them, and only the US has them, and then the Soviet Union gets them. And then all of a sudden there's this insane push to put them on long-range bombers, missiles and so forth. This is probably late '50s by the time H-bombs were proliferate enough and, you know, two sides had enough capability that, that truly wiping out society became a problem. So, as a, you know, quarter million year old species, we've been facing this for 60 years. So, it's probab- it's e- even though what I said about the clan notwithstanding, to put it on a global level, that's a pretty new development. And I would also say that the, the attention that's given to, the careful attention, academic attention, serious thought in industry attention, governmental attention is far less of what it should be, but nonetheless, the amount of attention that is given to existential risk today, to me, feels like it's 10 to 30 or even more times what we, what we gave to it, uh, let's say 15 years ago. I mean, 15 years ago, I don't think people like you and I even knew the term existential risk. So, I think we're, we're developing that muscle pretty rapidly at this point, and that's a good thing, and hopefully it's not too late.

    19. CW

      That's your hopeful optimism, your unbeatable optimism coming through there.

    20. RR

      Yeah, I'm, I'm pathologically optimistic sometimes.

    21. CW

      (laughs) How much of it do you think could be a hubris as well? You know, by definition, we haven't destroyed ourselves yet, therefore we're probably fine at surviving any future destruction potentials also.

    22. RR

      Yeah. I mean, the response to that is like, "Atta boy, atta girl, it's been 60 years." Um, so you've dodged numerous bullets in 60 years. Uh, one or two maybe sort of by design and quite a few more by accident, and so do you want humanity to last another 60 years or do you want it to last another quarter million years? And if the answer is you've been dodge- you dodged one bullet for th- uh, 40 years (laughs) and you've dodged more than one bullet for maybe 20 years, um, is that the kind of track record that gives you confidence that the civilization or the species is going to survive another quarter million years? That's absurd. That's like saying, w- I wish we could do the proportions and back of the envelope math could probably reveal it, but that's kind of like saying one to two seconds into what Americans call a soccer game and the rest of the world calls a football game, "We haven't given up any goals, so we're fine."
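
      [Editor's note: the back-of-the-envelope proportion Rob gestures at does check out. A minimal sketch using the figures he gives (a roughly 250,000-year-old species, ~60 years of existential risk, a 90-minute match); the code is illustrative, not from the episode:]

      ```python
      # Map 60 years of existential risk, out of ~250,000 years of
      # Homo sapiens, onto the 5,400 seconds of a football match.
      species_years = 250_000
      risk_years = 60
      match_seconds = 90 * 60

      elapsed = risk_years / species_years * match_seconds
      print(f"{elapsed:.1f} seconds into the match")  # ~1.3 seconds
      ```

      So "one to two seconds into the game" is about right.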

    23. CW

      Yeah. Yeah.

  3. 7:00–10:19

    Why is Climate Change our Focus?

    1. CW

      Yep. Yeah.

    2. RR

      Um, yeah.

    3. CW

      Given the fact that existential risks generally don't have a ton of global p- sort of attention paid to them, why do you think climate change is given so much attention when there's more imminent threats that aren't even really in the conversation?

    4. RR

      Well, I think the attention to climate change, first of all, um, developed in a compounding way over a longer period of time. And so the Whole Earth catalog with a picture of the Earth on it, the first Earth Day, which I think was 1970, et cetera, um, you know, that's a, a great deal more time. And I think these things, you know, like successful investments, when a school of thought really, really plants its roots and takes off, it's like compounding returns, you know? It's like, so the number of people who are environmentally aware in 1971 was probably pretty small, um, but th- it was a, it was a meme that the world was ready to hear and it had a lot of committed people all from the very beginning and that meme spread and it resonated and more work was done and that spread, and it was like an investment that compounds at 20% per annum. Like, wow, 20% per annum, 10 years in, you know, you think you're in Fat City, but holy cow, 50 years in, it's ginormous. And so I think it's p- a lot of it is the fact that that compounding awareness has had, you know, more years to grow exponentially. And then the other thing is, you get to a certain point in any of these fields and you start developing very significant industries and economic interests around them. And so now there is a th- a very, very large number of people who are making their living off of protecting us from climate change, whether they're making electric cars, whether they're academics who specialize in climate models, whether they're politicians who fired up, you know, their base and got elected in part on that message, there is a very, very large interest group that is- believes, I- I'm not saying it's cynical, but believes in this, but is full-time committed to making this stuff work. And the number of people who are currently full-time committed to preventing existential risk is probably a minuscule handful of academics, but that is a hell of a lot more brain power, persuasion power, intellectual output, et cetera, than we had 15 years ago. So it's just starting.
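
      [Editor's note: the compounding claim is easy to verify. A minimal sketch with the 20% per annum figure Rob uses; the numbers are illustrative:]

      ```python
      # 20% annual compounding looks modest at 10 years, enormous at 50.
      rate = 1.20
      print(f"After 10 years: {rate**10:.1f}x growth")   # ~6.2x
      print(f"After 50 years: {rate**50:,.0f}x growth")  # ~9,100x
      ```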

    5. CW

      You think that first mover advantage for climate change, in that case, in 20 years time then are we gonna have the Church of Nick Bostrom and everyone's gonna be praying to the control problem and looking at nanotechnology and gray goo?

    6. RR

      One hopes.

    7. CW

      (laughs)

    8. RR

      Uh, I don't know if Nick or I want there to be a church dedicated to him. We could talk about that-

    9. CW

      (laughs)

    10. RR

      ... later in the, in the conversation, if you wish, but, um, yeah, I think, I think spreading and compounding awareness of this is the only thing that will, will protect us, because the policymakers will be the last ones to the party, but if this becomes something that, you know, a million people are really interested in, aware of and informed about, 10 million, 100 million, et cetera, um, i- that's, you know, it compounding out and eventually seeping into government, that's ultimately the only way

  4. 10:19–20:31

    Humanity’s Close Calls

    1. RR

      we can dodge these bullets.

    2. CW

      You mentioned that we'd had a couple of close calls over the last 50 years. Can you take us through a few of those stories?

    3. RR

      Yeah. I mean, I think, you know, starting, um, with, well, first of all, the BFC thing, that's a new one. And if that Redditor was correct in what they said, and I'd love it if you could send me that because-

    4. CW

      I will.

    5. RR

      ... I want to dig into that. It's, it is intriguing and frightening. Um, there's one right there. I mean, the ones that are most chilling to me because I grew up during the Cold War, um, are the nuclear ones. And, you know, there, there are a couple of particularly famous incidents, one during the Cuban Missile Crisis in which there were, uh, nuclear armed subs, Russian subs patrolling the area outside of Cuba as the American blockade settled in, and one of the American boats was sending... It's funny, they knew the subs were down there, they didn't want to escalate, um, so instead of sending depth charges, they were sending something like practice depth charges or something weird like that, but they were dropping these depth charges and really menacing these submarines, and there were a lot of American boats, like, half the fleet was in a pretty compact area at this point. And I think there were four Russian subs in the, in the submarine fleet that were down there, and these depth charges start coming down. And, um, on one of the subs, th- they, there was a decision to nuke the American fleet because they had these nuclear torpedoes, and if they sent them up, it basically would have wiped out the, the American fleet. And i- in most submarines, um, three of the four, I believe, it required two people to say, "Yes, let's do it." In order for it to be done. Now you're underwater, you've got depth charges, you're not on the phone to Khrushchev. Like, these are people who are fully empowered to start a nuclear war.... and the stop-gap measure to prevent that was not one person, but two people have to say, "We're gonna do it." On this one particular sub where, for whatever reason, I wish I knew the details better, the decision was being taken, there was a third man. And I'm saying man because I'm sure it was all men on the Russian submarine crew in the 1960s. There was a third man who got a vote, because he was, like, the party cadre or something. Like, he was, he was like the head of the, you know, the Communist Party as opposed to the merely the military, you know, delegation of the submarine fleet. And he said no. And had that third person not been on the submarine, or had that third person said yes, the next step would have been, uh, for the Russian submarines to fire a tactical nuke and basically eradicate that part of the American fleet. Now a nuclear weapon has been used during the Cuban missile crisis. And that, you know, it's very, it's almost difficult to imagine how that would not have escalated to doomsday. So, holy cow. And then the other famous incident was the guy, I've, I've seen a movie about him, I've heard interviews with him, so I should remember his name. Do you remember his name, the guy in, on the Russian side who saw... Yeah. So, uh, another story in the States to, um, I, I wanna say the '80s, um, probably early '80s, there was, um, a, a, new- uh, basically the, a Russian equivalent of NORAD. So basically, um, the operation where they detected incoming American bombers and missiles and so forth, and they saw American missiles coming over. And they w- it was like, "Oh my God, it's Armageddon has started." It was only one missile, or two, or something weird like that. And so it basically went up to the person who was in charge of that facility, whose name is escaping me, and his instinct told him, "This is not the start of doomsday. They wouldn't just send one missile." And then more missiles started showing up, but it was not an all-out attack. 
And this guy was in contact with his superiors in Moscow and they said, "Launch everything." And he said no. And his instinct was screaming at him that this is some kind of, "Something's gone wrong with our systems. They're not starting... They're not gonna send two or three or a handful of missiles over here, trigger an all-out response. If they were, if they were taking a first strike, they'd be sending everything they had." So he stood his ground and did not launch. And had there been somebody different, almost probably anybody in that role, because the job, your job was to say yes to Moscow. Your job wasn't to think. Uh, it w- you know, uh, so that was an unbelievably close call. There was another less close call that I think a little bit more is made of than should be, where at NORAD on the American side, there was a first strike, a pretty full, full-blooded first strike was detected and it turned out to be a test sequence. And there was actually a journalist in NORAD when that happened. So there are, there are a lot of these things that we got through by the skin of our teeth, and these things could still happen. I mean, there's still an inordinate amount of ordnance, sorry for the pun, on the Russian and American side, and increasingly on the Chinese side. And one hopes that our, our safeguards and our software and our, our protocols have improved since the '80s, but I don't know that they have. And, you know, that risk still sits out there. Now, I'm focusing on nuclear because those are the, I think those really are the bullets that we've dodged so far. Um, we're just getting into the rain- the area in which synbio could take us out. And particularly with a rogue actor is a scenario that, as you know, I worry about particularly. Um, although bio- you know, bioterror or bioerror, both of them are very, very, very dangerous. But we're just getting to the point where synthetic biology could take us out. And super AI is not there yet. Um, and I f- I feel like that's at least a handful of decades out. Um, and nano is definitely not there yet. But w- we are going to have a, an increasing number of these, of these risks facing us. And the real danger is the proliferation of the ability to hit, we'll just call it euphemistically the flashing red button, the hypothetical, probably nonexistent flashing red button that we imagine, you know, Mr. Biden and Mr. Putin and Mr. Xi have at all times available to them to destroy the world. Um, when you look at the Cold War, let's think of this. We spent trillions of dollars preventing two people, um, obviously oversimplifying, but preventing two people from hitting that flashing red button. What did we spend it on? Well, we spent it on all those detec- detection systems, but we also spent it on enormous conventional armies to deter, you know, you know, small acts that could snowball into large, large conflagrations. Um, we spent money on regional wars to prod each other and test each other and to show resolve and, you know, to hold each other at bay, the diplomatic apparatus. All these things were in place to, to stop two people from hitting that button. And those were two people who were highly inclined not to hit that button. And obviously it was more than two, 'cause we just talked about scenarios in which people down in the c- chain of command had that power. But we spent a lot of money making sure that a very small handful of people didn't hit that button. And so far we've succeeded. It came terrifyingly close, but so far we've succeeded.
The danger with things like synbio and super AI is that that decision not to do something unbelievably dangerous or even something deliberately destructive, i- is suddenly going to be in the hands of thousands of people perhaps. In the case of synbio, I believe it will be thousands of people, um, and probably pretty soon. In the case of super AI, it's probably gonna be a smaller group of people who don't take the right, you know, precautions t- about not letting the genie out of the bottle. But with synbio, I'll focus on that for a moment, the tools and the methodologies are improving so rapidly.... that the things that only the most brilliant academic synthetic biologists at the pinnacle of, you know, laboratory budget, equipment, know-how, et cetera, things that would elude that person, be impossible for that person will be, today, will be child's play in a high school bio lab in quite a bit less than 20 years because this is an exponential technology. And that's the frightening thing. And all the wisdom and complexity and the Nobel-worthy work that is done by prior generations will, starts getting embodied in simpler and simpler and more and more common and cheaper and cheaper tools. And so all of a sudden, all that wisdom and genius that eludes most of us is embodied, like, you know, we're talking on laptops. How much Nobel-worthy work was done to create, you know, the computers that we're using to speak to one another over many, many generations? You and I are smugly sitting here with computing power that the most brilliant, you know, com- you know, comp- you know, computer engineer, electrical engineer could only dream of, you know, 25 years ago. And it's all embodied in this simple tool, and that goes into synbio tools in wet labs at lower and lower levels of academia, and in, you know, a higher and higher number of lower and lower budget companies. Uh, we are relying on an impossible number of people not to screw up or not to do something deliberately evil, and if one says, "Well, w- why would somebody ever do something deliberately evil with synbio after what we, we've just been through?" The answer is, "I don't know." I don't know what motivated the Columbine kids. I don't know what motivates the, you know, more than one mass shooter per year, per day that strikes in the United States. I don't know what motivates those people, but they're motivated, and they're killing everybody they can. They just don't have tools to wipe us all out. Uh, so anyway, that was probably a very long-winded answer, but we are gonna have to worry a great deal about the ability to do something catastrophic being in a lot of hands, rather than just two that we can

  5. 20:31–30:46

    Democratising the Apocalypse

    1. RR

      watch very closely.

    2. CW

      Is that what you call democratizing the apocalypse?

    3. RR

      Yes, that is exactly what I call democratizing the apocal- or privatizing. I call it, uh, both, privatizing the apoc- apocalypse and democratizing it. And privatizing, you know, just drives home the fact that saving the world from destruction or destroying the world is no longer a public good. You know, it's in private hands, and so it's a, a slightly playful and slightly perverse way of putting it. But that game of chicken that the superpowers played with each other during the Cold War was, quote unquote, "a public good." Um, for all of the terror that anybody who grew up under a nuclear threat... And by the way, that's everybody a- alive right now because that threat is still very present. We just don't feel it the way we did during the Cold War. The terror that anybody felt growing up with a nuclear threat, and it was billions of people who were to some degree, I'm sure, traumatized by that, is nothing compared to the horror that would have been inflicted by more and more conventional wars. So if conventional wars between the superpowers, you know, more Vietnams, more Korea wars, et cetera, uh, eventually, you know, followed by an enormous World War III smackdown in Europe, so imagine nuclear weapons were impossible, it's, it's probably likely... It, it, it's almo- I mean, it's highly likely, in my mind, looking at the rhythm of geopolitics stretching from, let's say, 1840 to 1940, if nukes had never been developed, um, it's hard to imagine we wouldn't have continued to butcher ourselves in our tens of millions on a highly regular-

    4. CW

      That's a really good point. Imagine-

    5. RR

      Yeah.

    6. CW

      ... just how much massacre there would have been if we didn't have this capacity to do wide-scale preventative destruction.

    7. RR

      Yeah, so, so I am sure that there would have been an all-out war between the Soviet bloc and the Western Bloc in Europe, probably in the '50s, probably in the '60s, without that, that threat.

    8. CW

      Yeah.

    9. RR

      And there probably would have been far more Vietnams and, and, and Koreas and stuff we can't even imagine. And so that game of chicken was, quote unquote, "a public good," and it was owned and operated by governments. And that most, you know, terrifying of decisions was concentrated in a tiny number of hands, and humanity did shoot those rapids. And we probably would have had a far, far, far, far, far more gory and traumatizing second half to the 20th century. And, and to this day, I bet we'd still be clobbering each other with conventional weapons, and now they're getting to the point that they're, they're terrifying with automated... You know, like, so, you know, public good. All of a sudden... And, and when that suddenly is in private hands, things change in frightening ways. So let's pivot over to super AI risk. Um, the, the parties that will be in the position at some point in the future if super AI is indeed a possibility, which I personally absolutely think it is, the parties who will be in the position to create the genie in the bottle, step one, and B, screw up and let the genie out of the bottle, um, or let's just go with the creation step, not maybe they don't know that they're inches away from creating the genie, those people are almost certain to be, uh, in some form of private company, in my mind, um, at least in the United States. Uh, China, we know less of what they're doing there, and there probably are government labs that are recruiting extraordinary talent.... and proceeding headlong down paths, um, that we can't necessarily proceed on. I mean, but the, n- today, the greatest talent in computing is not working for, you know, the United States government or any government. It's working for, you know, DeepMind, it's working for Google, it's working for startups that we're not aware of. And that means that the person or people who are in a position to say, "Ooh, that's really risky, huh? But, uh, it's kinda tempting," they have huge economic incentives to take what they might perceive to be a tiny risk and, you know, probably get away with the tiny risk, and, you know, as a result of that, be gazillionaires. You know, that, eh, economic incentive did not exist for anybody who felt like they were chancing it with nuclear Armageddon. Y- we don't have to worry about Putin saying, "Ooh, it's kinda risky. If I take this insane step and, like, invade the rest of the Ukraine from the Eastern Ukraine could lead to nuclear war, but, uh, I get an IPO and I'm rich if it doesn't..." You know, that, that incentive isn't there if it's a public good. And so, what I worry about is lots and lots of private actors taking what might be, like, "Uh, it's a sliver of 1% risk that the world ends, but that's not gonna happen," um, or probably won't happen. And if it doesn't happen, and let's face it, it probably won't, "Holy cow, here comes glory." And suddenly there's much, much, much more incentive for lots and lots of people to take tiny, tiny risks that could kill us all. And so that's why the privatization really, really worries me.

    10. CW

      It's because you've got privatized gains, but socialized losses.

    11. RR

      And socialized losses, exactly. Privatized, that's what, that was our economic crisis, right? The financial crisis for years, um, people in various positions in Wall Street, on the buy side, on the sell side, on the fund side, all kinds of things, were taking odious risks with the world economy and getting great returns because, you know, higher risk leads to higher returns in finance. And so they were inhaling money for themselves and putting them into super yachts and Picassos. And then when everything fell apart, the bill came due to us, all of us. The financial crisis, that bailout was, b- the cost of that was borne by taxpayers throughout the world. And so we see what happens when you have privatized gains and socialized losses. And people will take tiny risks on their own account all the time. I mean, if we want to be, you know, hairsplitting about it, we all take a tiny risk on our own account whenever we hop in a car. You know, it's like, "I really am hungry and I want to go and buy some popcorn," and, uh, they've got that great microwave popcorn down at the Safeway. Uh, "I'm famished, I love popcorn, I'm gonna go get it." You're not thinking, "I am putting my life on the line for fricking popcorn," but you are, and it's a tiny risk and you take it. Um, if you dial that up, there are people who get involved in extreme sports, uh, who take very, very significant risks in order to prove to themselves that they're great, in order to get public accolades, you know, in some cases, some extreme sports probably have nice tidy purses that can be made. You know, nothing like, you know, in professional basketball or football, but, you know, people on their own account will take tiny risks. And particularly, I think when you have somebody who doesn't have deep family ties, doesn't have children, um, you know, who is, you know, earlier in life or is more solitary in life or whatever it is, they, when they're facing that risk of like, "I could annihilate the world by mistake or make gazillions of dollars and the risks of annihilating the world are minuscule," their psychology... 'Cause again, we weren't trained on the Savannah to think, "If I screw up, all humans die." Their psychology is probably thinking very much in terms of like, "I'm taking a tiny risk here." They're probably thinking about their own risk of annihilation. That probably is at least half of the calculus in their mind because we're all individuals and we don't want to die. If they say, "God, that's minuscule," they might take the risk of a daredevil. You know, that a daredevil might take, an extreme sports person take, like, "Okay, I'm kinda putting it on the line here, but I think I can shoot these rapids." And when lots and lots of people are in that position, you start arithmetically adding up all those risks and at some point it becomes untenable. So that's why the privatization is really, really dangerous.

    12. CW

      Because it democratizes the technology to the stage where you include so many potential agents-

    13. RR

      Yes.

    14. CW

      ... that one of them or multiple of them are going to be outside of whatever Overton window of safety that we have-

    15. RR

      Mm-hmm.

    16. CW

      ... and they're going to lie there and decide... And this is us just talking about people-

    17. RR

      Mistakes.

    18. CW

      Mistakes, yeah.

    19. RR

      Yeah.

    20. CW

      This isn't malignance.

    21. RR

      Yeah. Yeah, malignance is far more terrifying.

    22. CW

      This is negligence, not malignance.

    23. RR

      The negligence, not malignance. And I do worry more, worry more about malignance, particularly in synbio because if we think about COVID... Let's think about COVID. Um, we've all been thinking about COVID for a while. We're very practiced at thinking about COVID. If we think about COVID, it is remarkable on a number of levels how benign this, this horrific thing is. Um, it is not very lethal compared to a lot of things out there. It's not lethal at all compared to SARS. You know, SARS is, dep- you know, depending on what numbers we run, 10 to 20 times more deadly. It's also a coronavirus. Uh, MERS, Middle East respiratory syndrome, is, kills at a rate of about 30% case fatality rate. Um, H5N1 flu, which as you know I'm quite grimly fascinated by, kills about 60%, 6-0 percent, of people. And with COVID, the case fatality rate, according to the World Health Organization, somewhere between 0.5% and 1%. So COVID could have been far, far worse merely on a m- lethality basis and also on a transmissibility contagiousness basis. If somebody were malignant, if somebody were, were, were, you know, uh, malicious and really d- deliberately developing something to be maximally destructive, it would be worse than COVID.... they let, you know, let's, a- an imaginable near future where somebody is sophisticated enough, or the tools that they're using are sophisticated enough to allow them to basically dial that up, they're not going to unleash something that kills a half a percent. They're going to release something that is so much more deadly and so much more dangerous that it could have civilization-toppling potential.

  6. 30:46–54:00

    The Threat of Covid-19 and Pandemics

    1. RR

    2. CW

      Here's something that I've just thought of. Have you considered the potential that the lab leak hypothesis, or some variant of it for-

    3. RR

      Mm-hmm.

    4. CW

      ... COVID-19, could be true?

    5. RR

      Yeah, and v- it's entirely plausible, and it's undeniable that it's plausible at this point.

    6. CW

      But that the reason that it was released was some big-picture thinking, fair weather saint human who said-

    7. RR

      Ooh.

    8. CW

      ... "Bill Gates has told you at the end of his TED Talk, and we've been warning you for years about the dangers of engineered pandemics and natural, natural pandemics as well, so what I'm going to do is I'm going to give you a very moderately transmissible, but very not lethal pandemic, which is going to act like a global vaccine."

    9. RR

      Mm-hmm.

    10. CW

      "It's going to cause you to have a very benign dose, uh, co- a coordination problem dose of how to deal with this sort of a pandemic, and maybe this will make people wake up." Have you thought about that?

    11. RR

      Well, have you read, somehow, have you hacked into my computer and read the outline of a novel that I'm working on?

    12. CW

      Oh, dear.

    13. RR

      Because that, I, I've actually, um, that's a story that I've fleshed out a great deal. Um, so I'm, um-

    14. CW

      Familiar with it.

    15. RR

      ... on the science. Yeah, I'm familiar with i- with that scenario, and it's a very, very interesting one. And I don't think that that was COVID, and here's why. I, I think if somebody wanted to un- unleash an engineered pandemic to freak everybody out and, and realize how dangerous it was, they would make sure that the world knew it was engineered, because, you know, right now, the prevailing wisdom in science and policymaking circles is that this wasn't engineered, and therefore, the response has been more about zoonotic, like, how do we prevent more zoonotic transmission? So if somebody wanted to create a mild engineered pandemic and freak out the world, they would absolutely make sure that the world knew this was engineered. Um, so I don't think that's what happened. That doesn't rule out the lab leak hypothesis at all.

    16. CW

      They would've put like, "Sort your shit out, world" into the RNA-

    17. RR

      Yeah.

    18. CW

      ... or something like that.

    19. RR

      Or they would've released some message online or whatever it was.

    20. CW

      Yeah.

    21. RR

      And, you know, done, tried to do a pinprick assault, and of course, uh, uh, uh, pinprick attack of some kind, and of course, the danger with that is that thing mutates and gets out of control and annihilates us anyway. Not that that's necessarily going to be a plot twist in the book that I may or may not write.

    22. CW

      (laughs)

    23. RR

      But, uh-

    24. CW

      Well, we've ruined it now. All right. So-

    25. RR

      Yeah, yeah.

    26. CW

      ... if, if COVID was one of the more benign-

    27. RR

      Mm-hmm.

    28. CW

      ... what was the most dangerous or lethal virus in history that you've come across?

    29. RR

      Oh, I mean, p- you know, probably one of the influenza pandemics in probably 1918. Um, I'm not saying COVID is necessarily, but I mean, like, 1918 flu killed so many more people, so many more people in, uh, a much smaller world population that proportionately, it, it, it's so much worse than COVID. And that, we don't really know if that's because we have better tools and better detection and better h- public health, um, you know, um, practices today than they had in 1918, that's possible, or if 1918 was simply much more virulent. My guess is it's a little bit of both because it's not like we really implemented amazing be- best practices in public health in most of the world. You know, Australia and New Zealand did far better than most of us. But, um, we kind of botched a lot of things. So I, I'd say 1918 is probably worse, but I'm thinking more in terms of, you know, again, SARS. If you had SARS-level lethality and COVID-level transmissibility, eh, there's no reason that, that nature, when it spins the roulette wheel, won't come up with that. Um-

    30. CW

      With a nice, big incubation period as well, probably, where you're still-

  7. 54:00–1:02:00

    Is the Research Worth the Risk?

    1. CW

      expected value of this benefit versus cost, I don't see how-

    2. RR

      It's insane.

    3. CW

      I don't see how any scientist that's able to do the complex level of syn bio that you need to-

    4. RR

      Yeah.

    5. CW

      ... to probably be able to sequence these genomes and, and m- mess around with the capability of microorganisms, if an idiot like me can understand existential risk, why can't geniuses like them understand the risk in what they're doing?

    6. RR

      Again, I think you've got a semi-privatization issue even though these people are getting public funding. So let's try to inject ourselves into the brain of, you know, the science- scientists in Wisconsin, um, who decided to go ahead with this research. In his mind, he, he... I'm not saying he's a bad guy. He- I'm- I- I am sure his motivations are pure. I'm sure that he thinks what he's doing matters and is helpful, et cetera, et cetera. But he's living in the bubble of his own life and his own career, and he has the overconfidence that any expert has in their own expertise. Um, I am an expert driver, right? I've driven tens of thousands of miles. I have unbelievable confidence in my driving ability, and it's probably misplaced to some degree. He is an expert wet lab dude. You know, he's been running a laboratory that has his name on it, that gets all kinds of funding from different bodies and gets grants from competitive sources and does excellent work and has never had a leak of any kind from his lab. So in his mind, the risk of a leak is n- negligible. He probably would not say non-existent, but he's gonna say it's so low, it's silly. And also in his mind, he has got, you know, utility function. He has got, you know, s- things that he's trying to maximize in his life as all human beings do, and he wants his career to move forward, and he wants to publish in Science and Nature, and he, you know, he wants to do all these wonderful things. And so he is... His own personal risk/reward, um, curve is out of whack with the rest of humanity. If he does this gain-of-function research and gets published in Science and Nature and does more gain-of-function research and gets more celebrity and so forth, his career is going to be much, much more fun, and maybe not much more remunerative because he's an academic scientist. He's probably capped out. But i- it's the things that motivate him, accolades, papers, that kind of thing, are going to come in, in greater, greater and greater, uh, cadence. And so his utility curve says, "Yes, let's go down this path." And he faces the same risk that you and I face if the world ends. He dies. (laughs) Now, he's got... He'll have a lot of guilt maybe, so maybe slightly worse for him than you or I. But e- basically he says, "Tiny risk I die, very high chance, uh, my career gets more awesome, and I'm convinced I'm doing a good thing, and I'm convinced I'd never let anything leak." Just like Rob Reid has been driving for decades, he's never had an accident. "I've been running a lab for decades, I've never had a leak, so I'm never, ever, ever gonna have a leak," right? So he's got misplaced confidence of any expert. He has got strong incentives to do things that incur a tiny little risk, but that tiny little risk doesn't merely apply to him, it applies to all of us. And so the expected value curve that we all run in our own brains whenever we do anything is generally our own interests. You know, if I do this, you know, 10% chance I lose this much money, um, 1% chance I win this much money, 89... you know, whatever. We're generally thinking for ourselves. He's thinking for himself, but he's got all of us in the risk curve, and he's not calculating that expected value of what happens if this goes wrong. So there's a bit of a privatization thing there.

    7. CW

      Even to go back to the first atomic bomb test-

    8. RR

      Yeah.

    9. CW

      ... they did run the numbers.

    10. RR

      They did.

    11. CW

      And even with the numbers in front of them, that there was a nonzero risk that the entire atmosphere could be set alight, permanently curtailing not only all human life, but everything, literally-

    12. RR

      Everything, yeah.

    13. CW

      ... obliterating the atmosphere and setting i- turning the earth into the sun briefly. And-

    14. RR

      Yeah.

    15. CW

      ... I think the number was, uh, 14 million, one in 14 million.

    16. RR

      Um, I believe that, um, one of the scientists put it actually a little bit lower, I think more like one in three million chance of that. Um, but they didn't know, and that's, that's the important thing. This is-

    17. CW

      Nonzero.

    18. RR

      ... was nonzero. Now, here's, again, we get to the public/private decision, and this is really significant. So at that point, um, everybody, uh, you know, Enrico Fermi, Teller, all of these people at the very beginning of the process, so 1941 as the Manhattan Project is just getting going, the first atmospheric test is still years away, um, I think it was f-... It was either Fermi or Teller or Oppenheimer, one, uh, one of them suddenly realized, "Oh my God, we could... We don't know if we'd set the atmosphere on fire or not when we do the first explosion," which turned out to be four years later. And so it was immediately determined that the odds of that were minuscule, and I think that a lot of the scientists s- r- really said they're zero, right? But they were running numbers up until the day before the very first test, the Trinity, um, test. They were running numbers right up until then, and they did the test, and lo and behold, the atmosphere didn't ignite. Now, did they do something irresponsible? Well, let's think about it. Um, they did take what they thought was an incredibly low but real chance of igniting the atmosphere, but particularly back in 1941 when they first confronted that danger and decided to proceed down the path anyway, at that point, the m- most, you know, the most facile nuclear brain power was concentrated in Germany at that point, and the only heavy water plant in the world I think at that point was in Norway and Ger- Germany had conquered Norway. It's 1941, you're looking at a one in three million risk that you might ignite the earth, and you're looking at a much higher risk that Hitler's going to develop a nuclear weapon before you are. And we can all imagine what the world would have looked like if Hitler had developed a nuclear weapon first. Uh, y- so that's big real possibility. This is also bigger real possibility, but minuscule, and ultimately the decision was made at the highest level of, you know, a flawed but functioning democracy, you know, people who had been empowered by, you know, 100 million voters or whatever the number of voters was back in the 1940s, probably less than that. Um, but nonetheless, a very, very careful public good. You know, it wasn't like Oppenheimer was gonna be like, "Oh my God," you know, one in three million chance the at- atmosphere ignites.... you know, the remainder chance IPO. We're gonna- we're gonna go public.

    19. CW

      It could get published in Science and Nature, yeah.

    20. RR

      Yeah. Not going to publish in Science and- not going to go public and make a billion dollars. You know, it was like, he's facing the same risk (laughs) as all of us, and he is thinking on behalf of the planet. They were thinking very, very, very carefully about that risk, and they ended up saying Hitler with a nuke versus this minuscule chance. And I, you know, looking back on it, um, people could argue both sides, but that's the point. People could argue both sides. I'm really glad Hitler didn't get a nuclear weapon before, you know, his enemies did, and, you know, was that risk worth taking? Uh, y- you could certainly argue that it was, and I think that ultimately, the Manhattan Project people said, "Teeny tiny risk. We need to win this war. You know, let's go." Um, and, and again, I don't think that that's ... That could be debated eternally, but the fact that it could be debated tells us it was not a crazy thing to do. Whereas gain-of-function research, I might end up on Science, and I'm taking that whole ... Like, you getting on the cover of Science Magazine is not as awesome as beating Hitler. It just isn't. Beating Hitler is really, really good. You getting on the cover of Science doesn't matter to anybody in the fricking planet but you. And taking a similar one-in-three-million risk, let's say it's one-in-three-million risk, is- is obscene when you're not taking that risk to do something as important for humanity as defeating Hitler.

  8. 1:02:00–1:08:18

    Would Moon Labs Reduce Risk?

    1. RR

    2. CW

      I think-

    3. RR

      And we are going to be giving a lot of people one-in-three-million roulette wheels. We already have.

    4. CW

      So to me, bringing a virus into existence that doesn't currently exist, in an effort to inoculate us from the chance that it might come into existence-

    5. RR

      It's stark raving mad.

    6. CW

      Thank you. Why aren't we talking about putting BSL-3-plus labs on the moon?

    7. RR

      Yeah, no, okay, so, well, this is, this actually started with y- when you said, "Can we do a glass ceiling?" And I was starting out by saying, "Well, there's a really simple thing we could do, but we're not doing it, which is no gain-of-function research period at all." The- the world should agree on that, just like the world agreed on, you know, nuclear non-proliferation and, you know, other ... There- there have been treaties that a- you know, hundreds of nations have signed, over a hundred nations have signed. That's how we got rid of chlorofluorocarbons, uh, international agreement that we're not going to use the stuff anymore, which more or less stuck, although there are signs that they're being used in China now, um, on the sly, but whatever. Um, so we- we've done this before. It- it shouldn't be that hard for all the nations of the world to agree no fricking gain-of-function research. And that probably stops it because there's not much of ... There's no motivation for somebody to, for a private company to do it. It really is academic. It's generally government-funded. It's generally done in relatively transparent areas. It's generally done because people want to get published. It's generally done or because the government has an agenda. It would be an easy thing to say, "Let's stop all of that." And now you have taken one source of risk out of the whole life sciences equation, of bio error with gain of function. So that should be fricking easy thing for us to agree on. But have we ... So let's start there. We're certainly not there yet, but if we do that, then there's a whole nother set of risks that are out there that like, okay, uh, we're gonna get better and better and better at designing bugs in silico, or designing bugs in lab because we- we weren't smart enough to not do g- gain-of-function, or we're gonna continue to just publish the genomes of things like the 1918 flu and smallpox and have that information out there. Like, we've got recipes for unbelievably lethal things, and instead of it being, you know, something coming to be in a, in- in a wet lab and escaping, we're not that far from a time when people, and this is oversimplifying what they would do, but we're not that far from a- a time when a lot of people in academic and- and private company settings will be able to de facto hit a print button and get the genome of whatever arbitrary critter they want out and then have the tools to boot that up and the mecha- and the mechanism of a virus and have it start replicating. And so gain-of-function is one lid that we need to place, but we also need to really, really harden the entire syn bio infrastructure to make it very, very difficult for people to, you know, to print or obtain dangerous DNA. And there are quite respectable early efforts generally originating with private industry to limit the ability of any random person to get dangerous DNA. But they are not wide enough spread, they do not have the force of law, there are self-regulatory steps that the biotech industry, life sciences industry has taken, and they're not really necessarily envisioning the day in which printable highly distributed DNA and RNA synthesis capabilities become widespread. And so that's another lid that we have to put on that's a much trickier lid than merely not doing something stupid and dangerous. You know, it's kind of like, you know, imagine that you're raising a kid who, um, loves to get drunk and drive and, uh, also loves to go to school and breathe, okay? 
So your kid, y- y- stopping the gain-of-function research is like stopping the kid from drunk driving, okay? We got that off the table, but he also goes to school and breathes. So there- there's a- there's another risk that's much more complex that he's going to catch a deadly virus at school, and you know, okay, we've gotten rid of the drunk driving. Nice. (laughs) That's a good thing. We stopped doing the self-destructive stuff, but now there's this much more diffuse, harder-to-define risk that we need to work on.

    8. CW

      It's gonna be difficult to survive the next century, isn't it?

    9. RR

      It is. It really is.

    10. CW

      Why aren't BSL-3-plus labs on the moon?

    11. RR

      On the moon? Yeah. Well, 'cause most of the work that they do, um, is not with apocalyptic... Um, I mean, these really truly apocalyptic microb- microorganisms are rare, and most of the work that they do, um, isn't with things that deadly. Um, we don't have much stuff going on on the moon right now, so shipping that stuff up there and maintaining it up there and, you know, all that payload, and, you know, the tons and tons of matter that would need to be transported from here to the moon is c- currently beyond our capability. And, you know, when you look at all the things, good things that are being done with therapeutics, and, you know, academic research that has an unambiguously good agenda without disastrous consequences, you can rationally say that, like, the work that happens in BSL-3 and BSL-4 labs is valuable to humanity, it's largely contained, and the danger of most of what's in there getting out is highly, highly local, and compared to what we're talking about, extremely minor. Um, so that's probably, uh, why we don't. Now, once, once we get to the moon, we get to Mars, and we're ferrying things back and forth quite easily and naturally, uh, perhaps another conversation should happen at that point. But for now, I think the easiest answer is let, let's not have any apocalyptic microbes anywhere, um, and when we find, you know, semi-apocalyptic microbes like the 1918 flu, let's not publish their genomes to the internet (laughs) . You know, that's getting rid of the drunk driving, and yeah, that's... Uh, but yeah, it's risky. It's risky as this stuff proliferates if we don't build really, really great safeguards into the tools before they

  9. 1:08:18–1:16:45

    Helpful Lessons from Covid-19

    1. RR

      proliferate.

    2. CW

      So given the fact that we've had COVID-

    3. RR

      Yes.

    4. CW

      ... and that this has been-

    5. RR

      Oh, I have with-

    6. CW

      ... a coordination inoculation if we want to call it that, that it's taught us we aren't able to shut down travel sufficiently quickly, that we weren't able to produce PPE sufficiently quickly, that culturally, given the technology of now, we didn't have any archetypes for how people should behave-

    7. RR

      Mm-hmm.

    8. CW

      ... that people didn't understand what social distancing was, people don- didn't understand why you should wear masks, about what staying at home and isolation was, and quarantine and the way that we get vaccines out and stuff like that.

    9. RR

      Mm-hmm.

    10. CW

      Um, do you feel like we're in oddly a better position post-COVID? And if so, how much?

    11. RR

      Yeah, in, in, in some ways, we are hypothetically in a better position if we take a set of actions in response to COVID to harden society against the next pandemic, if and only if we take those steps. And so, you know, w- we ignored the warning shots of SARS and MERS and Zika and a whole bunch of other things, you know, we kept ignoring the warning shots. COVID is a very, very difficult warning shot to miss. The whole world has been traumatized by this, um, trillions and trillions of dollars in economic damage, millions and millions of lives lost. There will be much greater seriousness applied to pandemic resistance in the future. The question is, will it be adequate attention, and will it be sustained attention, and will it be intelligent attention? And so what we... There's a, as, as you know, I'll briefly plug another appearance that I did. Um, Sam Harris and I did this four-hour piece that was a very unusual podcast format in that about 100 minutes of that was a monologue that I researched and wrote and recorded, and I did the research, uh, I interviewed d- um, over 20 scientists. I read thousands and thousands and thousands of pages, and in that episode, I propose a set of steps that collectively are trivially inexpensive compared to the cost of even the annual cost of the flu, uh, let alone a true pandemic. And I believe if we take those steps and surely other s- steps that I wasn't smart enough to identify, we will really, really, really harden ourselves up. Uh, I'll use one example of something that, that we should be, there should be a global headlong effort in right now, and I've heard absolutely no sign of that. Um, people who are deep in virology, uh, are, are quite convinced that there's a very high likelihood that with the right amount of research and the right amount of dollars, we could create pan-familial vaccines. What do I mean by that? Well, coronavirus is a virus family, influenza is another, there are untold thousands of virus families, but only a few dozen, uh, um, present lethal risks to humans, uh, coronavirus and influenza being two of them. So let's say there's 20 of them, it's roughly 20, um, we don't currently have what we could call a universal flu vaccine. What a universal flu vaccine would be, or will be hopefully if we develop one, is a vaccine that attacks the core infrastructure of the entire influenza fa- family. And so what we have with the vaccines that get issued every year is that there's lots and lots and lots of mutations in influenza, I mean, so many mutations. It, it, it mutates f- y- you know, frenetically throughout the year, and when we develop the vaccine for the Northern Hemisphere, we're looking at what's brewing in the Southern Hemisphere. There's a lot of, you know, influenza surveillance that's going on throughout the world, and a, a panel of extraordinarily talented scientists make their best predictions of what elements of influenza are gonna be predominant in, let's say, the United States. It's probably the whole Northern Hemisphere that gets the same vaccine, but let's just say the United States to simplify it, so I can be parochial as well, 'cause here I am. Um, what, what strains are likely to be predominant in the United States in, in the coming flu season? Let's protect against that as best we can in this year's vaccine, and some, you know, maybe 50% of Americans get the, the flu vaccine, probably less than that. Some percentage of people will be immunized, and in a good year, that vaccine will be about 60% effective....
      Now, a pan-influenza vaccine, a universal flu vaccine, would say, "Screw the strains. We're going for the jugular of influenza as a family." I talked to one person who's very, very deep in the world of lobbying for, and doing initial work toward, a universal flu vaccine: a guy named Harvey Fineberg, who used to run the Harvard School of Public Health, with all kinds of titles and accolades I can't remember right now. He estimated to me that if we really went all in on this, it would probably cost about $200 million and take 10 years to get there or not, and he felt there was a 75% chance that we would get there, not a 100% chance. And I said, "Well, Harvey, let's go crazy worst-case scenario. Could it be 10X that?" He's like, "Yeah. Maybe it's $2 billion over 10 years, and there's a 50% chance of getting there."

      Okay: the flu costs the United States $365 billion a year in lost productivity and medical bills. If you have a chance, at Harvey's worst numbers, to invest $2 billion for a 50% shot at relieving yourself of an annual $365 billion burden, there should be no thought necessary at all. You take that chance, and hopefully Harvey's right and it's actually more like $200 million and a 75% chance of success.

      What we should be doing right now is take the worst-case numbers. Say it's $2 billion per virus family and there are 20 of them: let's invest that $40 billion over the next 10 years and get pan-familial vaccines for every virus family that infects and kills humans. And let's throw in another 20, the most threatening zoonotic viruses out there, and get pan-familial vaccines for those too, or do our very best and at least have a 50/50 shot with each of them. $40 billion over 10 years is $4 billion a year. That's chump change in the context of the American budget. That's chump change in the context of $365 billion lost to flu every year, and one credible economist estimated $14 trillion of damage to the United States economy, the US alone, from COVID. You do that.

      I don't see that happening anywhere. There are a couple of academic labs working on a pan-coronavirus vaccine, but they don't have a (laughs) $2 billion budget. This is not happening. So when you ask me, "Are we better off for having had COVID?" Theoretically, yes. Theoretically, we've gotten a wake-up call that's unmissable, and now we're going to take really smart preventative steps. But this shriekingly obvious step... there may be some governments calling for it, I can't read every news story every day, but I haven't detected any concerted effort to say, "Let's just take every virus family off the table that we can." And if we're not doing that, when it's really cheap and really obvious, I could certainly see us not taking a lot of more expensive, slightly less obvious, but equally important steps.
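
      To make the back-of-envelope math above concrete, here is a minimal sketch in Python of the expected-value calculation Rob is running. The dollar figures and probabilities are the ones quoted in the conversation; the variable names and the expected-value framing are illustrative assumptions, not anything stated in the episode.

        # Worst-case figures quoted above (Fineberg's pessimistic scenario).
        cost_per_family = 2e9        # R&D cost per virus family, USD, spread over 10 years
        p_success = 0.50             # worst-case chance each vaccine program succeeds
        n_families = 20              # roughly how many virus families present lethal risk to humans
        annual_flu_burden = 365e9    # yearly US flu cost: lost productivity plus medical bills

        total_cost = cost_per_family * n_families            # $40B over ten years
        annual_cost = total_cost / 10                        # $4B a year
        expected_flu_saving = p_success * annual_flu_burden  # expected yearly saving from flu alone

        print(f"Total program cost:  ${total_cost / 1e9:.0f}B over 10 years (${annual_cost / 1e9:.0f}B/year)")
        print(f"Expected flu saving: ${expected_flu_saving / 1e9:.1f}B per year")
        print(f"Payback period:      {total_cost / expected_flu_saving:.2f} years")

      Even on these pessimistic inputs, the whole 20-family program pays for itself, in expectation, in under three months of avoided flu costs alone, before counting a single averted pandemic.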

    12. CW

      After the last 15, 18 months as well-

    13. RR

      Yeah.

    14. CW

      ... the most obvious flare fired up into the air directly over our position, highlighting-

    15. RR

      Mm-hmm.

    16. CW

      ... what the problem is, highlighting all of our insufficiencies and our poor coordination, and yet nothing.

    17. RR

      Yeah. A pan-coronavirus vaccine. Hello, people. Why is that not happening now? I mean, that research should have started in April of 2020. We didn't have to wait for the mRNA vaccines; it should have started as soon as we said, "Holy cow. SARS and now this? Coronavirus, big deal." There is research, like I said, and I've seen a couple of papers coming out of academic labs. Hurrah, go academic labs. But it doesn't have the public funding support that it needs. And that's, yeah, it's bonkers.

  10. 1:16:45 – 1:22:15

    What if China Leaked Covid-19?

    1. RR

    2. CW

      What would be the implications if the lab leak theory was proven to be true, for China and for-

    3. RR

      Mm-hmm.

    4. CW

      ... sort of the fallout, uh, politically and the safety and future as well?

    5. RR

      Okay, so let's play this out. Let's assume, for the sake of this thought experiment, that this was a lab leak, and that it gets proven definitively that it was. If this was the case, it would almost certainly turn out that COVID was a result of gain-of-function research, because it is a novel virus, (laughs) and it came out of nowhere. If it was a lab leak, it almost by definition would not have been something that was circulating naturally and just didn't happen to cause a pandemic; it would be something novel that was created in that lab. So I think if that got out and was proven beyond a reasonable doubt, one hopes there would be a global ban on gain-of-function research to start with. This bizarre thing that we haven't done yet would be done. And I think banning gain-of-function maybe eliminates 2 to 5% of the total risk that we face from synbio run amok, in evil hands or good hands, but that's a great step in the right direction. So I think that would happen. And, you know, we talked about the compounding spread of climate risk awareness: I think there would be an unbelievable jump in global awareness and concern about synbio run amok. And so there would be a much better regulatory apparatus, and there would be much more self-knowledge within academic and private circles. A lot of really, really good stuff would happen as a result.

    6. CW

      Is there a part of you that hopes that it does get proven?

    7. RR

      Well, if that's in fact what happened, all of me hopes that it gets proven. If that's in fact what happened, yes, absolutely, China would have a great deal to answer for... to every single country in the world. China should be held accountable, and the Wuhan Institute of Virology should be held accountable, and the very practice of gain of function should be held accountable, and the notion that BSL-4 labs are safe should be held accountable. So yeah, if that's in fact what happened, absolutely I would want it to come out, so that the world says, "China, don't do that anymore." The world says, "No more gain of function." The world says, "BSL-4 is only a best effort." And those three things right there would significantly reduce the world's risk. So yeah, if that's what happened, I'd want it to come out.

    8. CW

      Thinking about-

    9. RR

      But we don't know, we don't know that that's what happened. Uh, and we probably never will.

    10. CW

      Well, there's a lot of opaqueness, right? A lot of opacity around-

    11. RR

      There's incredible opacity, yeah. Incredible opacity, which makes one suspicious. But then again, authoritarian governments are opaque by nature.

    12. CW

      Yeah.

    13. RR

      It's their instinct, yeah.

    14. CW

      It's almost like if you could have picked, probably except for North Korea-

    15. RR

      Mm-hmm.

    16. CW

      ... if you could have picked a country that you didn't want it to start in, it would've-

    17. RR

      Yeah.

    18. CW

      ... been China.

    19. RR

      Yeah.

    20. CW

      In fact, maybe even more so that it's China, because they have more sophisticated resources, probably fewer people who are prepared to turn mole and actually blow the whistle on things, better coordination, better surveillance.

    21. RR

      Yeah. The opacity surrounding the investigation of where this came from is almost total, and it makes one wonder, "Why? What are we hiding?" And again, I do not pretend to know whether it was a lab leak or not; I want to be very clear about that. A lot of very, very smart people think it was. A lot of very, very smart people think it wasn't. And I don't have the level of bio-sophistication to enter that debate, so I will plainly state, I don't have a theory on that. But a very strong argument in favor of it is: what in the world are you hiding? Because you're hiding something. Why in the world will you not allow a very, very serious outside investigation into the very early cases? It seems that a couple of Wuhan Institute of Virology people were hospitalized with something looking an awful lot like COVID in December, and why is that information not being explored? Why did the World Health Organization delegation that went there have zero access to a lot that would've shed light on the first five weeks? Why all the opacity? It's easy to imagine that something's being hidden. But we also have to acknowledge that authoritarian governments are opaque by nature. That's their instinct. And they could be (clears throat) opaque for reasons that are rational to them that don't have to do with this.

    22. CW

      Yeah, yeah.

    23. RR

      It could be that it's completely zoonotic, completely natural, and when they did their own investigation, they're like, "Oh my God, our safety protocols at the Wuhan Institute of Virology kind of suck. It didn't come out of there, thank God; I would feel really guilty if it did. But oh my God, it could've. It didn't, but it could've, and we don't want anybody seeing that." It could be something like that.

    24. CW

      Fallibility all the way down, just humans-

    25. RR

      Yeah.

    26. CW

      ... humans and our biases the whole way. So-

    27. RR

      I know.

    28. CW

      ... zooming back out now from just synbio into some of the broader strategies that we have for X-risk,

  11. 1:22:15 – 1:37:47

    How to Prevent Destroying Humanity

    1. CW

      I think, looking at all of the different ways that we could potentially manifest our own extinction, on top of the natural risks that are just background, ambient, and constantly going on, whether it be a supervolcano or a gamma-ray burst or-

    2. RR

      Yeah.

    3. CW

      ... an asteroid that's gonna come and hit us, it seems like making the situation even worse for ourselves is probably a bad idea.

    4. RR

      Yeah. (clears throat)

    5. CW

      Would there be an equivalent of putting a glass ceiling on gain-of-function research, or putting a glass ceiling on the sort of-

    6. RR

      So eliminating it, yeah.

    7. CW

      ... this sort of research that we do, entirely? Should we perhaps be considering that across the board with regard to technology? Should we curtail our technological progress for a few thousand years until our wisdom can catch up with it?

    8. RR

      I would say no, for a diversity of reasons. One is, I think it's thoroughly impossible; it's strictly in the realm of thought experiment. Let's take an equally impossible scenario and say that the Western democracies and the Eastern democracies, all the democracies of the world, agree to do that. China ain't gonna stop, you know? It just isn't. And if China stops, Russia's not gonna stop. And if China and Russia stop, North Korea's not gonna stop. And if North Korea stops, then somebody that we never thought of, like, maybe suddenly, you know, Egypt- (laughs)
