Modern Wisdom

Transhumanism: How Biotechnology Can Eradicate Suffering | David Pearce

David Pearce is a co-founder of Humanity+ and a prominent figure within the transhumanism movement. Where is the future of the human race heading? Will our descendants be at the mercy of random genetic chance for suffering diseases? How about editing the genetic baseline for happiness? Or levels of empathy? Can we use computers to emulate a human brain? What are the ethical implications of CRISPR? Expect to hear David's answers to these questions and more as we delve into the fascinating world of transhumanism. Extra Stuff: Check out David's Website - https://www.hedweb.com/ The Abolitionist Project - https://www.hedweb.com/abolitionist-project/index.html Check out everything I recommend from books to products and help support the podcast at no extra cost to you by shopping through this link - https://www.amazon.co.uk/shop/modernwisdom - Listen to all episodes online. Search "Modern Wisdom" on any Podcast App or click here: iTunes: https://apple.co/2MNqIgw Spotify: https://spoti.fi/2LSimPn Stitcher: https://www.stitcher.com/podcast/modern-wisdom - I want to hear from you!! Get in touch in the comments below or head to... Twitter: https://www.twitter.com/chriswillx Instagram: https://www.instagram.com/chriswillx Email: modernwisdompodcast@gmail.com

Chris Williamson (host), David Pearce (guest)
May 30, 2019 · 54m

EVERY SPOKEN WORD

  1. 0:00–15:00


    1. CW

      (wind blowing) Hi, friends. Welcome back to the Modern Wisdom Podcast. My guest today is David Pearce. We're going to be talking all things transhumanism. That is not talking about people changing their genders, but it is exploring some very exciting topic areas which I've wanted to sink my teeth into for a while. So today, expect to learn why suffering of any kind is an artifact that might have been useful to our ancestors but is something that we need to transcend as soon as possible, how David thinks that humanity not only should but also must progress towards a more hedonic imperative, the hard problem of consciousness and why we might not be uploading our brains into computers any time soon, and a lot of interesting discussions about the implications of CRISPR and gene editing on the whole. We cover a lot of topics that I definitely should have been much more well-educated on seeing as they have massive implications and are also pretty current to society right now. But yeah, uh, loved the conversation. David is a massively prominent figure in the transhumanism movement, and I feel very privileged to have had him on teaching us the 101 of transhumanism. So without further ado, please welcome David Pearce.

    2. NA

      (Upbeat music playing)

    3. CW

      Ladies and gentlemen, welcome back. I'm joined by David Pearce today. David, welcome to the show.

    4. DP

      Hi, Chris. It's good to be with you. (claps hands)

    5. CW

      Fantastic to have you on today. Also nice to hear a familiar accent. Uh, so-

    6. DP

      (laughs)

    7. CW

      ... we're talking about transhumanism today. This will be a, a new venture for a lot of the listeners. So let's start off with a definition. Can you, can you tell us what transhumanism is?

    8. DP

      (inhales deeply) Uh, well, there are no sacred texts, but, uh, very simplistically, I sometimes talk about the three supers, uh, the three supers of, of transhumanism. Uh, super intelligence: this is the idea that it's going to be possible to radically amplify our intelligence, uh, and machine intelligence, and there are, uh, different ways one can go about this. There are different conceptions of post-human super intelligence, but that's one of the three supers. Then there is super longevity. This is the idea that there is no immutable law of nature that says that biological robots must grow old and die, whereas silicon robots can be, uh, repaired indefinitely. And transhumanists, uh, believe in radical life extension, indefinite youth, with a backup, uh, of cryonics or maybe even cryothanasia, because... for any of your, uh, listeners who are perhaps of a certain age and think that realistically they're not going to make it.

    9. CW

      (laughs)

    10. DP

      And the third, uh, super, which is the super I focus on most of all: super happiness, or self... or super well-being. Uh, this is the idea that it's going to be possible to replace the biology of pain and misery and suffering, uh, with life based entirely on gradients of intelligent well-being. This is, uh, replacing the biology of suffering not just in humans but, in the long run, uh, in the rest of the animal kingdom, throughout the living world. Now, as well as those three supers, uh, there are plenty of transhumanists who would want to add a fourth, uh, super, though they don't agree what that might be. Uh, for example, what about, uh, super empathy? Um, but I would argue that, uh, this is, uh, embraced by any sufficiently rich conception of super intelligence, that a full-spectrum super intelligence wouldn't just have an off-the-scale IQ. Uh, it would also have a superhuman, uh, capacity for per- perspective-taking, empathetic understanding. So there it is, in a nutshell.

    11. CW

      Fantastic. It's no small task, I think, is one way to kind of summarize the, uh, the transhumanist movement then. Um, interestingly, I had, uh, Professor David Sinclair on recently, who you may know.

    12. DP

      Right. Yes, indeed. Yes, yes.

    13. CW

      Yes, I had him on not long ago talking about the cutting-edge longevity research that he's doing. And during that, I asked him, "Do you think that a human could live for a thousand years?" Uh, and his, his short answer was yes. So it's, it's interesting to hear that people from multiple fields, really different fields from... coming out from genetics and, and gene editing, coming at it from this longevity research, and then m- your side as well, are all pointing in a similar direction.

    14. DP

      (inhales deeply) Yes. Uh, and th- this is it. I think a lot of people psychologically would switch off if one, uh, were to say, "Well, look, uh, our grandchildren won't grow old and die, but you will."

    15. CW

      (laughs)

    16. DP

      I mean, there's something almost cruel about telling people that science is going to find a, a cure for aging shortly after you're dead.

    17. CW

      Mm-hmm.

    18. DP

      And I... I don't know this. Uh, unlike most, uh, transhumanists, I'm not, shall we say, one of life's temperamental optimists. Um, but nonetheless, as, as I said, as well as, uh, supporting, uh, research into anti-aging technologies, there is one strand of the super longevity aspect of transhumanism that is focused on, uh, cryonics and even, uh, cryothanasia. So-

    19. CW

      What are, what are those two terms for us, please?

    20. DP

      Essentially... Sorry, I should have said. This is the i- this is the idea that if you are, uh, suspended in optimal conditions, then, so long as irreversible information loss doesn't occur, it will be po- possible to reanimate you at some future date, when there is a cure for whatever killed you. Um, now... it is extremely difficult in a technical sense to destroy information; some physicists would say impossible. But nonetheless, I would say a big imponderable is whether people who are, uh, frozen a long time (and by a long time I mean hours) after, uh, their nominal death, whether it will be possible to reanimate them, that, uh, in practice, deterioration may be too far advanced. But, uh, in principle, at any rate, one could have something like cryothanasia: rather than waiting until you're 95 and gaga, uh, you have yourself, uh, suspended while you are still, shall we say, in the, in the prime of life. It hasn't been done, uh, yet, but this, this would be one possible option.

    21. CW

      That sounds an awful lot like a book that I've just about finished. Have you read Children of Time by Adrian Tchaikovsky?

    22. DP

      I blush to say it is now many, many years since I've read a, a, a novel. Uh, that is... (laughs)

    23. CW

      Cool. So it was the, um, it was the winner of the 2016 Arthur C. Clarke Award. Um-

    24. DP

      Yep.

    25. CW

      And it was the 30th anniversary one as well. So they gave it out to a fantastic book. And in that they have people who dip in and out of suspended animation, these long sleeps for, um, going across, uh, big, vast galactic distances between different, uh, different star systems and stuff like that. And, uh, they, they jump in and out of it like you'd, like you'd get a shower. Um, but it sounds, it sounds like the, the technology for that's a, a little bit further away. So moving on to your particular domain of competence, you were talking about, uh, super happiness?

    26. DP

      Yes. Once again, that's, uh, that's just a, a slogan. But, um, yes, if you think of perhaps today's, uh, hedonic range as minus 10 to zero to plus 10, with, uh, minus 10 being the absolute pit of, uh, despair, unbearable a- unbearable agony, uh, hedonic zero being emotionally neutral experience, uh, and plus 10 being the most wonderful peak experience of your life-

    27. CW

      Mm-hmm.

    28. DP

      ... um, imagine if it were possible to engineer a civilization that, let's say, stretched from plus 70 to plus 100.

    29. CW

      (laughs)

    30. DP

      Now, if that sounds too much like, uh, science fiction, well, that may well be the, uh, uh, the case. And much more morally urgent, I think, is focusing on th- the, the subzero states that, uh, plague so many lives, uh, today, uh, that sadly, natural selection didn't optimize us for being happy, it optimized us for leaving more copies of our genes, or as evolutionary biologists like to say, to maximize our inclusive genetic fitness. Uh, and yeah, there's no real convincing scientific evidence that on average, uh, we are happier or sadder or more or less d- contented or discontented today than we were on the African savanna, which, uh, is very counter, uh, intuitive. But yeah, the hedonic treadmill-

  2. 15:00–30:00


    1. DP

      Reich, uh, race hygiene. Uh, other people will say brave new world.

    2. CW

      (laughs) Yeah, yeah.

    3. DP

      Uh, another reason is that when we, uh, try to imagine the future, there is a lot of neurological evidence that what we're doing is drawing upon memory and the memory which we're drawing on is essentially sci-fi, often sci-fi read when we were, uh, uh, uh, kids and sure, uh, half-digested memories of everything from, uh, you know, kind of Gattaca or in the case of superintelligence, uh, Skynet. So yeah, I mean, uh, uh, as I said, I, I very much sympathize with anyone who is suspicious and who is pessimistic. Nonetheless, if we are to get rid of the horrific burden of suffering in the world, we're going to have to, uh, tackle it at its genetic source. Essentially, uh, our genes designed us to be unhappy and discontented, uh, a, a lot of the time, uh, and unless we actually do edit our genetic source code, there is going to be obscene, uh, misery and suffering in the world, uh, indefinitely.

    4. CW

      Are you sometimes surprised that we manage to have lives of the degree of happiness that we do given our genetic predisposition?

    5. DP

      Yes, at times. I mean, this is it: nature, uh, seems to just sort of play around with the dials, and, sadly, there are a very significant minority of people who endure kind of chronic misery, pain, depression. Equally, there are life's temperamental optimists, and even extreme genetic outliers who spend most of their, uh, lives, uh, yes, extremely happy, uh, which is a kind of existence proof. Essentially, being temperamentally very happy is a kind of high-risk, high-reward strategy. One is more likely to go out, do things, explore the world, take risks and so on if one is a happy, extroverted go-getter. Whereas if you have low m- low mood, you're more likely to keep your head down. It's a kind of low-risk, low-reward strategy. I mean, that's a very crude dichotomy.

    6. CW

      Yeah.

    7. DP

      Um, but, uh, uh, yeah, most people, uh, on balance, uh, love life. I mean, I might, given my rather gloomy and depressive temperament-

    8. CW

      Mm-hmm.

    9. DP

      ... think of Darwinian life as, as sort of sentient malware-

    10. CW

      (laughs) .

    11. DP

      ... but this is very much the minority, uh, posi- uh, position. Uh, and, and yeah, one needn't be a, a Buddhist or a negative utilitarian to believe that we should aim to prevent and mini- minimize unnecessary suffering. Uh, and what seems quite counterintuitive is the idea that all suffering, all experience below hedonic zero, is going to become technically... optional. Uh, sure, suffering in one's life, uh, sometimes, but only sometimes, can be instructive and valuable. But the critical question to ask is: is it functionally indispensable? And as, uh, silicon robots-

    12. CW

      Mm-hmm.

    13. DP

      ... machine intelligence, AI, uh, progressively eclipses humans in ever more cognitive domains without the nasty raw feels of, of, of pain and suffering. I think we just have to face up to the fact that, uh, yeah, that, that suffering is just a ghastly implementation detail of Darwinian life, and we ought to be switching to a more civilized signaling system instead.

    14. CW

      Yeah, it's this odd artifact that's come along for the ride, hasn't it? So the, uh, I wanna talk about The Hedonistic Imperative and The Abolitionist Project, both of which you're a, you're a big proponent of. Would you be able to explain to the listeners what those are, please?

    15. DP

      Yes. I mean, The Hedonistic Imperative was the name of an online manifesto I wrote back in 1995. Why The Hedonistic Imperative? Well, I would really have liked to, uh, call it, you know, The Moral Imperative To Use Biotechnology To-

    16. CW

      (laughs)

    17. DP

      ... Face All Suffering Throughout The Living World. But, of course, no, one needs something snappy and catchy, so-

    18. CW

      I, I think your marketing is better on this one, David. I think-

    19. DP

      Yeah. (laughs)

    20. CW

      I think The Hedonistic Imperative works better, yeah.

    21. DP

      I mean, it's not ideal because hedonism connotes something vaguely shallow and a- and amoral. Uh-

    22. CW

      You think Woodstock, don't you?

    23. DP

      Yeah. Uh, whereas I do see there is a desperate moral urgency to get- getting rid of suffering. But, uh, in a nutshell, yeah, it, it gives the gist. Um, The Abolitionist Project really, uh, just alludes to, yeah, getting rid of suffering in both human and non-human animals via biotechnology. Um, how far one goes beyond this... yeah, though personally I'm, uh, a negative utilitarian, I think that an overriding obligation is to get rid of suffering, and then everything else is a bonus, icing on the cake. Nonetheless, uh, yes, I do foresee a civilization where, uh, the darkest lows are richer than today's peak experiences. Um, but that probably strikes most people as, uh, science fiction. Uh, and for anyone with a more down-to-earth temperament, uh, I would, yes, stress more the, uh, the issue of, of suffering. Yeah.

    24. CW

      I understand. So getting towards the, the rubber meeting the road, so to speak, how, how are we beginning to go about that? How, what's the beginning of the proposed strategy, the next steps from now and then where does that lead us to in the, the longer term, like the real far future?

    25. DP

      Hmm. Well, a lot of, uh, futurology consists, uh, of extrapolation, which is dangerous. Uh, and sadly, uh, though we are just begin- you know, the first CRISPR ba- babies, if you think of the scandal of this Chinese, uh, researcher who did it, allegedly without authorization.

    26. CW

      From the parents, yeah.

    27. DP

      Uh, yeah, yeah, well, he may have got... Well, I was thinking more of the, the, the state. Uh, w- we now know that not merely did, uh, uh, yes, uh, not merely did the researcher a- uh, aim to make the kids protected against HIV, that, uh, purely inadvertently, it seems memory and intelligence may have been enhanced.

    28. CW

      (laughs)

    29. DP

      Genes involved in, uh, uh, in cognitive augmentation. I'm personally rather skeptical that this was a mere, uh, oversight.

    30. CW

      Accident, yeah.

  3. 30:00–45:00


    1. DP

      uh, but, uh, but as I said, a responsible pillar of the local community, vegan, a school, a school teacher, um, one obviously needs to be extremely wary of placing too much faith in individual case, case studies. But I mean, another example I give is of, uh, yeah, my transhumanist colleague Anders Sandberg, who, uh, will just acknowledge, "Yeah, I do have a ridiculously high hedonic set point." Which gets back to your, uh, original question: what is critical, uh, to intelligent behavior, to responding adaptively to- to noxious stimuli, uh, isn't one's absolute place on the pain or pleasure scale. It's some kind of hedonic gradient. Uh, and, uh, yeah, one might imagine that people who are just exceptionally happy would be less motivated-

    2. CW

      Mm-hmm.

    3. DP

      ... to behave adaptively, but counterintuitively, this doesn't seem to be the case: other things being equal, uh, the more, the more you love life, the more you're motivated to protect and, uh, pre- preserve it. Whereas it's depressives, uh, who get stuck in a rut. They experience learned helplessness, behavioral despair, sometimes, uh, self- self-destructive behavior. But preserving, yeah, uh, information sensitivity, uh, the dis- distinction between being blissful and, and blissed out, and-

    4. CW

      Yeah, yeah. I think that's- that's something that was in the back of my mind, that if it's a world where essentially everyone's walking around on MDMA, like 100X MDMA sensations, but a little bit more sober and able to remember what's going on, I- I- I wonder, um, how useful that society would be. I think there's some parallels that can probably be drawn between that as a- a genetic or pharmacological solution, uh, and some of the concerns people have with universal basic income, that when you take away people, what is considered to be people's current reason for living, that they'll be left in this kind of wallowy state where everyone's just lying around eating Skittles on beanbags and stuff like that (laughs) .

    5. DP

      Yes, yes. Uh, it's a ... I mean, in the case of universal ba- uh, basic, uh, income, I think it's just a matter of any decent society will have something like universal basic income and get rid of this, uh, this appalling sprawling, uh, welfare bureaucracy. But nonetheless, uh, I mean, the- the basic, uh, uh, yeah, can be relatively, uh, basic that I- I- I don't think ... Well, I won't go off on a long, uh, uh, uh, spiel there. It's very easy-

    6. CW

      (laughs)

    7. DP

      This is it. It's very easy to, uh, you know, to sound off, setting the world, uh, uh, uh, to rights, but, uh, uh, yeah, I- on the whole, I don't tackle the- the social issues to the same degree. I mean, I do have views on everything from Donald Trump to climate change, to you name it. But the point is other people have said it better, so ... (laughs)

    8. CW

      Yeah, I understand.

    9. DP

      Yeah.

    10. CW

      I think you've got- you've got some s- some fairly niched down stuff that you need to be working on as well. I think you can- you can leave the politics to, uh, to some other people. Um, so one of the things that I'm thinking about straight off the bat, having read Superintelligence by Nick Bostrom. Listeners of the show will know that I found the book both testing and very, very fascinating. Um, rather than being selective with our genes or- or using, um, particular drugs or whatever it might be to edit the way that we exist in the real world, why not just do whole brain emulation or upload ourselves to the internet?

    11. DP

      Ooh, difficult question because this brings us to the nature of consciousness and the binding problem.

    12. CW

      Let's jump into it, David. Come on. Let's go-

    13. DP

      (laughs)

    14. CW

      ... feet first into the binding problem, my friend. We're in at the deep end now.

    15. DP

      Okay. Um, the hard problem and the binding problem, uh, are worth distinguishing, though they're, uh, interrelated. The hard problem is: why, uh, does consciousness exist at all? Why aren't we P-zombies? Uh, nothing in the laws of physics as understood today explains it. If one assumes that our basic understanding of the world, quantum field theory, describes fields of insentience rather than sentience... if one makes that very modest assumption, nothing, uh, in today's physics and chemistry forbids, uh, you and I, talking to- to each other now, from both being P-zombies. So that's the question.

    16. CW

      What's a P- what's a P zombie?

    17. DP

      Oh, sorry. A philosophical zombie-

    18. CW

      Oh, okay. Yes.

    19. DP

      ... who acts, uh, in exactly the same way as you or me, uh, but isn't conscious, isn't sentient. And I think the interesting question isn't the- the skeptical question, how do I know that you're not a P zombie?

    20. CW

      Mm-hmm.

    21. DP

      The interesting question is: why aren't we P-zombies? Uh, how is it possible for consciousness, uh, to have the causal capacity to allow us to pose questions about its existence, for instance? So yeah, that's, in a nutshell, the hard problem. The binding problem, which probably fewer people are familiar with, is that even if you think that consciousness is absolutely fundamental to the world, why aren't we so-called micro-experiential zombies? I mean, as an example, here is, you know, the population of the United States, let's say, three... Uh, so, uh, yeah. What's the population of the United States?

    22. CW

      I'm not too sure. Give me one second and I will tell you.

    23. DP

      Yeah, it's, uh ...

    24. CW

      Let me see... population. It is 327.2 million.

    25. DP

      Three hundred and twenty... Yeah. Now, simply the fact that one has, uh, 327 million skull-bound, uh, minds, however they intercommunicate... nonetheless, we've no reason to think that the population of the United States is, uh, a mind, a unitary bound mind. Uh, one can't be sure that the population of the USA isn't a unitary subject of experience, but that kind of strong emer- emergence would be somehow spooky; it's, uh, difficult to reconcile with monistic physicalism. Um, now, the question is: why are our brains any different? After all, you and me, we are, uh, yeah, 86 billion membrane-bound nerve cells. Even if you think that individual nerve cells, uh, may kind of support rudimentary consciousness, rudimentary experience, why aren't we just micro-experiential zombies, just patterns of mind dust? Uh, and so yeah, that is the, uh, the binding problem. Uh, and yeah, here I'm very much out on a limb. I don't think phenomenal binding is, uh, a classical phenomenon (quantum mind); there are powerful arguments against this. But, uh, less controversially, uh, today's classical digital computers are not subjects of experience. And, uh, simply increasing their speed of execution, uh, their complexity, or even making them massively parallel, uh, there's no reason to think that sentience is somehow going to s- switch on. So-

    26. CW

      So it's-

    27. DP

      ... exactly-

    28. CW

      ... it's, it's not a case of, I think Sam Harris talks about it being that with processing power, consciousness comes along for the ride.

    29. DP

      Um, well, I mean, uh, yeah, I suppose a lot of researchers, a lot of AI researchers, do assume that at some time in the future our machines will become conscious. But personally, I think the idea that consciousness arises at some, s- some kind of computational level of abstraction is, uh, is, is a mistake, uh, and that I don't think classical digital computers, uh, are ever going to be more than zombies. Uh, so therefore, I don't think, uh, you or I are ever going to be, uh, uploading ourselves, uh, yes, into, uh, to- to digital computers. But I stress, this is obviously a, a controversial topic.

    30. CW

      Mm-hmm. Yeah.

  4. 45:00–54:21


    1. CW

      um, poetically beautiful way for us to transcend our own genetics?

    2. DP

      Uh, yes. So this is back to, you know, kind of natural selection designed us to be discontented, uh, a lot of the time. Um, I wouldn't say all of us, or even most of us, necessarily spend our time trying to improve ourselves, but certainly not many people think, "Well, I've had enough. Uh, I've got enough money, or I've had enough reproductive opportunities," and so on. Other- other things being equal, wanting more is fitness-enhancing. Uh, so, uh, so yes. Yes.

    3. CW

      I understand completely. So m- moving on, one question that I really wanted to ask was, if you had the opportunity to create a, a wish list of advisors to help the government understand the future and where they should be directing research and funds, would you be able to put together a little dream team of advisors for them? And would you be able to tell us why you'd choose those people?

    4. DP

      Um, I probably could, but just, uh, not off the top of my head like that.

    5. CW

      (laughs)

    6. DP

      And I would probably... Uh, it would also be invidious too, and so- (laughs)

    7. CW

      (laughs)

    8. DP

      But, uh, uh, yes, obviously I would, I would love to be able to do so. I mean, as, as yet, uh, no, uh, shall we say, b- billionaire or corporate, uh, uh, uh, colossus has, uh, has approached, uh, me with, uh, uh, uh, uh, funding. But yes, it's, uh, it's- (laughs)

    9. CW

      It's in the post, David. I promise you it's in the post, mate.

    10. DP

      (laughs)

    11. CW

      I think it will be. So in terms of risks, are there, um, things that are at the forefront of your, of your thinking, um, that you're concerned about as we move forward with this? Are there risks to the way that the project can go, the way that the public perceives it or, uh, um, more kind of nitty-gritty, um, concerns to do with the actual way in which we proceed towards the, the hedonistic imperative?

    12. DP

      Uh, yeah. By risks, uh... And there are some people, including some transhumanists, who think of life as fundamentally good and we don't want to put it all at risk. Whereas my conception of life today, frankly, is much, uh, bleaker: that, uh, the non-human animals in our factory farms and slaughterhouses are as sentient as, uh, pre-linguistic toddlers. Uh, and-

    13. CW

      Is that the c- is that correct?

    14. DP

      Um, one- one, if one is being more careful, one has to say that a pig, for example, is demonstrably more sapient than, uh, uh, a pre-linguistic toddler. One can't be, uh, certain, uh, that the pig is as, uh, sentient.

    15. CW

      Yes.

    16. DP

      Uh, nonetheless, the actual particular structures, neurological structures, uh, genes, neurotransmitters that mediate our most, uh, powerful experiences, uh, of, uh, panic, of pain, distress, uh, are nearly, uh, identical. So short of radical, uh, skepticism or solipsism, yes, uh, I, I am as, uh, confident, uh, that a pig, uh, is sentient as, as, as I am that you're, you're sentient. Uh, and so yes, given what I think posterity will recognize as (laughs)... you know, a crime against sentience of, uh, almost unimaginable proportions, I think, uh, perhaps our most urgent priority right now is just to get factory farms and slaughterhouses, uh, shut and outlawed. Now, moral argument clearly plays a role, but, uh, I'm quite cynical about human nature. I think we're going to need to, uh, hasten the development and commercialization of in vitro meat, of meat, meat substitutes.

    17. CW

      Mm-hmm.

    18. DP

      Uh, in vitro meat is not a distinctively transhumanist technology, uh, and in vitro meat could be genetically engineered, but it's much, uh, more likely to be widely accepted if, if it's not. Um, uh, but yeah, essentially, I, I would see, yeah, we have this, this absolute moral obligation to get factory farms and slaughterhouses, uh, shut. I, I think, uh, heaven knows how we'll explain what we did, uh, to our, to our grandchildren. Um, so yeah. I mean, although I can happily talk to you about all the wonderful transhumanist technologies and ideas for the future, I think part of, uh, creating a world based on gradients of intelligent bliss, uh, involves stopping systematically harming sentient beings. Uh, we can't, we can't be serious about trying to build a happy biosphere if we're systematically, uh, harming others to gratify our own appetites.

    19. CW

      Yeah, it's a- an interesting perspective to be able to, uh, try and take yourself away from your current experience and almost imagine the remembered self, uh, looking from the future. That degree of perspective I think is something that not a lot of people are, are used to doing, um, and it's obvious that when you do look at it, uh, with- in black and white, uh, you are correct that you're breeding animals to suffer purely for your own enjoyment to eat. Um, it's, it's a difficult, it's a difficult justification to make.

    20. DP

      Yeah. I mean, personally, I, yeah, c- it's completely, uh, uh, ethically indefensible. I, I mean, I sh- I should add that, you know, by an accident of birth, I've never even tasted, tasted meat. So it's not ... I mean, as I said, it's not as though I'm trying to parade my m- moral superiority-

    21. CW

      Mm-hmm.

    22. DP

      ... but if one hasn't got this, uh, this, this source of bias, yes, seeing what we are doing... You know, one reads some, let's say, some horrific case of, uh, child abuse in the papers and viscerally feels, "God, this terrible abuser ought to be locked up for life." But then, yeah, you see, you know, across the table, someone tucking into a bacon sandwich. Uh, it's... Yeah. Uh, but yeah, if I pass the, you know, the meat counter in a supermarket, I, I think of Auschwitz and I think of child abuse, and, uh, yeah. I mean, th- this is it. When, when people are actually shown one of these videos about what goes on in factory farms and slaughterhouses, some people seem genuinely, genuinely, uh, sort of, uh, shocked. But yeah, you know, far worse things go on off camera. I mean, it's, uh-

    23. CW

      Yeah, that's-

    24. DP

      Yeah, it's, it's ... Uh, yeah, it, it really is, uh, uh, uh, unspeakable. Yeah.

    25. CW

      Fantastic. So David, before we wrap up, I wanted to ask for a suggestion of a book or a resource or a blog which you would recommend if people think, "This was a, an interesting discussion and I'd, I'd quite like to find out some more about transhumanism and the transhumanist movement."

    26. DP

      Ooh, heavens. Uh, well, I have-

    27. CW

      You're put on the spot again, David.

    28. DP

      (laughs)

    29. CW

      You're gonna have to, you're gonna have to put your money where your mouth is. I'm not letting you slide- sidle out of this one like the, the government advisors thing that you got out of.

    30. DP

      Well, in spite of, in spite of my low, uh, testosterone, I'm going to say back in 1995, uh, oof, uh, when the web was young, uh ... In fact, it was 1996-

Episode duration: 54:22


Transcript of episode snDy0vtmss4
