Modern Wisdom

How Do We Define What Is Good & Bad? | Cosmic Skeptic | Modern Wisdom Podcast 214

Alex O'Connor is a philosopher & YouTuber. Get ready for a mental workout today as Alex poses some of the most famous and most difficult questions in ethics. What does it mean to say that something is good? Why SHOULD you do one thing instead of another? Why should we care about wellbeing? What is the definition of suffering? On whose authority is anything good or bad?

Sponsor: Check out everything I use from The Protein Works at https://www.theproteinworks.com/modernwisdom/ (35% off everything with the code MODERN35)

Extra Stuff:
Watch Alex on YouTube - https://youtu.be/gcVR2OVxPYw
Subscribe to Alex on Patreon - https://www.patreon.com/CosmicSkeptic
Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

#morality #ethics #cosmicskeptic

Listen to all episodes online. Search "Modern Wisdom" on any Podcast App or click here:
iTunes: https://apple.co/2MNqIgw
Spotify: https://spoti.fi/2LSimPn
Stitcher: https://www.stitcher.com/podcast/modern-wisdom

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: modernwisdompodcast@gmail.com

Alex O'Connor (guest) · Chris Williamson (host)
Aug 27, 2020 · 1h 28m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–15:00


    1. AO

      Well, there's a difference between doing and allowing. There's a difference between allowing a bad thing to happen and being the cause of a bad thing to happen, but it gets even more complicated than that because, like, you've got to decide, like, how are you defining the difference between doing and allowing, like, what, what really is the difference there? For example, if I walk up to somebody who is attached to a life support machine and I unplug them, have I killed them or have I allowed them to die?

    2. CW

      (laughs) Alex bloody O'Connor, how are you?

    3. AO

      Chris, I am, I'm well. All the better for seeing you, as, as they say. Uh, it's been, it's been too, it's been too long. I haven't seen you in person since we went to that event in London, and I can't think how long ago that was now-

    4. CW

      February.

    5. AO

      ... where you made me do the yoga that you seem to be telling everyone and their dog about and-

    6. CW

      (laughs)

    7. AO

      ... uh, that photo of me, that photo of me that you keep posting is like, you know the one of Beyoncé that she wanted to disappear from the internet?

    8. CW

      (laughs)

    9. AO

      That's my version of that, is me trying to work out how to put my leg under the thing. Yeah, it, it was a nightmare. Um, but yeah, it's, it's a shame. It's good to be speaking to you again, even if it's a, a public conversation.

    10. CW

      Yeah, I know, man. It is, uh, that was one hell of a weekend.

    11. AO

      Mm-hmm.

    12. CW

      One, one, one hell of a weekend. I've got, I've got the full-length one-hour yoga form recording both of us doing it side by side, and I'm considering offering it out to the highest bidder.

    13. AO

      (laughs)

    14. CW

      I'm pretty certain there's some people on the internet, some fairly sort of prominent, uh, debaters of yours that would pay good money for that kind of ammunition.

    15. AO

      Yeah. Yeah. I, I, I, I do worry about some of the ammunition that my friends have on me and the people they could sell it to. I, I think maybe I could release it as a Patreon exclusive or something. That might be a good... Or you could release it as a Patreon exclusive. That, that's an even better idea.

    16. CW

      And then steal all your patrons.

    17. AO

      There you go. Yeah. (laughs)

    18. CW

      (laughs)

    19. AO

      Yeah, they'll, they'll jump over to you then. Yeah, I don't know. Um, we'll see. But, you know, I haven't even seen that video, so God knows what other-

    20. CW

      Awesome.

    21. AO

      ... weird shapes I try to morph my body into.

    22. CW

      It, it was graceful. It was, it was your first time, you know. No one's good at the first time.

    23. AO

      No one's g- (laughs)

    24. CW

      No, no one's good the first time.

    25. AO

      Yeah, no one's good at the first time, as they say. Um, hmm. Well...

    26. CW

      But no, yeah, that was s- that was February, man. That was a while ago.

    27. AO

      Was that February?

    28. CW

      Yeah.

    29. AO

      Man, it's crazy. It's a crazy world out there. Um, it, I, I've been getting a bit of flak for it actually, because, you know, um, speaking... I... 'cause I, I'm, we're gonna be talking about ethics today. Um, uh, 'cause I, I... when you reached out to me and I, I was on your podcast before, and we talked about veganism, which was the first time we properly spoke, and that was ages ago now. Um, but of course, talking about veganism requires ethics more broadly as a, as an underlying, but hopefully I can sway you in an ethical direction that, that puts you off the idea of sharing those videos of me online, but we'll see where it goes.

    30. CW

      (laughs) Yeah. Okay. I see how you're circling this back around now. I am a bad friend if that video ever surfaces-

  2. 15:00–30:00


    1. AO

      they could just have a consistent worldview that's, that's wrong at its basis, right? So, it depends what you're trying to do. If you're trying to convince someone of a moral cause, then it's better to talk on a practical level and try to point out inconsistencies. But if you're trying to get to the question of what is actually good, uh, then you're better off talking a bit about metaethics. And one of the most important questions is, what is the focus of ethics? Does, does the focus of ethics, uh, or let's say, does ethics, should ethics focus on, say, the consequence of an action? Should it focus on the action itself? Should it focus on the agent performing the action, right? These three are, are broadly three ways in which people distinguish ethics. So, if we consider a statement like, "Murder is wrong," some people might analyze that to mean that murder is wrong because the consequences that it leads to, that is, you know, someone dying, people suffering, people mourning, are bad, right? And so, generally speaking, in order to determine whether something's right or wrong, we look at the consequences of the action. Seems somewhat intuitive, but some people like to instead say that the focus of ethics should be... and this is consequentialism, sometimes called teleology, from the Greek word telos for end or purpose. Um, some people prefer to look at the agent. They prefer to say that, "Don't... The reason you shouldn't murder is because the virtuous person wouldn't murder." Right? Like murder is not a virtuous thing to do. A- Aristotle's ethics was a, a virtue ethics theory, and it was kind of like, the right thing to do is what the virtuous person would do, in other words. So it's, it's not so much about the action or its consequences, it's about, it's, it's about the person committing the action, right? Um, some people prefer to just look at the, the action itself, not the consequence, but the action itself. 
They say murder is wrong in and of itself, regardless of the consequences, regardless of who's performing it. Um, and this generally comes from... This, this, this is a kind of typical view of religious people a lo- a lot of the time with divine command theory, who think that ethics is just what's commanded by God. So if God says, "Don't murder," then don't murder. That, that's, that's kind of it, full stop. Doesn't matter what the consequences are, it's just wrong in and of itself. And all of these things have kind of weight to them. And the reason why, why people, like, kind of flip back and forth in them when they're studying them is because each of them seem to have kind of difficult ethical territory. Like if you... I, I think most people are, at the beginning, tend to be more, uh, attracted to consequentialist ethics. Um, and that's because I think that generally speaking, our society is, is, is, is a bit more based on consequentialist ethics than anything else. Like in, in, in the modern era, that seems to be the, kinda the implicit way that people do ethics. But there are some really kind of, uh, difficult problems with that. So for instance, let me take your, uh, let me take your proposition that kind of the right thing to do is what maximizes wellbeing, right? So this would be a consequentialist view, and essentially a utilitarian one. Um, utilitarianism being the idea that we should maximize utility, and utilitarians identify utility, uh, utility with pleasure. So essentially, the, the, the best thing to do is to maximize pleasure or minimize suffering. Now, there are, there are various problems with this, but let me give you one example. Uh, this comes from a guy called Roger Crisp, who's a kind of leading John Stuart Mill, um, scholar. And every undergraduate at Oxford has to read his commentary on, on, uh, utilitarianism, and he gives this example of the rash doctor, right? So let me ask you a question here.... 
a doctor has a patient, uh, and, and they've got two potential medicines that they can give to the patient, okay, (laughs) like option A and option B. Uh, option A has... Uh, option A, if successful, will restore the patient to 100% health, but it's got a 99% chance of failure and only a 1% chance of working. The 99% chance is that they'll die, right? So 99% chance that this is just gonna kill the patient. Only 1% chance it's gonna succeed, but if it does, it's gonna restore them fully to health. Option B, it will only restore them to, say, 85% health, but it's got a 99% chance of being successful and only a 1% chance of the patient dying, right? Say the doctor chooses option A and it works. Did the doctor do the right thing?
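The expected-wellbeing arithmetic behind the rash-doctor dilemma above can be sketched as follows. The 0-to-1 wellbeing scale, the function name, and scoring death as 0 are illustrative assumptions, not from the conversation:

```python
# Expected-wellbeing comparison for the "rash doctor" dilemma.
# Outcomes are scored from 0.0 (death) to 1.0 (full health) -- an
# assumed scale chosen here purely for illustration.

def expected_wellbeing(p_success: float, health_if_success: float,
                       health_if_failure: float = 0.0) -> float:
    """Probability-weighted average of the two possible outcomes."""
    return p_success * health_if_success + (1 - p_success) * health_if_failure

# Option A: 1% chance of full recovery, 99% chance of death.
option_a = expected_wellbeing(0.01, 1.00)   # 0.01 * 1.00 = 0.0100
# Option B: 99% chance of 85% health, 1% chance of death.
option_b = expected_wellbeing(0.99, 0.85)   # 0.99 * 0.85 = 0.8415

print(f"Option A expected wellbeing: {option_a:.4f}")
print(f"Option B expected wellbeing: {option_b:.4f}")
```

On this probabilistic reading, option B dominates by a wide margin (0.8415 vs 0.0100), which is exactly why the "you should do what will *probably* maximize wellbeing" amendment discussed next feels so natural.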

    2. CW

      (laughs) To everyone who's listening, I've already warned you about this, but I want you to be playing along at home as well, because this difficulty, the, the mental gears that you're going to be able to hear in me that are whirring away, I want you to be suffering along with me. So, um, from a consequentialist, just the outcome... Does the end justify the means? Um, I suppose in that form, yes. Um, you could do that a million times and keep on getting the one out of 100.

    3. AO

      Well, so the difficult thing to say is, like, intuitively, when, when faced with the option before we know what's actually gonna happen, and you've got the two, the medicines in front of you, like, you'd probably advise the doctor to take option B, right?

    4. CW

      Yes. Yes, of course.

    5. AO

      I'd imagine so.

    6. CW

      Of course, yeah.

    7. AO

      And that seems, that seems justifiable, right? It s- it seems like that should be the case. But the weird conclusion is that if he uses option B and it works and the patient's restored to 85% health, whether or not the doctor did the right thing completely depends on what would have happened if he'd administered drug A. Because if had he administered drug A, it failed, then what the doctor did actually did maximize wellbeing, right? 'Cause it was 85% health versus death. Whereas if it were the case that had he gone for option A it would have worked, then what he's done has actually not maximized pleasure, right? Because instead... Or wellbeing, let's say. Uh, because instead of 85% health, he could have got 100% health. Now, the, the kind of easy answer to this is to say, well, okay, so it's not actually about... You, you shouldn't do what will actually maximize pleasure. You should do what will probably maximize pleasure, right? But you can see we've already kind of adapted the theory, right? We've already gone-

    8. CW

      Yes.

    9. AO

      ... from kind of saying, well, obviously it's, it's about kind of what maximizes... The right thing to do is whatever's gonna actually maximize someone's wellbeing. But, like, that's not always the case, because you, you-

    10. CW

      Now there's a caveat in there.

    11. AO

      Yeah. Even, even if, like, in this situation, yeah, had you done the other thing, it would have actually, like, in, in reality, in the actual world, would have caused more, more pleasure. It's like, it probably wasn't justified to do that, right? So yeah, now, now we're kind of talking about probabilistic utilitarianism, right? Um-

    12. CW

      Does this continue to roll down? So does probabilistic utilitarianism then split into some other subdivision, some other subdivision?

    13. AO

      Well, there, it doesn't always divide, kind of subdivide in that manner, but there are, there are lots of different kinds of divisions, so-

    14. CW

      I, I imagine there's just a tree branch that continues to go down. My, my point is like, uh, for every-

    15. AO

      Yeah.

    16. CW

      ... situation that you encounter, do you then need to continue to create a, um, subdiscipline within that that allows you to explore that particular type of solution?

    17. AO

      Pretty much. And, and luckily, because these questions have been being asked for thousands of years, you can find hundreds of essays on any particular kind of individual instance of, of a moral dilemma that you have. But, like, there's another, there's a further distinction that might be made, uh, between what you could call, uh, what, what Roger Crisp at least calls the criterion of good and the decision procedure, right? And these are two separate things. So the criterion of good is, like, the criterion by which we determine whether or something... whether or not something is good. Um, whereas the decision procedure is the way that we try to go bringing about that good, right? So take this utilitarian analysis where we've shown that, you know, you should act in a way that probably maximizes pleasure. Like, that would be our decision procedure. Like, we, we kind of decide intuitively that the way to determine how we decide what to do should be based on probabilistic utilitarianism, right? But has it changed the actual criterion of good? If we offer a kind of abstract analysis of what good is, well, we don't think that good is what would prob... We don't think that good is the result of what would pr- what you should probably do or something like that. We, we still think that, that the good thing is what maximizes pleasure and minimizes suffering, even if we've decided that the way we decide which actual action we're gonna take is more probabilistic. So the criterion of good for utilitarianism is still what is actually most pleasurable, but the decision procedure leads us to probabilistic utilitarianism. Uh-

    18. CW

      The route to get there now has some form of discounting that's been thrown in it.

    19. AO

      Yeah. And it seems a bit strange. Like, why is it that we've got an ethical theory where we've decided that this is what's good, but that's not what we should actually do in order to try and achieve that good, right? It seems like an inconsistency. Uh, seems a little, little strange, right? Um, further distinctions like, uh, the, the classic kind of argument against utilitarianism is, is something like, uh, an instance of a gang rape or something. It's like, well, don't, don't the pleasure of, of the many outweigh the suffering of the single individual? And some people would say, well, no, because the suffering is so great that even five people, you know, getting immense pleasure, it's not gonna outweigh it. But if, if you think that, then just make it six people or seven people or 100 people until the, the, the scales get balanced out. And some people would say something like, well, clearly it would still be wrong in all circumstances, right? So can we really say that the maximization of pleasure is the criterion of good, is how we should determine what we're, what we're doing? If we've got a situation where it seems it doesn't matter how much kind of the, the scale of the wellbeing tips one way or the other, like, we still wouldn't be in favor of this, right? Um, and it's like, yeah, you, you've now gotta rethink things, right? And this is why people prefer sometimes a, a, a kind of action-based view of morality, um, known as deontology, right? Like the idea that the thing is, is wrong in a, in and of itself, right? It's not about the consequence. It's that gang rape is wrong.... it's not wrong because of the suffering that it will bring about this person or something like that. I- i- it's just wrong, right? And so, when faced with an ethical dilemma like that, you've kind of got two choices. 
You either have to further adapt or explain or analyze your utilitarianism, or maybe you have to adopt deontology, or maybe you have to accept the conclusion that gang rape is actually moral, and that's the least popular-

    20. CW

      (laughs)

    21. AO

      ... (laughs) line to go down-

    22. CW

      Yeah.

    23. AO

      ... funnily enough. Um, but so, you know, the, the utilitarian might say, "Okay, well, look, I mean, it's, it's not about what will maximize pleasure in any given instance, but let's say, you know, uh, the best thing to do is to act according with a general rule, which if followed broadly would maximize pleasure, right?" So even if in that individual instance, you know, it would maximize pleasure to allow people to, to commit horrible crimes, like, if we allowed everybody to live by that rule, suffering would, would rise overall because of people being scared of being accosted on the street and people being scared of being robbed or raped, whatever it may be. So, okay, so, so it now becomes, look, the thing that we should do is act in accordance with rules which if generally a- abided by would, would maximize pleasure. Okay? So now our decision procedure has kind of morphed into, you shouldn't do what always maximizes pleasure. You shouldn't even do what always probably maximizes pleasure. Y- you should do what w- what would probably maximize pleasure if we made it a rule that everyone followed. It's like we're getting a lot more kind of further detached from, from the-

    24. CW

      We're down the tree. We're down this little tree now, bro.

    25. AO

      Right. And you notice the w- the way that we've done that is simply by taking ethical theory that w- that we started with, that, that you kind of just kind of hypothesized at the beginning and just said, "But that leads to this. Okay, so we should adapt it in this way. But then that leads to this, and that leads to this, and that leads to this, right?" And, like, yeah, the- these things kind of come out of nowhere. Like, a lot of the time someone will come up with an idea that just says, like, like, "What about this counter-example?" Um, and it just, it just kind of blows everyone away, and, and everything has to be rethought. That, that happened in the, uh, in the philosophy of knowledge, um, because (laughs) one of the, one of the most interesting things about philosophy to me is that nobody has a sufficient analysis really of what knowledge is. No one can really decide on, on w- what constitutes knowledge. Um, and the reason for that is because, well, let's, let's think about what kno-... I mean, uh, okay, let me, let me just ask you just out of interest, like, what do you think... I- if you had to give, like, a definition of knowledge, what would it be? Like, how can you say that you know something is true?

    26. CW

      Th- uh, that sounds like two questions, knowing that something is true versus knowing things. I guess it's-

    27. AO

      Well, I mean to say, like, what- what's the definition of knowledge in, in either case? What's it mean to know something?

    28. CW

      An accumulation of understanding about the world?

    29. AO

      Okay, so, so, uh, basically you now holding a belief about the world...

    30. CW

      Which is represented accurately in reality.

  3. 30:00–45:00


    1. AO

      with her?" And you have to go, "Oh, crap," and you run after them because-

    2. CW

      Oh, the fur, yeah. My favorite jacket.

    3. AO

      ... 'cause you realize that, that what you, what you've said, yeah, what you've said has actually taken away this really important belief of yours, right?

    4. CW

      (laughs)

    5. AO

      Because you, you're like, "I think this is, this is the right theory, this is the right way to go," and someone says, "Yeah, but you know that if you, if you do that, then this other belief you hold has to fly out the window." And you're like, "Oh, crap," and you're running after it as fast as you can, right?

    6. CW

      Yeah, okay.

    7. AO

      Um, and this kind of, this kind of happened with knowledge, right?

    8. CW

So Edmund Ge- Gettier.

    9. AO

      Gettier, yeah. Um, and such, such cases as he presented in this paper are known now as Gettier cases. Um, which essentially, a Gettier case is an instance of justified true belief that is not knowledge. Because again, we're working with counterexamples here. So if, if the theory is that, well, knowledge is justified true belief, then if you can offer an example of someone having justified true belief that isn't knowledge, then we have to throw out that theory-

    10. CW

      Yes.

    11. AO

... and we have to come up with something better. So Gettier says, imagine... And I- I need to make sure. It's be- it's been a while since I've, since... I, I want to make sure I get this right. Imagine somebody is waiting for a j- th- they're in a job interview. There are two guys in a job interview. (laughs) And, uh, while they're waiting to hear back from the, from the interviewer, the, uh, the person who he's across from is getting bored and he decides to, to take the coins that he's got in his pocket out and starts counting them on the table because he's bored. And he sees him counting 10 coins, right? So he, he knows that this guy's got 10 coins in his pocket. Um, then what happens is the interviewer comes out and basically says, "Listen, uh, you know, I haven't spoken to the board yet, but it, but it seems like you're gonna get the job, right?"

    12. CW

      (laughs)

    13. AO

      "Like, we're, we're pretty sure you're gonna get the job." Um, which... Uh, oh, oh, sorry. He, the, i- we're pretty sure the other guy's gonna get the job, the guy who was counting the coins. They say, "We're pretty sure this guy's gonna get the job." And, uh, this kind of gives you a justified belief that this man's gonna get the job. But one thing that you know, a- although you could say that, you know, maybe you, you have a justified belief that this man's gonna get the job, another thing that you have a justified belief in, by derivative, is you have a justified belief that the person who will get the job has 10 coins in his pocket. 'Cause you've seen this guy counting out 10 coins and you've got a justified reason to think that he's gonna, that he's going to, um, to get the job. Uh, and so you have a justified belief that this person, that the person who gets the job is gonna have 10 coins in his pocket. Now, something goes wrong, something like really unexpected, something unlikely, right? So this, this... It's not fair to say that you could have predicted this, but something, something happens and as it turns out, you end up getting the job. It's you, not the other guy, you get the job. And you think, "Oh, this is great, I've got the job." But just as it happens, you happen to have 10 coins in your pocket, right? Just by chance, you've also got 10 coins in your pocket. So your belief, you had a justified true belief that the person who gets the job would have 10 coins in his pocket. But it, it seems, it, it seems like you can't say that you knew that. Because as it turns out, like, yeah, I, I guess it was true that the person who got the job had 10 coins in his pocket, and I guess you were justified in believing that. But, like, surely that's not knowledge, because clearly you kind of meant something else, right? Like, that can't be knowledge. But this is an instance of justified true belief. And this actually happened to me once in, in, in person. 
'Cause there are all kinds of Gettier cases that you can, that you can construct. Um, and that's kind of a, a clumsier one to, to understand perhaps. But this happened to me once. Um, I was, I was in a car and I was driving around a big corner, right? And so I saw this, this child, um, kind of above the hedge, like around the corner, kind of bobbing up and down. And I looked over there and I thought she was riding a horse. So as we're going around the corner, I think, "Oh man," like, "there, there's a horse over there." It's, that's quite exciting. I was quite excited to see this horse, right? So we turn around the corner, and turns out she's not on a horse. She's walking on... She, she's on, like, her dad's back, right? But just by chance, there also happens to be a horse in the field.

    14. CW

      (laughs)

    15. AO

      Now, I, I kid you not, this actually happened to me. And I, I sat there and I thought to myself, "This is a Gettier case." Because, like, although, you know, maybe I wasn't entirely justified in believing that the girl was, was on a horse. I... Like, seeing a horse that high up bobbing along, I think, you know, I could form a justified belief that she was riding a horse. Um, and the belief was that there's a horse there. So I believed that a horse was there. It was true that a horse was there, and I was justified in believing that a horse was there. But did I know that the horse was there? Like, can you really say that I knew it before... Do, do you see what I'm saying? Like-

    16. CW

      Absolutely.

    17. AO

      This doesn't seem to count as knowledge, right? And so Gettier kind of was talking about these cases and people were like, "Oh, damn, so now we have to, now we have to change it up."

    18. CW

Your mum's, your mum's gone off with the socks, yeah.

    19. AO

      And it just

    20. NA

      (laughs)

    21. AO

      Yeah. It, it just, it, it, it just kind of completely upends everything that we think about the analysis of knowledge. And this is what happens in ethics all the time. Um-

    22. CW

      (laughs) Some bastard comes in with a cricket-

    23. AO

      Yeah.

    24. CW

      ... a cricket bat and breaks everything.

    25. AO

      Exactly. Um, but sometimes it can also work in people's favor, right? So an example would be with, uh, an analysis of free will. Um, I'm someone who doesn't believe that free will exists. Or rather I say I have an active belief that free will does not exist. Um-

    26. CW

      Why do you have that distinction?

    27. AO

      Thi- Uh, because it's one thing to be just unconvinced of something. It's, it's another thing to believe that it's false, right? So like... Uh, let me put it this way, this comes from-

    28. CW

      Agnostic versus atheist.

    29. AO

      Pretty much, yeah. But it's... I, I would say that agnosticism is, is a claim to knowledge, whereas theism is a claim to belief. Um, so I'd characterize it like this. This, this comes from my friend Matt Dillahunty. If I had a random jar of, of coins, I w- I don't have anything I can use right now, um, and you didn't know how many coins were in the jar and neither did I, and I said, "Look, I think there's an even number of coins in this jar." Would you believe me?

    30. CW

      No.

  4. 45:00–1:00:00


    1. AO

      Like, if somebody manages to prove somehow, philosophically, that we're not living in a simulated reality, that would be a really important and, and, uh, kind of philosophical discovery of how we'd manage to argue that, um, that would have wide-reaching implications, right? For instance, if someone kind of discovered, it just so happens that I've got a philosophical argument that says that we can't replicate consciousness. 'Cause you know the simulation argument of Nick Bostrom says that, you know, humanity will get to the point where it can simulate human consciousness, and that consciousness will be able to simulate consciousness, and so on and so on and so on. And, and the likelihood that you happen to be in base reality is, is minimal, it's tiny. Um-

    2. CW

      To interject there, did you watch the episode of Joe Rogan where Nick tries to explain that to him?

    3. AO

      Uh, yeah, I d- I didn't but, uh, I re- we've talked about that, you said that-

    4. CW

      Yes, I brought it up. Anyone that's listening-

    5. AO

      ... people couldn't, aren't getting-

    6. CW

      So, you'll, you'll have heard me talk about Superintelligence a number of times, one of my favorite books. Recently read The Precipice by Toby Ord, who I met, uh, texted you about-

    7. AO

      Mm-hmm.

    8. CW

... and said, "Is he one of your, uh, lecturers at uni?" Uh, The Precipice, about existential risks, from the same Future of Humanity Institute. Nick Bostrom, guy that I've read for t- tons and tons of time, sits down with my favorite podcaster, Joe Rogan. I think, "Fucking hell, this is great. This is gonna be brilliant."

    9. AO

      Yeah.

    10. CW

      Um, Joe simply does not get the simulation hypothesis, which is like Nick's sort of, or at least one of Nick's crowning works. Um, and it's fairly straightforward to understand. And then-

    11. AO

      Mm-hmm.

    12. CW

      ... for 45 minutes, continues to force the audience down the same groundhog sort of-

    13. AO

      Yeah.

    14. CW

      ... exchange. Um, so yeah, if you want to find out about Nick Bostrom, do not watch his episode with Joe-

    15. AO

      (laughs)

    16. CW

      ... with Joe Rogan unless you want to tear your eardrums out. Um, so yes, uh-

    17. AO

      Yeah. You can just read the paper that he put out. Um-

    18. CW

      Precisely. There's, uh, there's one interesting thing I was thinking there, that the discovery of knowledge-

    19. AO

      Mm-hmm.

    20. CW

      ... in philosophy or ethics, where does that come from? Or what is that discovery? Because it's not like we have discovered a new star, this is a particular new type of element-

    21. AO

      Yeah.

    22. CW

      ... this is a new proton. It's somehow universal and existent-

    23. AO

      Mm-hmm.

    24. CW

      ... and yet, is also manifest by someone's thoughts and also quite sort of transient and ephemeral.

    25. AO

      I think you can think of it in the same way ... This, this is probably the most helpful way to think of it, p- potentially is, to think about it, uh, in terms of like mathematical, uh, discoveries. Because, you know, maths is kind of a language that we invent to describe things that we believe are analytically true. Um, and it's essentially tautologies. You know, to say that one plus one equals two is kind of the same as just saying two is two. Um, but you can make mathematical discoveries, 'cause people kind of have a, they, they put together equations. And I'm not a mathematician but, you know, like, you can kind of make discoveries by putting different propositions together and seeing, and seeing how they work, right? Um, and it's weird to think that you can kind of discover things in this manner. They, in this kind of weird abstract kind of sense, but like-

    26. CW

      It really is.

    27. AO

      ... I think the same thing is roughly going on, like, ethical movements are made when, when people kind of realize implications of beliefs we already hold, or realize a new way of justifying them or something like that, or realize an inconsistency that we, that we hold. Um, most of the kind of, most of the kind of ethi- When, when you say something like ethical progression, people tend to think of, like, in practice they tend to think of things like, uh, slavery being abolished, or, or the vegan movement. That's number four.

    28. CW

      Yeah.

    29. AO

      Uh, and like, yeah, sure, but that, there are two different things we could be talking about. 'Cause there's that kind of ethical progression, um, which is where we, we, we like to think, "Oh, well, we're, we're practically changing to live up to, like, the objective ethical standard that we've constructed or, or that exists," or however you want to frame it. Um, but then another question is like, what about the frame itself? Like, can, can we kind of have a development in, in that frame? And sometimes the development in the frame leads to developments in practice. But this is, this is one of the questions that will help you determine whether you think objective morality exists. It's like, do we discover ethical truths or do we invent them? Do we, do we kind of come across objective truths about the way we should behave, or do we just kind of decide, uh, on, on a new way of living, uh, to be consistent with our preferences, or something like that. Like-... that's, that's one of the most fundamental questions to, to figure out whether object- whether ethics actually objectively exists. Um, but the-

    30. CW

      Is that the, is that the question that Jordan Peterson and Sam Harris got a little bogged down with during their live debate?

  5. 1:00:001:07:14

    It's hard, man. W-…

    1. AO

      the, the, the majority of people tend to kind of say, "Well, actually, I think I'd have to say that I'd rather kill the person than immediately forget." Because like, if you're talking about like, i- i- if... As you say, like, it, it's like you don't know what it's like to live that life believing that you've actually killed that person. But there, there are two different questions at, at, at, uh, on the table here. The first is what you would do, and the second is what you should do, right? So like, I think most people when they think about it deeply enough realize that because of the amount of sacrifice that they're making, um, they should, uh, they, they would probably kill the person and then forget because they just can't deal with the, with the, with the k- kind of backlash of that. Um, but that's a separate question from what they, from what they should do, and maybe they shouldn't kill the innocent person. But you could also say like, you know, should we expect somebody to essentially sacrifice a life of well-being for the life of another person? Like, do we have the right to expect that of a person? Or do they have a right to say that, you know, "If I don't commit this action, it's gonna have this horrible impingement on my, on the rest of my life, and I actually have a right to kind of look after my own interests first, even if my own interests are kind of lesser than another person's." In the same way that if you, uh, decide not to give to charity, that's your right to do so. But like, the, the 25 pounds that you're gonna save is so much, is, is worth so much less to you than it would be worth... to people for whom you could buy mosquito nets or something and, and save them from getting malaria. Um, but we say that even though, like, the, the benefit that it gives you is, is much kind of, is much less than a benefit it would give the other person, like, you have a right to look after yourself first and look in- and, and look to your interests first. And maybe you could say the same thing in this instance. Uh, I'm not entirely sure, um, but it's a, it's a difficult question to think if you were really in that situation, what do you think you would do?

    2. CW

      It's hard, man. W- I mean, the, the question here, and it seems like this happens with a lot of it, is whether or not you're able to take a third-party perspective and the, the would versus the should, i.e. the armchair philosophy versus the actual brats, uh, uh, grassroots action. Those two things often, I'm gonna guess, come into conflict with each other because there isn't a third-party perspective. If we are talking about you doing the thing, there is no third-party perspective for you to take. There is only in the should, not the would.

    3. AO

      Right. But if... I mean, if morality is objective, then we should say, like, it doesn't matter what you would do. Like, there, there is a, there is a right answer to the question regardless of what you f- you find yourself actually doing. Like, uh, in the famous trolley problem, um, when you ask somebody whether they would pull the lever and whether they'd push the foot man of- fat man off the bridge, you know, the, the, the famous trolley problem.

    4. CW

      Take us, take us through it.

    5. AO

      So, uh, you know, the, the trolley is, is going down the track and it's about to run into five workers who are working on the track, and you can pull a lever and it will divert the, the tra- uh, the train onto a track that's got a single person working, so it'll kill one person instead of the five. And the question is, you know, should you pull the lever or would you pull the lever? And most people say, "Yeah, of course I'd pull the lever. I'm gonna s- I'm, I'd rather, you know, the train goes into one person than five people." The principle being, yeah, okay, so I'll sacrifice one person's life to save five innocent people's lives. Fine. But then you ask the question, what if you're walking along and there's no lever and the train's going towards five people on the track, but there's this really fat man walking across the bridge, and if you push him off the bridge, he's gonna land on the train, it's gonna kill him, but it's gonna stop the train. Would you push the fat man?

    6. CW

      (laughs)

    7. AO

      And people are like, "Well, n- I don't... No, I don't think so. I don't think I have the right to." See, that's like, hold on, why not? Like, why is it that you're willing to kind of pull a lever that kills one person to save five people, but you're not willing to push the man to kill the man to save the five people? And Michael Sandel, who's, um, a philosopher at Harvard, one of the most famous philosophers living, he's got a great book by the way called Justice, um, which is a fantastic introduction to ethics. Like, I, I don't feel I've done a very good job here of like actually going through the various ethical theories. We've just kind of been mulling here and there. But if you want like a-

    8. CW

      That's his fun stuff.

    9. AO

      ... if you want like a really good introduction to just what ethics is, the different ways of thinking about it, Justice by Michael Sandel is a fantastic book. And, and in a, in a lecture you can watch on YouTube, he gives the trolley problem, which is a great starting point for practical ethics. It's one of the first things that people will, will talk about. And, and he's talking to his students and, and he's asking this question and, and the student kind of says, "But, you know, I don't wanna... It's different because there's a difference between me, like," um, if, if you, if you're like driving the train. So Michael Sandel says you're driving the train and you can, you can turn the, you can turn the wheel and it goes into one person instead of five people, and that's instead of the lever, right? So most people say, yeah they, they would turn, they would turn the wheel, they'd go into the one person instead of the five, but they wouldn't push the fat man. And one of the students says, "But look, the, I mean, the difference is like, there's a difference between like turning the wheel and like actually like getting your hands on a man and pushing him off the thing." And so Michael Sandel says, "Well, what if the fat man is kind of on a trap door, and the way to open the trap door is to grab a big wheel and you just turn the wheel to..." (laughs)

    10. CW

      (laughs)

    11. AO

      And it's like, okay, like, yeah, I still probably wouldn't do that, right? And, but again, the reason I bring this up is to say like, I think it's fair to say that maybe like most people would, would pull the lever but wouldn't push the fat man. But surely if, if the principle is the same then you should do the same thing in either situation. Or is there a difference? Like, y- you know, what's going on there? And i- interestingly, like there have been studies done where, where people have undergone an MRI scan whilst being asked the trolley problem, and when they think about like the lever... Uh, so, okay, so, so the people who say that they would both pull the lever and push the fat man, when they're thinking about the question, the parts of their brain associated with rationality are lighting up. For the people who say that they would pull the lever, but wouldn't push the fat man, when they're thinking about the questions, the emotional parts of the brain are lighting up, right? Implying that actually, yeah, no, no, the, the, the reason you wouldn't push the fat man is actually because of like, you know, your emotional tendency to what you would and wouldn't do rather than your rational thinking about what you should and shouldn't do.

    12. CW

      Mm.

    13. AO

      But okay, there are some situations in which I can give you two situations which are like almost exactly the same, and yet you would say that one is right and one is wrong. Um, let's say, for example, you're an ambulance driver and you've got two people in the back and they need to get to hospital immediately, right? And if you don't get there immediately, they're gonna die.

    14. CW

      Yeah.

    15. AO

      As you're driving along, you look out the window and you see a boulder just rolling and rolling towards an innocent man, right? Now what you could do is you could stop the ambulance, you could get out of the ambulance, you could push the boulder out of the way and you'd save the innocent man. But you shouldn't do that, right? You should stay driving it 'cause then the two people in the back of the ambulance are gonna die. So you stay, you stay in the ambulance, you think, "I wish I could save that man, but I can't. I'm willing to like allow him to die so that I can make sure that these two guys get to hospital." Right? Fair? Would you say that that's-

    16. CW

      Busy day.

    17. AO

      ... a fair analysis?

    18. CW

      Fairly busy day.

    19. AO

      Busy day, yeah.

    20. CW

      (laughs)

    21. AO

      Hope that guy gets a pay rise.

    22. CW

      (laughs)

    23. AO

      Um, but now imagine same situation, except this time you're, you're driving the ambulance, two people in the back, and there's a boulder in the way. It's in the road and the only way to, to keep moving forward is to, is to kind of push the boulder so that it rolls and kills an innocent man. Are you allowed to do that? Seems like maybe not.

Episode duration: 1:28:34


Transcript of episode ZpDFF8aNs60
