Modern Wisdom: Are Sex Robots And Self-Driving Cars Ethical? - Sven Nyholm | Modern Wisdom Podcast 287
EVERY SPOKEN WORD
130 min read · 25,729 words
- 0:00 – 15:00
- Sven Nyholm
The main issue that people worry about so far is that sex robots will inspire people to have strongly objectifying attitudes towards sex partners, to sort of make their empathy go away, because there's literally no mind or subjectivity there on the other side for you to be sensitive to. And so one worry would be that one keeps interacting with this robot and then stops caring about the feelings of the other, whether they consent to what you're doing, and then that you would carry over that behavior to a human.
- Chris Williamson
Why is the relationship between humans and robots interesting?
- Sven Nyholm
Oh, okay. (laughs) Well, it's interesting for a bunch of different reasons. One reason is that people are, in a certain sense, not well prepared to deal with or respond to robots, because our brains, our psychology, developed before we had any robots, before we had any AI. It developed during human evolution over hundreds of thousands of years, and then suddenly we have this enormous jump in technological development. Now we have robots, we have AI, we have all sorts of interesting technology, but we're responding to it with the brains, the human psychology, that developed during that long time. That sometimes means we respond to things that look or act like humans, which some robots do, sometimes in humorous ways, sometimes in ways that might be dangerous for us, but very often in fascinating ways. And so that's one good reason, I think, to think about the relationship between humans and robots.
- Chris Williamson
So, our current mental makeup is unsuited to interacting with robots. That's the basic sort of foundation?
- Sven Nyholm
Yeah. We're basically primed so that anything that moves seemingly of its own accord, in an apparently intelligent way, our brains will think, "Okay, this is some sort of agent. It's an animal, it's a person," and our human social attitudes are triggered. This can happen at the same time as we're thinking to ourselves, "You know, it's just a robot. It's just a machine. It doesn't have any feelings, doesn't like me or dislike me." But nevertheless, emotionally, we respond to the entity as if it's another person. Not all of the time, but the interesting thing is that this is not just true of laypeople, but also experts. They talk about robots as if they have a mind, as if they have desires and beliefs. People will say about a self-driving car, for example, and I'm thinking of a self-driving car as a kind of robot, that it wants to go left or right, that it has to decide what to do. So we have this tendency to anthropomorphize, as people say, that is, to attribute human-like qualities to robots, to technologies.
- Chris Williamson
I mean, even before robots, people would describe your dad's car or your mom's car and say, "Oh, she's a little bit cold this morning. It might take a little bit of time to start her up." We personify inanimate objects in that way, don't we? Everything. Thomas the Tank Engine has a face. He's a tank engine with a face. He doesn't need a face. He's a tank engine. But still we decide to give him one.
- Sven Nyholm
Yeah, the face makes a difference. Just put a pair of eyes on something and people will feel like they're being watched. And if that thing with the pair of eyes can also move in a functionally autonomous way, can behave in a way that seems intelligent to us, then again, all of your responses that have been programmed into you by evolution over hundreds of thousands of years are going to be triggered, even if you think to yourself, "Okay, it's a robot. Someone painted eyes on it."
- Chris Williamson
(laughs)
- Sven Nyholm
"They wanted me to respond in this way." But you can't help it. And as I said, it's not just laypeople, it's experts too. One of my colleagues, Joanna Bryson, argues that, in her way of putting it, robots should be slaves, meaning that they are things that we can buy and own, created to be useful for us. You shouldn't treat any person like this, because that would be a slave. However, in the case of a robot, on the one hand you're responding to it as if it's a person, but on the other, it's something that's bought and sold, created for human use. So what she's arguing is that we should design robots in such a way that we don't have these responses. The problem, of course, being that this happens even when a robot doesn't look like a human or an animal. There are interesting stories about military robots that look like, I don't know, vacuum cleaners or lawnmowers, but they're part of the team; the soldiers become attached to them. There was one robot that helped to find bombs on the battlefield, and eventually it got blown up by a landmine or something like that. The soldiers wanted to fix that particular robot. They didn't want a replacement, not even a better one. Eventually it was destroyed beyond repair, and then they gave it a military funeral. They wanted to give it medals of honor, et cetera. So you can get very attached to a robot even though it doesn't look like a person, like a human. That robot didn't even act like an animal or a human, but nevertheless they got really attached to it. So it's fascinating.
- Chris Williamson
Didn't... There were two other examples from your book. One is a robot that got to meet the Queen of Denmark, and then another one-
- Sven Nyholm
Ah.
- Chris Williamson
... was given honorary citizenship in China or South Africa or something?
- Sven Nyholm
Yeah, so the examples are right, but the countries are wrong. The one that got to meet the Queen, that was here in the Netherlands, where I am. This was a robot called Amigo, who is, or perhaps I should say which is, a medical care robot. Again, this doesn't look like a person, though it does have a head and it has sort of arms. The former university I worked for was developing this robot, and so when the Queen came to visit, this robot gave the Queen a bouquet of flowers and asked her, "So, what's your name?" And the Queen immediately responded by accepting the flowers and saying her name. None of the students at the university got to meet the Queen, only this robot, and (laughs)-
- Chris Williamson
(laughs)
- Sven Nyholm
... yeah. The other one was Sophia the robot, and the country was Saudi Arabia. This has been quite controversial. This is a robot that, unlike the one I just mentioned, does look like a human, so it has a very human-like face. The back of the robot's head is transparent so that you can see the electronics in there; the idea is that no one should be fooled about this being a robot. But this is one where people really respond in anthropomorphizing ways. This one has appeared in front of political bodies, the UN, the Munich Security Conference, the British Parliament, I think, and has appeared on The Tonight Show with Jimmy Fallon, took a selfie with Angela Merkel, the chancellor of Germany (laughs), and, yeah. Why exactly? Well, maybe it's the novelty. It's a humanoid robot. But yeah, it's been controversial, and it's a very fascinating example of people reacting in these ways.
- Chris Williamson
There's a couple of terms you've used so far, talking about it being an agent, it having agency. What are the premises that we need to understand before we can get into this conversation about robot-human ethics?
- Sven Nyholm
Yeah, okay, so those are some technical terms indeed. Agency is something that philosophers such as myself love to talk about all the time. Of course, one complication is that others, such as software developers, or people developing medicines, sometimes ask, "What's the active agent or ingredient?" and so on. On the most general level, an agent is something that can act, or react to its environment, in a more or less predictable, intelligent-seeming, interesting way. Of course, there can then be different kinds of agents. A very simple insect is a sort of agent, because it interacts with the environment in a goal-directed way, but it can't, you know, have a conversation. Whereas you and I are agents that can have a conversation such as this one. You do something, and let's say I don't agree with it. I might try to hold you responsible. You might defend yourself, thereby exercising a much more advanced form of agency than an insect that bites you, which you might just kill immediately, not thinking it deserves any kind of chance or opportunity to explain itself. And in the class of human agents, you have anything from infants, who can't really do anything, but they're learning (laughs), and over time they become more and more advanced agents. So one question that arises is: could a robot be some form of agent? Okay, well, before we get into it too much, that's the idea of agency.
Like, the ability to interact with the environment in a goal-directed way, perhaps being able to talk or converse with other agents, perhaps being able to take responsibility for what one is doing, being able to make decisions, make plans, and so on and so forth.
- Chris Williamson
What other... are there any other words that we might encounter over the next 50 minutes or so that need defining before we get into it?
- Sven Nyholm
Yeah. I already mentioned the word anthropomorphize. To anthropomorphize something is to attribute human-like qualities to it. We kind of already explained that, but it's a key word. And maybe we should say something about what a robot is. Obviously it's an everyday term, and as it happens, the term robot was introduced 100 years ago, in a play. So unlike artificial intelligence, which is a term scientists came up with, robot is a word that comes from a Czech word, and I'm not going to attempt to pronounce the Czech word, but it's a Czech word that's been changed into a noun. It was a verb meaning, roughly, to perform forced labor or something like that, and it was turned into a noun in a play about robots, about artificial human beings that are created to serve humans. That's the origin of the term. These days, when we think of the term robot, we think of maybe a metallic, human-like shape. That would be the paradigmatic robot. But if we look at real-world robots that are useful to anyone, that people are actually interested in buying and selling, it would be maybe something like a Roomba robot vacuum cleaner, or a self-driving car; that's this very hyped robot. A lot of these functional robots, robots in logistics warehouses moving boxes around, don't look like humans. They don't look like the paradigmatic robot out of science fiction, like C-3PO in Star Wars. So those are two different kinds: the silvery metallic ones, and the ones from real life that look like boxes with arms, et cetera. And then there are robots such as Sophia, the one that we already talked about, made to look like a human, made to act like a human....
What do all these things have in common, you might ask. Well, people that work on this sometimes don't even want to give a definition, because there are so many things that we mean when we talk about robots. But if one were forced to give a definition, people would sometimes say something like, "It's a machine with some degree of artificial intelligence," which of course is already a technical term, "that also has some functional autonomy," another technical term. That means the machine can operate on its own for some period of time without direct human intervention. So it's basically a machine that can do something (laughs) that seems to be intelligent. That's often what people mean by robots. Sometimes people talk about what they call, let's see if I can remember, the sense, plan, act definition of robots. It can sense the environment, it can plan a response, and then it can carry out that response, take action. That's another definition people sometimes use. But from my point of view, I think it's better to just talk about different examples of things that people call robots and then ask, for example: do they have agency of some interesting sort? Do people anthropomorphize them? Is that good? Is it bad? And so on, yeah.
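The "sense, plan, act" definition mentioned here can be illustrated with a short sketch. This is purely a toy example added for readers, not something from the book or the conversation; the environment, the obstacle-distance sensor, and the available actions are all invented for illustration.

```python
# Toy illustration of the "sense, plan, act" definition of a robot:
# a machine that senses its environment, plans a response, carries it
# out, and loops with some functional autonomy (no human intervention).

def sense(environment):
    # Read the world; here just the distance to the nearest obstacle.
    return {"obstacle_distance": environment["obstacle_distance"]}

def plan(percept):
    # Choose a response based on what was sensed.
    if percept["obstacle_distance"] < 1.0:
        return "turn"
    return "forward"

def act(action, environment):
    # Carry out the planned response, changing the world.
    if action == "turn":
        environment["obstacle_distance"] = 5.0  # turned away from the obstacle
    else:
        environment["obstacle_distance"] -= 1.0  # moved closer to it

def run(environment, steps):
    # Functional autonomy: the loop runs on its own for some period of time.
    log = []
    for _ in range(steps):
        action = plan(sense(environment))
        act(action, environment)
        log.append(action)
    return log

print(run({"obstacle_distance": 3.0}, 5))
```

Even a loop this trivial satisfies the definition in a thin sense, which is one reason, as Nyholm notes, that people often prefer discussing concrete examples over a single definition.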
- Chris Williamson
Mm. I mean, even if we roll the clock forward, nanotechnology is probably going to be big over the next 100 years. And what do people call those? Tiny little robots. That's what they call them, right? So from the grandest scale to the smallest, the definition and the boundaries of what a robot is are going to continue to get blurred. So you lay out an argument quite early on in the opening chapter, your argument. Would you be able to take us through that?
- Sven Nyholm
Yeah, so this goes back a little bit to what we were talking about early on, about how people respond to robots. Again, we are equipped with a brain and a mind that developed over hundreds of thousands of years. That's sort of a multipurpose organ (laughs), or capacity, that we have, which to some extent has prepared us to build robots; we're creating them ourselves. But it doesn't necessarily prepare us to respond well to robots. Sometimes we will become maybe too attached to robots. We will trust them too much. One part of our mind will say, "It's just a machine, it's not intelligent," whereas the emotional part of our minds will get attached to the robot, will trust it too much. And we will build technologies that have robotic and other elements that will polarize us online, et cetera. So we have all these problematic responses to technology. I mean, online
- 15:00 – 30:00
- Sven Nyholm
polarization, that's one thing. It doesn't have too much to do with robots, but it's just another example of how we react, sometimes in funny, sometimes dangerous, sometimes not very nice ways, to the technologies that we have. And so we face a kind of choice, I argue. Either we try to change the technologies so that they are better suited for us, with our human nature. And typically that's the right response, because why should we change ourselves to make ourselves more adaptive to technologies? But in some cases it may actually be worth also asking: should we somehow try to change the way that we behave and the way that we think, so that we're better adapted to interact with technology such as robots and AI? For our sake, and maybe in the future, for the sake of the robots, if they get intelligent enough. Some people already talk about the idea that maybe robots should have some sort of rights if they're intelligent enough, if they awaken social responses in people. So the idea is that for our sake, we have to do something, because we are exposing ourselves to so many risks when we are creating these technologies. And the two most obvious things we can do are to try to change the robots to make them better adapted to us, or somehow try to change ourselves to make ourselves better able to interact with robots. In order to avoid the risks we are creating for ourselves, we have to do something, and so I'm exploring both options in this book that we're talking about. Sometimes, as I said, the most obvious thing is to change the technology so that it's better suited for humans. But that might also make us miss out on some of the benefits that we want. I don't know if we want to jump into any particular examples at this point, but-
- Chris Williamson
Yeah, yeah, yeah. Throw some...
- Sven Nyholm
... I mean, one that I spent quite a bit of time on, and we already mentioned this, is the self-driving car. So why do we want or need self-driving cars? The typical answer is that people are not super good at driving. We drive well enough that we don't crash most of the time (laughs), but we do crash a lot, and it's very dangerous. We're also driving in energy-inefficient ways. We're using up much more resources than we need to, because we're accelerating quickly and not braking gently, et cetera. So one can envision a more ideal and more optimal form of driving, and that's where the self-driving car comes in. It's supposed to be a car that's better at driving than a human: it drives in a more environmentally friendly way, killing fewer people (laughs) along the way, so to speak. So it's a more optimal type of driving agent, if you will, to use that terminology again. Now, the problem is that if you have self-driving cars and human-driven cars on the road at the same time, you get a kind of coordination problem, because humans expect self-driving cars to behave like human-driven cars. And so there have been a lot of, I mean, mostly minor crashes. What typically happens is that people drive into self-driving cars because they think they're going to accelerate more quickly, drive more aggressively. But the average self-driving car today is programmed to follow the rules to the letter, so to speak: never speed, never drive aggressively, et cetera. But humans do all of those things. And then some people have suggested, I mean, I went to one conference with people advising the Dutch government about how to develop our self-driving cars research program here in the Netherlands.
One person said, "Well, we need to adjust the self-driving cars so that they drive like human drivers do." But of course-
- Chris Williamson
So they need to inherit our bad driving habits.
- Sven Nyholm
They should speed, they should drive aggressively.
- Chris Williamson
(laughs)
- Sven Nyholm
Et cetera. But then, in a way, you would take away all the benefits that are supposed to come with self-driving cars. They should drive less aggressively, brake gently, follow the traffic rules, and not drive like humans. And so this seems to be one of these cases where you do have an interesting choice. If we want to drive at the same time as there are also self-driving cars on the road, should we adapt ourselves and our driving to these robotic cars, or should we make them adjust themselves to us? Probably the best answer is some sort of compromise (laughs) where both are adjusted to each other. But on the other hand, it does seem that it would be good if we drove in a safer way. Of course, people differ a lot in how optimistic they are. Some say that within five years we're going to have fully automated, super safe cars, and that it's going to save hundreds of thousands of the lives that are taken by human drivers every year. Others say, well, actually it's really quite hard to develop a fully self-driving car that would work in all kinds of traffic conditions, in all weathers, that is able to interact with all kinds of environments, with anything crazy that people do on the road. So the experts differ on how safe they're going to be. But if we do accept the premise that eventually they are going to be safer than humans, then it would be strange in a way to say, "Well, let's make them drive like humans so that they adjust themselves to us." So that seems to me to be one case where we might want to investigate whether there are ways in which we can make humans drive more like robots. And-
- Chris Williamson
I'll tell you an interesting-
- Sven Nyholm
Yeah.
- Chris Williamson
... example that I remember hearing last year on a podcast. Someone was talking about the social habits that were being learned from people using voice control on devices like Alexa and Siri.
- Sven Nyholm
Yeah.
- Chris Williamson
And they were saying there was a fear that young children and older adults would become less polite and use fewer social norms when talking to people. Because you don't say, "Hey, Siri, please turn on the light." You say, "Hey, Siri, turn on the light." And when you then port that behavior back across into the social world, into the human world, you actually end up with it being very misaligned.
- Sven Nyholm
Yes. So that's just one example of many where people worry that the robots will inspire certain behavior on the part of the humans interacting with them, and that behavior will then carry over to humans. Another example, which we might talk about, would be the sex robot. There (laughs), one of the biggest criticisms that has been raised against them, from a sort of feminist point of view, is that people will start treating sex robots made to look like humans in a very objectifying way, in a rude way, not at all the sort of nice way that we would like sex partners to treat each other. And then that attitude will be carried over to humans even more than it is today, so that people will objectify each other even more than they already do. So that's a different example than the Alexa (laughs) or Siri one, but it's the same sort of argument: that we will learn a certain behavior by interacting with these robots or robotic or other technologies, and then that behavior will carry over into our interactions with human beings.
- Chris Williamson
If a self-driving car kills someone, who's responsible?
- Sven Nyholm
Ah, yes, that is another topic where, actually, here too we have a bit of a technical term: responsibility. Of course, it's an everyday term. We hold each other responsible. We ask who's responsible. But we sometimes don't know exactly what we mean, or at least we don't know what the conditions are that lead or guide our intuitive judgments about these things. A lot of philosophers say that if I'm to be held responsible for something, I first have to be able to predict what's going to happen when I act, or I have to understand my environment, what I'm doing. If I don't know what I'm doing and I don't know what's going to happen, then in a way I have an excuse for not doing the right thing. So that's one condition. Another condition is that I should be able to control what I'm doing. If I lose control, say someone comes in and puts some sort of drug in that drink you have there (laughs), and you go crazy and start doing strange things, you can say that you lost control over yourself because you were drugged or something like that. So you should be able to know and understand what you're doing, and you should have some control over it. That makes you responsible. Now, if I am using a self-driving car, and I am not a very technical person, let's say, I don't really know how it works, I can say, "Please take me to the grocery store." And as the car is doing that, it hits and kills someone. Am I responsible? Well, I didn't really understand what was going on, and maybe I had no direct control of it.
I mean, if I said, "Okay, take me to the grocery store," but then everything that happened was done by the computers in the car, the artificial intelligence, whatever technologies are involved, it would be strange to say that I am responsible for what happened. Now, there are complications, of course. Let's say that I'm the owner of the car, and I maybe signed a contract saying that if something bad happens, then I should be responsible. Actually, Tesla, I mean, they don't have fully self-driving cars, but they have something called Autopilot, which is a certain form of automation. And they do have a contract with their customers and users of this automation: if you engage the Autopilot, then you are responsible. Tesla does not take responsibility for anything that happens. So this is one way of solving the problem: we just make a contract. A lot of people have responded to this particular type of example by saying, "Well, actually, Tesla built the car. They say it's safe. And they are benefiting a lot from having people buy it; actually, it's pretty expensive to get the upgrade to the Autopilot feature. So since they are benefiting, maybe they should be held responsible." So that's another kind of answer. The first answer was: whoever signed a contract agreeing to take responsibility. Another answer would be: whoever benefits the most from the existence of this technology, maybe Tesla in this case. And another kind of answer, which maybe would seem more fair, and of course these things might all align, so maybe I sign a contract and I benefit the most.
But you could also ask: who has the ability to update the technology, to monitor it and see what it's doing, to maybe stop using it if it turns out that it's not very useful to begin with? And you can create a kind of checklist. Well, the problem is that sometimes maybe one person can update it, another is monitoring what it's actually doing, and a third person is in charge of stopping the program of using this technology, so to speak. So we do get what are sometimes called responsibility gaps. That means there's a feeling or a sense, an intuition, that someone should be held responsible, but the conditions that we typically think should be fulfilled in order for someone to be responsible, whether it's control, whether it's knowledge, whether there's a contract, et cetera, either they're not fulfilled or different people live up to these criteria. And so that's a bit of a problem.
- Chris Williamson
Yeah. It's so interesting. I went onto the MIT website, that ethics thing, the car trolley problem thing.
- Sven Nyholm
The, yeah, the Moral Machine website, if people wanna check it out.
- Chris Williamson
Yeah, that's it. So anyone that wants to feel uncomfortable for 10 minutes, just Google "the Moral Machine" and an MIT website will come up, and you do a little quiz where you choose left or right. You basically choose who you want to kill. And you just consistently don't know what the best option is. Do you have any sense of why people are so uncomfortable around autonomous cars, even if in the aggregate they would save lives?
- Sven Nyholm
Ah, okay. Yeah. Good question. So let's say that they save thousands of lives every year, but they will kill a few people, because any technology stops working sometimes. There will be some tree falling over, and you can't have any technology that's moving fast and is heavy and is 100% safe. That's just not possible. So they will sometimes kill people. But as we said, we can assume that they would save a lot of lives, and so on aggregate they might do better than human drivers. Even so, the idea of being killed by a machine seems worse to people than being killed by a human. Of course, it's not nice to be killed by a human either. However, we also have another impulse. So I talked about how we want to hold people responsible. Another intuitive impulse or feeling that we have is that we want to punish (laughs). Of course, this goes back to what I talked about before: our brains developed over long periods of time, and we developed these attitudes and emotional reactions, and whenever someone is harmed and it seems that it could have been avoided, we tend to want to find someone to blame and punish. And maybe you could say, I actually have a colleague who says, "Let's punish the self-driving car. Let's blame the car." But to the average person this is not a very satisfying idea, because, well, another colleague of mine, Rob Sparrow, argues that if you punish someone, you cause suffering to them to some extent. You make life hard for them. But a car, a robot, can't suffer. So you can't punish them in the way that you would want to hold someone accountable, in the sense of giving them a hard time.
Some people would then come back and say, "Well, that's one function of punishment, to make the bad people suffer. Another function of punishment is to indicate to other people what not to do, to deter them." And maybe if I punish a self-driving car, I could deter other people from acting like the self-driving car. But that doesn't seem very plausible. I mean, maybe you could deter other companies from developing cars that would behave in that way. Yeah, but so we again have this problem: who are we going to punish, who are we going to blame? We want to
- 30:00 – 45:00
- SNSven Nyholm
blame someone, we want to hold someone responsible, we want to punish someone. Uh, I mean, you might say, actually, these are not very nice impulses that people have. Uh, this fact that we have this sort of deep-seated, uh, um, you know, desire (laughs) to punish people who cause harm. Maybe it would actually be better to, to try to remove that from our nature. And I, I mean, I have other colleagues, again, uh, you know, we philosophers, we like to explore different options. And so we always have someone working the job of, you know, going and investigating whether that would be a good idea. And so one of my friends, uh, argues that, well, actually, we should try to take out these, uh, retributivist, uh, intuitions where we want to punish. That would be better. Uh, I mean, I guess my response would be that perhaps it would be better, but also good luck. I mean, it's not so easy because it's very deeply ingrained into our nature... So, I mean, I don't even know where you would begin. Uh, some people would say, well, we do find that, uh, some of the medicines and drugs we take for other things, they sometimes change our emotions and our responses. Uh, I mean, I believe that you talked with a friend of mine, Brian Earp, uh, he's interested in exactly this topic. And, uh, one of the things that got Brian, uh, interested in whether we could actually use, uh, drugs to sort of control people's love lives and emotional lives is that it's been noticed that drugs for other things, such as depression, uh, et cetera, they sometimes affect our feelings and our attitudes, how trusting we are of others. 
And, well, maybe if we discover some sort of, uh, side effects of some drugs that would make us less, uh, willing to punish or eager to punish people, maybe we can use that, uh, uh, yeah, after someone has a crash with a self-driving car or is killed by a military robot or something like that, to sort of, um, to think about this, you know, what happened in a way that doesn't involve fi- wanting to find someone to punish.
- CWChris Williamson
Wow.
- SNSven Nyholm
Could be-
- CWChris Williamson
I mean-
- SNSven Nyholm
... could be one way of going, I don't know. (laughs)
- CWChris Williamson
... Tesla's gonna have to supply you with some supplements, uh, an annual supply of, of tablets so that you can do that as well. You're so right-
- SNSven Nyholm
Yeah.
- CWChris Williamson
... the, um, the whole principle for why we have friendship and what it means to have rivals and w- what it means to have, uh, reciprocal altruism and kin selection and all of these, it is, it's the foundation of what makes us human, that social element, right? And, uh-
- SNSven Nyholm
Yeah.
- CWChris Williamson
... yeah, to deprogram that, I think you're asking an awful lot. But on the flip side, there is something that feels awfully unfair about it. I mean, it's unfair to be run over by anybody, but it feels oddly unfair to be run over by a driverless car. But that being said, I imagine that when cars first came out in the sort of early 1900s, there would've been complaints around, "Well, they're moving so quickly. Look at how many people they're gonna kill. Horses would've been a much better solution. The horses on the road, yeah, they make a mess, but they go slower, there's gonna be fewer accidents. You wanna put these cars on the road, that's gonna cause more accidents." Or before the Tube and the London Underground were made, it's like, "Well, you know, if we allow people to get on the Tube, some people are gonna fall in front of the Tube and they're gonna get killed. They're not gonna get killed when they're walking on the street, especially if there's only horses and carts upstairs." So I wonder how much of it is a status quo bias, just simply people feeling uncomfortable getting out of inertia and into change. I feel like that probably is a lot. We're pretty... the reason that we're so good as a species is that we're adaptive, right? And we are incredibly quick at adapting. So when the new thing happens, we'll probably end up adapting to it. The beauty or the challenge, I suppose, that we have at the moment is that we can step into this programming globally, the, um, technological programming, the societal programming, the cognitive programming, and we can say, "Okay, we have the opportunity to choose what sort of a direction we would like to go down right now, before we actually get there and just adapt to whatever the hell's going on. We can make the choice of the direction that we think would be optimal."
- SNSven Nyholm
Yeah, I mean, I do think that there is a bit of a development in this direction. I mean, so typically, what has happened in the past is that technologies are developed, put out into society, and then, you know, later problems are fixed.
- CWChris Williamson
(laughs)
- SNSven Nyholm
Uh, but, uh, but then at that point in time, I mean, it's typically hard to fix things because, you know, uh, technologies, they get ingrained into everyday life, and they sort of almost recede into the background and we, we don't even think of them as technologies anymore. We just think of them as, you know, part of everyday life. And so even what gets called a technology, uh, you know, or a robot or artificial intelligence, tends to be something that's new and a little bit unfamiliar. Uh, but once it's taken on that sort of role, you know, as part of our human landscape, like the world we move in, then it can be pretty hard to change it because... I mean, just think of cars. I mean, uh, our cities are now, you know, totally planned, uh, you know, for, you know, where people have to park, where they drive, et cetera, and where you can walk and you cannot walk. I mean, that's changed over time. And so actually, if you look at old pictures of, you know, when cars and roads first came into the cities, I mean, people were walking everywhere, uh, biking random directions on the road. Um, actually, where I live in the Netherlands, they still do. But still, I mean, things change over time, but then it gets ingrained, uh, and becomes part of the sort of backdrop of our lives and it's very hard to change. I do see more, uh, of a development now that there's so much discussion about, uh, risks and fears related to AI and robots and things like that. So there is a move towards trying to put the ethical reflection into the design process itself. Uh, I mean, that's part of the reason why MIT has that Moral Machine website. Uh, they're trying to find out people's attitudes about self-driving cars before we have a lot of them everywhere. 
I mean, some people have responded that, "Well, you know, the self-driving cars are not gonna be choosing between killing two grandmothers or one grandfather and, and two dogs." Uh, you know, that's the sort of dilemmas that you get on that website that you talked about before. Uh, they're gonna face very different kinds of challenges. For example, determining whether something is a person or a branch or something like that. Uh, the image recognition should be good enough that a self-driving car could tell if something is, I don't know, just a shape that looks human from a distance or whether it's actually a human. Maybe it's some sort of heat camera or something like that. (laughs) So they, they need to know what their environment is like. And so, uh, do they ever need to choose, you know, "I'm gonna drive straight and run over two grandmothers, or go right and drive over three, I don't know, granddaughters (laughs), or left and run over two grandfathers," or something like that? I mean, that might happen every now and then, not, not very often.
- CWChris Williamson
(laughs)
- SNSven Nyholm
Uh, but nevertheless, it- it's all part of this idea that we have to think about these ethical issues before we face them, and we have to somehow try to program some sort of ethical, uh, I don't know, uh, uh, compass, if you will, in- into the self-driving car or into the technology that we're gonna be using.
- CWChris Williamson
(laughs) It seems to me that... you, you hit the nail on the head there: technology tends to move quicker than the legislation that catches up with it. Governments are these big lumbering behemoths that take forever to do anything, uh-
- SNSven Nyholm
Yeah.
- CWChris Williamson
... whereas Silicon Valley can just get a product out and see what happens. We're seeing that with, um, phone addiction at the moment. Everybody uses their phone too much because the tactics that are used by apps race to the bottom of the brainstem, and they're able to manipulate you in ways that perhaps, if we'd been omniscient and had known in advance, we would have said, "Actually, let's not have that feature. Let's not allow infinite scroll. Let's not allow autoplay. Let's not allow bings and bongs, and TikTok," generally. But it's out there, and now we need to play catch-up. Um, one of the things that makes me a little bit more hopeful, at least for the self-driving car analogy, is that because the outcomes are so grave and newsworthy, I think that a lot of the companies are gonna err on the side of caution. Like, you do not want to be the company that's killed two people in the same city in the same week, like, you just don't, because it's going to be so bad for PR. But the, um, socialized costs, or the externalized costs, should I say, of the technology being wrong with regards to self-driving cars are so obvious and newsworthy compared with the more slippery, difficult-to-define technologies, like social media and stuff like that. Like, you don't really see someone's mental health degrade or their sense of self-worth get worse over half a decade. You know? Like, it's a lot harder to define. And that person themselves... well, you don't know if you're dead, you're just dead, but you'd know if you were dead (laughs) or alive. Um-
- SNSven Nyholm
Your family will know. (laughs)
- NANarrator
(laughs)
- CWChris Williamson
Precisely.
- SNSven Nyholm
Yeah.
- CWChris Williamson
But whereas your family perhaps don't know the arrow of causation between you spending too much time on TikTok and you wasting your life. Um, so I wanted, I wanted to talk, I've wanted to talk about this for ages. Are sex robots ethical?
- SNSven Nyholm
Okay, yeah, from one thing to another. Uh, yeah, uh, I mean, just real quick maybe, about that case of self-driving cars killing people and the company not wanting it known that it killed two people in a week, I mean, it is interesting to compare it with space travel. Uh, the first time that someone went to the moon, it was a world event. Like, everyone was in front of the TV watching very, you know, carefully. The second time, smaller TV audience. The third... uh, I don't remember how many times they've been to the moon, but I think it's maybe, uh, less than 10? Five or six, maybe, something like that. But each time it was less of a thing. Uh, and now when people travel to space, I mean, it's not even on the news. I mean, sometimes when SpaceX has a new rocket that they're trying out that crashes, that's newsworthy. I mean, I think it just happened today as we're recording this. Anyway, but, eh, the same thing could happen, I, I fear, with self-driving cars crashing into and killing people-
- CWChris Williamson
It's gonna be so normalized that we-
- SNSven Nyholm
... it's gonna be normal. "Okay, no, it happened again." So, uh, I, I mean, I would agree with you that at the moment, it's, you know, world news when it happens. But th- that's something that we're probably gonna see the same sort of development that it's gonna be normalized and so they're gonna have more leeway (laughs) to sort of kill people. (laughs)
- CWChris Williamson
S- so we just need-
- SNSven Nyholm
But, yeah.
- CWChris Williamson
... with all of this stuff, we need to get out ahead of it, which is obviously why-
- SNSven Nyholm
Yeah.
- 45:00 – 1:00:00
- SNSven Nyholm
domain and who wants to train themselves, maybe get to know human anatomy better, but they're embarrassed about, uh, doing that with another person; well, perhaps a sex robot could serve as a kind of educational, uh, tool for them, uh, a sort of teacher. Uh, I mean, this is something that, uh, others are also thinking about. You know, not only what are the possible bad consequences and risks, but also what are the potential, uh, benefits? I mean, take another case. Uh, let's say that someone is the victim of a sexual assault or rape or something like that, uh, and they feel extremely uncomfortable around, uh, human sex partners, but they want to get back into the sexual, uh, uh, world, so to speak. Maybe a robot, uh, that would, you know, do whatever they want, uh, could be a way of becoming comfortable again with having sexual interactions with other agents, and that could be a sort of stepping stone towards, like, returning to having sex with humans. That's, uh, um, that's one thing that people say would be an argument in favor of having them. And then... well, let's say that there is someone who really can't find a sex partner. Maybe people around them, they don't find them attractive, uh, they just have some sort of (laughs) impossible personality, I don't know, so they really can't find a sex partner, but they still have a deep longing to have sexual interactions. Well, maybe for them, uh, a sex robot would be better than nothing. Uh, that's another argument that I've seen for why sex robots would be ethical rather than unethical, and- and actually ethically required, that we should try to develop them. So, uh, it certainly seems that there are arguments on both sides.
- CWChris Williamson
To me, I struggle... I haven't found a compelling argument, um, that says it's unethical, uh, to use sex robots. Personally, I just haven't been convinced by any of them yet. I understand that we don't want to train people to go out and behave in bad ways, but those externalized, uh, sort of costs, I don't think that they're going to happen all that much. I wouldn't be too concerned about it, and outside of that, I don't really see anything that's that compelling to, um, to stop it from happening. However, there will be a lot of people listening who may disagree with me, so I would be interested to see in the comments below what everybody thinks. Um-
- SNSven Nyholm
Well, I mean, l- let me give you a case that's maybe the- the- the most difficult case, uh-
- CWChris Williamson
Hit me. I'm excited.
- SNSven Nyholm
Yeah. Yeah, yeah, (laughs) well, let's see if you can stay excited. (laughs) So, uh, you know, sex robots that are made to look like children. Uh, that would be the case where a lot of people feel, uh, that it crosses the line. So they might say, okay, if it's a sex robot that looks like an adult human being... Well, a human being, that's another thing. (laughs)
- CWChris Williamson
(laughs)
- SNSven Nyholm
The two other cases that people have concerns about are ones that are made to look like, I don't know, animals, a dog, let's say, or, uh, especially, I think, a more serious example, uh, the sex robot made to look like a child.
- CWChris Williamson
Mm-hmm.
- SNSven Nyholm
Uh, there too, though, uh, you have people who argue, not implausibly, I would say, that, well, let's say that someone is a pedophile and they do recognize that it's wrong, or they think that it's wrong, to have sex with children, and yet they can't control... I mean, there's no conversion therapy, let's say. Uh, so the only physical outlet, so to speak, (laughs) would be to have sex with either a child or a robot looking like a child. Maybe it's ethically good if someone, you know, goes for the robot rather than the human child. Uh, so it's been suggested that it could be a kind of therapy tool. Um, some would then say, well, nevertheless, there's something inherently repugnant about it, there's a sort of moral taboo that we should respect, et cetera, et cetera. So this would be the case where maybe... uh, I don't know how you feel about that case, but that's one where-
- CWChris Williamson
Yeah, so, uh, oddly... this is going to sound so odd. I had a three-hour conversation about the ethics of sex robots, uh, and at least half of it was talking about this example as well. We were on a plane out to Dubai, so there was only one person in the vicinity who could, obviously, hear and understand English. And this poor girl must have been thinking, "What am I listening to for the entirety of this journey?" Um, my mind... ever since I, I was seeing a girl at university who, uh, was doing medicine and was very, very much into medical ethics. And she completely changed my view of how I saw pedophilia. The difference being between pedophilia and child molestation, and that's a distinction that, uh ... sadly, because of the way that we use those words, they're used interchangeably, but they're not the same. Um, the first sort of thing to understand was that people do not control what they are attracted to. They have no conscious control over that. This has been shown in fMRI scans and also in arousal response, where you show someone every sexual situation under the sun not involving children, nothing happens. You show them something with children and everything happens. And the reverse happens too. People can't control what will happen. Okay, so what that means is that there are some people who are brought into this world cursed, knowing that their sexual proclivity is ... they're disgusted by society, no one ... they're terrified to reach out for help, all of these sorts of things. Now, if it would appear, upon bringing in sex robots, that the externalities of someone using a sex robot seem to bleed that behavior out into the real world, then we have quite a big problem. 
If the reverse happens, and it seems that by using the sex robot we actually get a decrease in that behavior out in the real world, and I don't know what is more likely, I'm not a neurologist, I'm not a behavioral psychologist, I don't understand sort of what would tend to happen there. Um, but you can actually imagine a world in which you would reduce human suffering by basically giving someone what is ostensibly a vibrator or some sort of ... You know, it's an odd question to ask, but what's the difference between a very small vibrator and a small child sex toy that would be used by a straight female who had that sort of a proclivity? Like, really, we're just getting back to anthropomorphizing; we're bestowing some sense of agency onto this being, especially if you're talking about a sex doll, which literally has no inner workings at all. (sniffs) It's fascinating. Like, I, I think about this far more, uh-
- SNSven Nyholm
(laughs)
- CWChris Williamson
... than I, than I care to admit because I just find it, I, I find the, these things where the, um, the precipice on both sides is incredibly steep, I enjoy thinking about that because it makes me be very rigorous with my thinking and I hope that everyone that's listening kind of feels the same. This isn't here to make us feel uncomfortable. It's here because it's a, an interesting and rigorous discussion around something which obviously has some pretty grave ethical implications. What's your opinion? Do you have a, a personal stance on this?
- SNSven Nyholm
Uh, well, I mean, uh, this is actually something that I am thinking about at the moment and, and working (laughs) together with a colleague on a, on a, on an academic article about this. And so, uh, so I don't, I don't have a settled, uh, opi- opinion about it yet because we're, we're grappling with this issue. I mean, one difference between the, the child sex robot and the small, uh, sex toy, uh, that you were talking about that doesn't look like a human, uh, would be the symbolic, uh, difference in terms of one symbolizes, uh, you know, nothing maybe as th- the sex, uh, toy, let's say. Whereas the other symbolizes a child, and so depending on how much, uh, importance one puts on symbolism, one might go different directions here. So if you think that, uh, it's somehow disrespectful towards human children to m- you know, create and buy and sell, uh, a robot that looks like a child that people want to have sex with, if you think that, I mean, that's enough of a problem, uh, you know, out of respect for human children you shouldn't make money, let's say, off of this-
- CWChris Williamson
Mm.
- SNSven Nyholm
... uh, then you might have a problem with this. Uh, however, let's, let's say that it's not, I mean, it's, it's some sort of therapy tool that's created not for profit, uh, and that it's, uh, you know, highly regulated or something like that, and so there's not sort of a commercial market for it, uh, then that symbolism argument of sort of making money off of this maybe go- goes away a little bit, uh, and it may- might become more acceptable. But I think this is one of the things where, you know, you could imagine, uh, it being more or less unethical. Like, let's say that, you know, if you're, you come up with this idea, "Oh, I want to make money. I'm going to start creating, uh, and selling for very, you know, big prices, uh, child sex robots," uh, you do seem, uh, I mean, not you, but the person who would have that idea, they don't, uh, they seem at, at the very least insensitive in their attitudes (laughs) and, uh, they, they, they seem to be open to moral criticism. But if someone, however, had this idea that, "Well, maybe you can save one or more children from being molested by creating a sort of therapy tool that can be, uh, offered to, uh, you know, people with pedophilia, pedophilic, uh, desires so to speak, uh, in a sort of controlled setting," well, then, uh, you seem less open to this same sort of criticism. So-
- CWChris Williamson
Well, we had, um ... during my discussion with Brian Earp, he was talking-
- SNSven Nyholm
Yeah.
- CWChris Williamson
... about people taking, people who are pedophiles taking SSRIs to dampen down their libido. Um, the external effect that you get in both situations from that, hopefully, if we can roll the clock forward and get the particular therapeutic tool, uh, use out of a child sex robot that we want, um, the same externality occurs in both situations, that you have less of a predation worry from this particular subgroup.
- SNSven Nyholm
Right.
- CWChris Williamson
... the difference that I can see is in one of them, the person actually gets to proceed through life normally. And I know the people that listen to this show are incredibly balanced and normal. But I know that there is that emotional, "Well, y- y- they're, they're f- freaks. They should just be locked up." And it's like, oh, like, you're not as, you haven't seriously ethically thought about this problem, and you don't understand, like, empathetically what's going on. Um, I, I wonder what happens when you scale this sort of thing up to society wide because inevitably, the loudest voices are the ones that, that are heard. And some of those are going to be, you know, if you were a survivor of child sexual abuse, finding out that child sex dolls, and maybe this one, maybe this one looks like you looked when you were a ... I mean, we're getting into some very-
- SNSven Nyholm
Yeah.
- CWChris Williamson
... uncomfortable water as we get through this. So yeah, I, um, I wonder what are-
- SNSven Nyholm
But again, I, like, let, let's take that person, uh, and if they know that this is done for the sake of not having this happen to other children, and it's not done to make big bucks, you know? (laughs)
- CWChris Williamson
Mm.
- SNSven Nyholm
Like, uh, maybe it becomes more acceptable to them. But nevertheless, I mean, of course, they're gonna be emotionally, you know, uh, going in different directions, because maybe on the one hand, they think that, "Well, if it can have someone avoid having happen to them what happened to me, that's good." At the same time, it might seem deeply offensive, like some sort of acceptance and normalizing of something that was really traumatic and bad for them. So I could certainly imagine that people-
- CWChris Williamson
So difficult, man. It's so-
- SNSven Nyholm
... go in different directions.
- CWChris Williamson
... so messy. Um, you talk about robot rights. And we mentioned right at the top about making robots slaves. Should, should we, should we make robot slaves?
- SNSven Nyholm
Um, yeah, uh, I mean, some people have responded to that, uh, thesis by saying that we shouldn't use that terminology because it brings up, uh, ideas about, uh, you know, people making others into their slaves. And that whole mentality should go out the window. I mean, nothing should be a slave, neither robot nor person. But, uh, you know, in defense of my colleague, Joanna Bryson, she never meant to be enthusiastic about past slavery or anything like that. The idea was just that since people are gonna be owning, buying, and selling robots, and, I mean, that's one feature of slavery, like, you know, you can buy and sell the slave, and, uh, the slave is there to be useful to the owner. The robots are gonna have these properties. And so her suggestion was it's best to create robots that wouldn't, uh, be morally ambiguous for people, so that they wouldn't feel a sense of responsibility. So, like, you make the robot look like a box. Uh, you know, it doesn't have eyes. (laughs) It doesn't sort of, uh, generate a sense of responsibility. That's the best situation because then we don't have to have these worries about robot rights. However, for some purposes, uh, it might be more efficient to have a robot that actually does generate the social attitudes. Uh, there is one robot, uh, that is being developed for treatment of autistic children, and so the idea is that, uh, some children with autism have trouble, uh, engaging with other humans, uh, because they find it overwhelming. Uh, so if you have a sort of a robot that looks like a simplified human, uh, this actually has, in some experimental studies, been, uh, shown to, uh, possibly work. The child sort of opens up and then even turns to the experimenter and, and sort of points at the robot and says, like, "Look at this." And that's already a nice step forward. 
Now, for that sort of therapy tool, uh, you would need the robot to look a little bit human-like, uh, because, you know, that's part of the idea, that it should look a little bit like a human, et cetera. Now, take that robot and then let's say that after a day of experimentation, you know, you take the robot to another room and then you take out, like, I don't know, a baseball bat (laughs) and start hitting it or, you know, uh, you do something else to it that doesn't seem very fitting. Uh, again, even if it's not directly wrong, it can seem insensitive, let's say. Uh, you know, something that's been developed for this purpose, for treating these children, should maybe be treated in a more respectful sort of way. Uh, is that to say that the therapy robot should have some sort of rights? Well, uh, it's to say rather that, again, out of respect for those children that are being involved in this treatment, maybe one should treat the robot in a, I don't know, dignified, respectful way.
- CWChris Williamson
Mm. Yeah.
- 1:00:00 – 1:05:36
- SNSven Nyholm
Uh, and then I mistreat that robot. I mean, in a way, that can seem like some sort of attack on you. So maybe again, out of respect for you, I shouldn't ... Either I don't make a robot that looks like you, perhaps the best option, or if I do, for whatever reason, you know, there should be some limits on how I treat this robot, out of respect for you. Uh, but this is still just a question of, you know, how can we behave in a way that's respectful towards other humans? The real question would be, you know, would there be any circumstances where, out of respect for the robot, you know, you should treat it well in some way? Well, I mean, even today, I saw on Twitter some, uh, video about some scientists who claim that they have created robots that can feel pain. Uh, there's another, uh, not the same team, I think, a Japanese team led by, uh, Professor Asada, I think is his name, uh, who also tries to create robots that can feel pleasure and pain, 'cause the idea is that, uh, they can learn in the way that infants do. You know, before we learn language, we learn in an emotional sort of way. Uh, you know, you might ask, does it even make sense to think that a machine, a robot, could feel pleasure or pain?... If you believe that they are achieving this in their research, and I must say I'm skeptical, then you do get an interesting situation, because maybe the robot is not very intelligent, but it has, you know, the capacity to feel something in some sense or other. Uh, here too you might start thinking, like, "Well, better to be safe than sorry. So let's not cause too much unnecessary pain (laughs) to this robot." Uh, and this, of course, is dependent on whether you think it makes any sense to say that a machine could feel pain, which-
- CWChris Williamson
Yeah, I think that's, that's where the sort of slippage is at the moment, right? Because inevitably you have a reward function in most sorts of circuits. There is an outcome that you want. That's, uh, you know, how AlphaGo Zero worked: there was an outcome it wanted and it learned. Essentially it was like, "If you've done this, then great. This is the sort of direction that we want you to go in." Now, if I just decide to re-categorize this as pleasure and that as pain, uh, okay, but there is no phenomenological experience that the machine is going through which causes the suffering, which causes the second-order, meta-cognizant experience of the suffering itself. To be able to say, as far as I'm concerned, whoever it is that's on Twitter, "We have created a robot which is able to feel pain," is to say we have created consciousness, because I don't think that you can feel pain without consciousness. Like, if I whack a rock with a stick, the rock and the stick both aren't in pain, because neither of them have consciousness. Um, yeah, I'm, I'm unsure around that. For me personally, I think that treating robots as if they are just a scaled-up version of a MacBook makes the most sense at the moment. Now, giving them... you, you talk about at the beginning of the book giving them citizenship, and you slipped up at the very, very beginning, a Freudian slip, to say, uh, "who is," as opposed to-
- SNSven Nyholm
Yeah.
- CWChris Williamson
... "it is."
- SNSven Nyholm
Right.
- CWChris Williamson
Um, and th- this gets into kind of like, uh, the, the through-line that we're talking about here: just how misaligned we are when we deal with any sort of robot, and the more that these robots can look and act like humans, the more and more our behavior is going to be modified toward them. That's gonna have some externalized consequences for our interactions with other humans. There are also, um, concerns around whether or not those robots themselves should have some sort of rights, some sort of moral consideration. For me, I don't think that that really is a, it might be a concern to think about for the future, but right now I don't think that it is. Um, but yeah, it's going to be, it must be for yourself, in this industry right now, it must feel like a very interesting and exciting place to be. You know, over the next sort of 20 years or so, we're gonna see some insane changes.
- SNSven Nyholm
Yeah, I mean, and part of this is because, uh, as I said, there are scientists who claim, who, who are saying that they are creating robots that can feel pleasure and pain. I mean, I share your skepticism about whether they have achieved that goal, but certainly there are people at universities, (laughs) you know, they're, they're trying to do this. They claim that they can do it. Uh, there are people developing sex robots, uh, self-driving cars, all sorts of interesting and seemingly, uh, science-fiction-like, uh, things. But, you know, people are actually doing it, uh, and it's very clear that there are interesting s- you know, philosophical, ethical questions about it. So yeah, for people like me, it's great.
- CWChris Williamson
Sven, thank you very much for today. Humans and Robots: Ethics, Agency, and Anthropomorphism will be linked in the show notes below. If people want to check out any more of your stuff, where should they go?
- SNSven Nyholm
Uh, well, I already mentioned Twitter, and I think that's, uh, a good place. I mean, so I, whenever I, you know, do something like this, I appear on a podcast or I, I write a book or an article, I always put it there as a sort of ad- to advertise it. So, that's a good place if people want to know what I'm doing.
- CWChris Williamson
Perfect. Thank you very much.
- SNSven Nyholm
And it's, it's just like @SvenNyholm.
- CWChris Williamson
It'll be linked in the show notes below. Sven, thank you so much, man.
- SNSven Nyholm
Well, thank you.
- NANarrator
(instrumental music)
Episode duration: 1:05:36
Transcript of episode C_dnOBdfbZk