
Are Sex Robots And Self-Driving Cars Ethical? - Sven Nyholm | Modern Wisdom Podcast 287
Sven Nyholm (guest), Chris Williamson (host)
In this episode of Modern Wisdom (episode 287), host Chris Williamson speaks with philosopher Sven Nyholm about the ethics of sex robots, self-driving cars, and the future of moral machines.
Sex Robots, Self-Driving Cars, and the Future of Moral Machines
The conversation explores how humans psychologically and ethically relate to increasingly autonomous technologies like robots, sex robots, and self-driving cars. Sven Nyholm explains that our evolved social instincts lead us to anthropomorphize machines, which can create both emotional attachments and ethical confusion. They examine questions of responsibility when autonomous systems cause harm, whether robots should have rights, and if technologies like sex robots—especially childlike ones—can ever be ethically acceptable. Throughout, they stress the need to think ahead and embed ethical reflection into the design of emerging technologies before they become deeply entrenched in everyday life.
Key Takeaways
Our brains are primed to treat robots like social agents.
Because human psychology evolved in a world without machines, anything that moves autonomously or appears intelligent tends to trigger social and moral responses, even when we know intellectually it is “just a machine.”
Anthropomorphism can mislead us about responsibility and risk.
Talking as if cars or robots “want” things or “decide” can obscure who actually has knowledge and control, complicating judgments about who is responsible when autonomous systems cause harm.
Self-driving cars create hard trade-offs between safety and human habits.
Programming autonomous cars to follow rules strictly may be safer overall, but clashes with human driving norms; making them drive like humans sacrifices some safety benefits, raising the question of whether we should adapt ourselves instead.
Technology can reshape our social manners—for better or worse.
Interacting bluntly with voice assistants (Alexa, Siri) or robots may erode politeness and empathy that then carries over into human relationships, suggesting design and usage norms matter.
Sex robots raise concerns about objectification but may have therapeutic roles.
Critics fear repeated use could reduce empathy and respect toward human partners, yet proponents argue they might help people with trauma, social difficulties, or lack of partners practice intimacy or sexuality in safer ways.
Childlike sex robots pose an especially acute ethical challenge.
They might lower the risk of abuse by offering a non-human outlet for some pedophiles, but they also carry powerful symbolic meanings that could be seen as normalizing or profiting from child abuse, deeply offending survivors and public moral sensibilities.
Robot ‘rights’ debates often reflect respect for humans more than machines.
Even without conscious robots, how we treat humanlike machines—especially those resembling specific people or vulnerable groups—can express respect or disrespect toward actual humans, which may justify behavioral limits long before robots have true moral standing.
Notable Quotes
“We’re responding to robots with brains that evolved for a world without robots.”
— Sven Nyholm
“Just put a pair of eyes on something and people will feel like they’re being watched.”
— Sven Nyholm
“We face a choice: change the technologies to fit human nature, or change ourselves to better interact with them.”
— Sven Nyholm
“It would be strange to say, ‘Let’s make self‑driving cars drive like humans’ if they could actually be safer than us.”
— Sven Nyholm
“I haven’t found a compelling argument that says it’s unethical to use sex robots; I just haven’t been convinced yet.”
— Chris Williamson
Questions Answered in This Episode
If our intuitions about blame and punishment are evolutionarily hard‑wired, how realistic is it to expect society to accept deaths caused by autonomous machines?
Should we design robots to minimize anthropomorphism to avoid confusion and emotional attachment, or lean into it when it clearly helps (e.g., autism therapy)?
Under what conditions, if any, could sex robots—especially childlike ones—be ethically justified as therapeutic tools rather than condemned as inherently wrong?
Who should ultimately bear legal responsibility when an autonomous system harms someone: users, manufacturers, programmers, or some new category of actor?
At what point, and based on what criteria, would it make sense to talk about robots themselves having rights or interests we’re obliged to respect?
Transcript Preview
The main issue that people worry about so far is that sex robots will inspire people to have strongly objectifying attitudes towards sex partners, to sort of make their empathy go away, because there's literally no mind or subjectivity there on the other side for you to be sensitive to. And so one worry would be that one keeps interacting with this robot and then stops caring about the feelings of the other, uh, whether they consent to what you're doing, and then that you would carry over that behavior to a human.
Why is the relationship between humans and robots interesting?
Oh, okay. Um, (laughs) well, uh, it's interesting for a bunch of different reasons. I mean, one reason is that people are, in a certain sense, not well prepared to re- deal with or respond to robots because, uh, our brains, our psychology, uh, developed before we had any robots, before we had any AI. Uh, I mean, it developed during human evolution for hundreds of thousands of years, and then suddenly we have this enormous jump in technolo- logical development and now we have, we have robots, we have AI, we have all sorts of, uh, interesting technology and, but we're responding to it with the brains, the human psychology that developed during this long time. So, that sometimes means that we respond to things that look or act like humans, which some robots do, sometimes in humorous ways, sometimes in ways that might be dangerous for us, uh, but very often in fascinating ways. And so that's, that's one good reason, I think, to, to th- think about the relationship between humans and robots.
So, our current mental makeup is unsuited to interacting with robots. That's the basic foundation?
Yeah, I mean, so we're basically primed to, uh, anything that moves seemingly on its own accord, uh, in a s- apparently intelligent way, uh, our brains will think, "Okay, this is a, this is some sort of agent. Uh, it's an animal, it's a person," and, uh, our s- sort of human social attitudes are triggered. I mean, this can happen at the same time as we're thinking to ourselves, "You know, it's just a robot. It's just a machine. Doesn't have any feelings, uh, doesn't like me or dislike me." But nevertheless, emotionally, we respond to the entity as if it's, you know, another person. I mean, it, not all, all of the time, but, uh, uh, I mean, the interesting thing is that this is not just true of, uh, laypeople, but also experts. They talk about robots as if they have a mind, as if they have desires, beliefs, um, people will say about a self-driving car, for example, I- I- I'm thinking of a self-driving car as a kind of robot, that it wants to go left or right. It has to decide what to do. Um, so we have this tendency to, uh, anthropomorphize, as people say, I mean, so attribute human-like qualities to robots, to technologies. Uh-