Kate Darling: Social Robotics | Lex Fridman Podcast #98
- 0:00 – 3:31
Introduction
- Lex Fridman
The following is a conversation with Kate Darling, a researcher at MIT, interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and life-like machines, which for me, is one of the most exciting topics in all of artificial intelligence. As she writes in her bio, she's a caretaker of several domestic robots, including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti. She's one of the funniest and brightest minds I've ever had the fortune to talk to. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors: Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to Masterclass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod. This show is sponsored by Masterclass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration; Neil deGrasse Tyson on scientific thinking and communication; Will Wright, creator of SimCity and Sims, love those games, on game design; Carlos Santana on guitar; Garry Kasparov on chess; Daniel Negreanu on poker; and many more. 
Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up on masterclass.com/lex to get a discount and to support this podcast. This show is sponsored by ExpressVPN. Get it at expressvpn.com/lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use. Press the big power-on button, and your privacy's protected. And, if you like, you can make it look like your location's anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux, shout-out to Ubuntu 20.04, Windows, Android. But it's available everywhere else too. Once again, get it at expressvpn.com/lexpod to get a discount and to support this podcast. And now, here's my conversation with Kate Darling.
- 3:31 – 4:36
Robot ethics
- Lex Fridman
You co-taught robot ethics at Harvard. What are some ethical issues that arise in the world with robots?
- Kate Darling
Yeah, that was a reading group that I did when I, like, at the very beginning, first became-
- Lex Fridman
Early days.
- Kate Darling
... interested in this topic.
- Lex Fridman
Right.
- Kate Darling
So I think if I taught that class today, it would look very, very different. Um, robot ethics, it sounds very science fiction-y, especially it did back then. But I think that some of the issues that people in robot ethics are concerned with are just around the ethical use of robotic technology in general. So for example, responsibility for harm, automated weapon systems, things like privacy and data security, things like, you know, automation and labor markets. Um, and then personally I'm really interested in some of the social issues that come out of our social relationships with robots.
- Lex Fridman
So one-on-one relationship with robots?
- Kate Darling
Yeah.
- Lex Fridman
I think most of the stuff we have to talk about is, like, one-on-one social stuff. That's what I love, and, and I think that's what you're- you love as well and are an expert in. But at a societal level, there's, like, um...
- 4:36 – 6:31
Universal Basic Income
- Lex Fridman
There's a presidential candidate now, Andrew Yang, running, uh, concerned about automation and robots and AI in general taking away jobs. He has a proposal of, uh, UBI, universal basic income, of everybody gets 1,000 bucks-
- Kate Darling
Yeah.
- Lex Fridman
... uh, as a way to sort of save you if you lose your job from automation, to allow you time to discover what it is, uh, that you would l- like to or even love to do.
- Kate Darling
Yes. So I lived in Switzerland for 20 years, and-
- Lex Fridman
Mm-hmm.
- Kate Darling
... universal basic income has been more of a topic there, separate from the whole robots and jobs issue. So, um, it's so interesting to me to see kind of these Silicon Valley people latch onto this concept that came from a very kind of left-wing socialist, um, you know, d- kind of a different place in Europe. Um, but on the automation/labor markets topic, I think that it's very... So sometimes in those conversations, I think people overestimate where robotic technology is right now, and we also have this fallacy of constantly comparing robots to humans and thinking of this as a one-to-one replacement of jobs. So even, like, Bill Gates a few years ago said something about, you know, maybe we should have a system that taxes robots for taking people's jobs.
- Lex Fridman
Mm.
- Kate Darling
And it just...I, I mean, I'm sure that was taken out of context. You know, he's a really smart guy. But that sounds to me like kind of viewing it as a one-to-one replacement versus viewing this technology as kind of a supplemental tool that, of course, is gonna shake up a lot of stuff, it's gonna change the job landscape. But I don't see, you know, robots taking all the jobs in the next 20 years. That's just not how it's gonna work.
- 6:31 – 17:17
Mistreating robots
- Lex Fridman
All right. So maybe drifting into the land of more personal relationships with robots and interaction and so on. I gotta warn you, I go... I may ask some silly philosophical questions. I apologize.
- Kate Darling
Oh, please do.
- Lex Fridman
Okay. Uh, do you think humans will abuse robots in their interactions? So you've, you've had a lot of... and w- we'll talk about it, sort of anthropomorphization and, and w- where, you know, this, this intricate dance, emotional dance between human and robot. But, uh, there seems to be also a darker side where people, when they treat the other as servants, especially, they can be a little bit abusive or a lot abusive. Do you think about that? Do you worry about that?
- Kate Darling
Yeah. I do think about that. So, I mean, one of my, one of my main interests is the fact that people subconsciously treat robots like living things and... even though they know that they're interacting with a machine, and what it means in that context to behave, you know, violently. I d- I don't know if you could say abuse because you're not actually, you know, abusing the, the inner mind of the robot. The robot isn't... doesn't have any feelings.
- Lex Fridman
As far as you know.
- Kate Darling
Well, yeah. (laughs) It also depends on how we define feelings and consciousness. But I think that's another area where people kind of overestimate where we currently are with the technology.
- Lex Fridman
Right.
- Kate Darling
Like, the robots are not even as smart as insects right now. And so I'm not worried about abuse in that sense. But it is interesting to think about what does people's behavior towards these things mean for our own behavior? Um, is it desensitizing the people to, you know, be verbally abusive to a robot or even physically abusive? And we don't know.
- Lex Fridman
Right. S- s- a similar connection from, like, if you play violent video games.
- Kate Darling
Yeah.
- Lex Fridman
What connection does that have to desensitization to violence? It's like, I haven't, uh, I haven't read literature on that. I wonder about that. Uh, because everything I've heard, people don't seem to any longer be so worried about violent video games.
- Kate Darling
Correct. We've seemed... With the... The research on it is... It's a difficult thing to research. So it's sort of inconclusive, but we seem to have gotten the sense, at least as a society, that people can compartmentalize. When it's something on a screen and you're like, you know, shooting a bunch of characters or running over people with your car, that doesn't necessarily translate to you doing that in real life. We do, however, have some concerns about children playing violent video games. And so we do restrict it there. Um, I'm not sure that's based on any real evidence either, but it's just the way that we've kind of decided, you know, we, we wanna be a little more cautious there. And the reason I think robots are a little bit different is because there is a lot of research showing that we respond differently to something in our physical space than something on a screen. We will treat it much more viscerally, much more like a physical actor. Um, and so I... it's, it's totally possible that this is not a problem, um, and it's the same thing as violence in video games. You know, maybe, you know, restrict it with kids to be safe, but adults can do what they want. But we just need to ask the question again because we don't have any evidence at all yet.
- Lex Fridman
Maybe there's an intermediate place, too. I did my research, uh, on Twitter. By research, I mean scrolling through your Twitter feed.
- Kate Darling
(laughs)
- Lex Fridman
(laughs) You mentioned that you were going at some point to an animal law conference. So I have to ask, do you think there's something that we can learn from animal rights that guides our thinking about robots?
- Kate Darling
Oh, I think there is so much (laughs) to learn from that. I'm actually writing a book on it right now. That's why I'm going to this conference. So I'm, I'm writing a book that looks at the history of animal domestication and how we've used animals for work, for weaponry, for companionship. And, you know, one of the things the books, the book tries to do is move away from this, um, fallacy that I talked about of comparing robots and humans because I don't think that's the right analogy. Um, but I do think that on a social level, even on a social level, there's so much that we can learn from looking at that history because throughout history, we've treated most animals like tools, like products, and then some of them, we've treated differently. And we're starting to see people treat robots in really similar ways. So I think it's a really helpful predictor to how we're gonna interact with the robots.
- Lex Fridman
Do you think we'll look back at this time, like, a hundred years from now and see, uh, what we do to animals as, like, similar to the way we view, like, the Holocaust, uh, and with the World War II?
- Kate Darling
That's a great question. I mean, I hope so. I am not convinced that we will. (laughs) But I often wonder, you know, what are my grandkids gonna view as, you know, abhorrent that my generation did that they would never do? And I'm like, "Well, what's the big deal?"
- Lex Fridman
Right.
- Kate Darling
You know, it's, it's a fun question to ask yourself.
- Lex Fridman
And it always seems that there's atrocities that we discover later. So things that, at the time, people didn't see as, uh, you know, y- you look at everything from, uh, sl- slavery, uh, to any kinds of abuse throughout history, (laughs) to the kind of insane wars that were happening, to the way war was carried out and rape and the kind of violence that was happening during war in the ʼ8... that we now, you know, we see as atrocities, but at the time, perhaps didn't as much. And so now, I have this intuition that-I have this worry. Maybe I'm, you're going to, um, probably criticize me, but I do anthropomorphize robots. I have, I don't see a fundamental philosoph- philosophical difference between a robot and a human being, uh, in terms of once the capabilities are matched. So, the fact that we're really far away doesn't, um, in terms of capabilities, then that, from natural language processing, understanding generation to just reasoning and all that stuff, I think once you solve it, I see the, there's a very gray area, and I don't feel comfortable with the kind of abuse that people throw at robots. Subtle, but I can see it becoming, uh, I can see basically a civil rights movement for robots in the future. Do, do you think, uh, let me put it in the form of a question. Do you think robots should have some kinds of rights?
- Kate Darling
Well, it's interesting because I came at this originally from your perspective. I was like, "You know what? There's no fundamental difference between technology and, like, human consciousness." Like, we- we can probably recreate anything, we just don't know how yet. And so there's no reason not to give machines the same rights that we have once, like you say, they're kind of on an equivalent level. But I realize that that is kind of a far future question. I still think we should talk about it 'cause I think it's really interesting, but I realize that it's actually, we- we might need to ask the robot rights question even sooner than that, um, while the machines are still, you know, quote unquote, "really," you know, "dumb" and not on our level because of the way that we perceive them. And I think one of the lessons we learn from looking at the history of animal rights, and one of the reasons we may not get to a place in 100 years where we view it as wrong to, you know, eat or otherwise, you know, use animals for our own purposes is because historically, we've always protected those things that we relate to the most. So one example is whales. No one gave a shit about the whales. Am I allowed to swear?
- Lex Fridman
Yeah.
- Kate Darling
(laughs)
- Lex Fridman
You can swear as much as you want.
- Kate Darling
Woo! (laughs)
- Lex Fridman
(laughs) Freedom.
- Kate Darling
Yeah. No one gave a shit about the whales until someone recorded them singing, and suddenly people were like, "Oh, this is a beautiful creature and now we need to save the whales." And that started the whole Save the Whales movement in the '70s. So I'm, a- as much as I, and- and I think a lot of people, wanna believe that we care about consistent biological criteria, that's not historically how we've formed our alliances.
- Lex Fridman
Yeah, so what, why do we, why do we believe that all humans are created equal? Killing of a human being, no matter who the human being is, that's what I meant by equality, is bad. And then, uh, 'cause I'm connecting that to robots, and I'm wondering whether mortality, so the killing act, is what makes something... That's the fundamental first right. So I'm, I am currently allowed to take a shotgun and shoot a Roomba, I think. I'm not sure, but-
- Kate Darling
Yeah. (laughs)
- 17:17 – 20:27
Robots teaching us about ourselves
- Kate Darling
is that what-
- Lex Fridman
Well, it bothers me because it reve- so I personally believe, 'cause I've studied it way too much, so I'm Jewish so I studied the Holocaust and World War II exceptionally well. I personally believe that most of us have evil in us, that what bothers me is the abuse of robots reveals the evil in human beings.
- Kate Darling
Yeah.
- Lex Fridman
And it's, I think it doesn't just bother me. It's, I think it's an opportunity for ro- roboticists to make, help people bec- find the better sides, the- the angels of their nature, right?
- Kate Darling
Yeah.
- Lex Fridman
That that abuse isn't just a fun side thing. That's a, you revealing a dark part that you shouldn't, that should be hidden deep inside. (laughs)
- Kate Darling
Yeah, I mean, it- it, (laughs) you laugh but some of our research does indicate that maybe people's behavior towards robots reveals something about their tendencies for empathy generally, even using very simple robots that we have today that, like, clearly don't feel anything. So-... you know, Westworld (laughs) is maybe, you know, not so far off and it's like, you know, depicting the bad characters as willing to go around and shoot and rape the robots, and the good characters as not wanting to do that, um, even without assuming that the robots have consciousness.
- Lex Fridman
So there's an opportunity, it's interesting, yeah, there's an opportunity to almost practice empathy. The- on- r- robots is an opportunity to practice empathy.
- Kate Darling
I agree with you. Some people would say, "Why are we practicing empathy on robots instead of, you know-"
- Lex Fridman
On humans.
- Kate Darling
"... on our fellow humans, or e- on animals that are actually alive and experience the world?" And I don't agree with them because I don't think empathy is a zero-sum game, and I do think that it's a muscle that you can train and that we should be doing that. But, um, some people disagree.
- Lex Fridman
(laughs) So the interesting thing, and you've heard, you know, r- raising kids, uh, sort of, uh, asking them or telling them to be nice to the smart speakers, to Alexa and so on, saying please and so on during their requests. I don't know if, um... I'm a h- huge fan of that idea because, yeah, that's towards the idea of practicing empathy. I feel like politeness, I'm always polite to all the, all the systems that we build, especially anything that's speech interaction based, like when we talk to the car. I always have a pretty good detector for please, to... I- I feel like there should be a room for encouraging empathy in those interactions. Yeah.
- Kate Darling
Okay. So I agree with you, so I'm gonna play devil's advocate.
- Lex Fridman
Sure. (laughs)
- Kate Darling
(laughs) So-
- Lex Fridman
Yeah, what is the, what is the devil's advocate argument there?
- Kate Darling
The devil's advocate argument is that if you are the type of person who has abusive tendencies or needs to get some sort of, like, behavior like that out, needs an outlet for it-
- Lex Fridman
Yeah.
- Kate Darling
... that it's great to have a robot that you can scream at so that you're not screaming at a person.
- Lex Fridman
Oh.
- Kate Darling
And we just don't know whether that's true, whether it's an outlet for people or whether it just kind of, as my friend once said, "Trains their cruelty muscles and makes them more cruel in other situations."
- 20:27 – 24:29
Intimate connection with robots
- Lex Fridman
Oh boy, yeah. And that expands to other topics, uh, which I- I don't know. The- you know, there's a, there's a topic of sex which is a weird one that I tend to avoid from a robotics perspective and most of the general public doesn't. They talk about sex robots and so on. Is that an area you've touched at all research wise? Like, the way... 'Cause that's what people imagine sort of any kind of interaction between human and robot that's co- shows any kind of compassion, they immediately think from a product perspective and the near term is sort of expansion of what pornography is and all that kind of stuff.
- Kate Darling
Yeah.
- Lex Fridman
Is- is- do researchers touch this?
- Kate Darling
Yeah, well that's a, that's kind of you to, like, characterize it as though they're thinking rationally about product. I feel like sex robots are just such a, like, titillating news hook for people that it- they become, like, the story. And it's really hard to not get fatigued by it when you're in the space because you tell someone you do human-robot interaction, of course the first thing they wanna talk about is sex robots.
- Lex Fridman
Really? Okay.
- Kate Darling
Like you said. Yeah. It happens a lot. And it's- it's unfortunate that I'm so fatigued by it because I do think that there are some interesting questions that become salient when you talk about, you know, sex with robots.
- Lex Fridman
See, what I think would happen when people get sex robots, like if you... Let's say guys, okay? Guys get female sex robots. What I think there's an opportunity for is an actual, like- like they'll actually interact... What am I trying to say? They- they won't... Outside of the sex will be the most fulfilling part. (laughs) Like, the interaction, it's like the folks who... There's movies on this, right? Who pr- uh, pay a prostitute and then end up just talking to her the whole time. So I feel like there's an opportunity. It's like most guys and people in general joke about the s- the sex act, but really people are just lonely inside and they're looking for connection, many of them. And it'd be unfortunate if th- that c- it's, uh, that connection is established through the sex industry. I feel like it should go the- in- to the front door (laughs) of, like, people are lonely and they want a connection.
- Kate Darling
Well, I also feel like we should kind of de- you know, destigmatize the sex industry-
- Lex Fridman
Sure.
- Kate Darling
... because, um, you know, even prostitution, like there are prostitutes that specialize in disabled people who don't have the same kind of opportunities, um, to explore their sexuality. So it's... I- I- I feel like we should, like, destigmatize all of that generally.
- Lex Fridman
Yeah.
- Kate Darling
But yeah, that connection and that loneliness is an interesting, you know, topic that you bring up because while people are constantly worried about robots replacing humans and, oh, if people get sex robots and the sex is really good and then they won't want their, you know, partner or whatever, but we rarely talk about robots actually filling a hole where there's nothing.
- Lex Fridman
Yeah.
- Kate Darling
And what benefit that can provide to people.
- Lex Fridman
Yeah. I think that's an exciting... There's a whole gi- there's a giant hole that's n- unfillable by humans. It's asking too much of your f- of people you- your friends and people you're in a relationship with and your family to fill that hole. There's a- 'cause, you know, it's exploring the full, like, pe- you know, exploring the full complexity and richness of who you are. Like, who are you really? Like, it's just, uh, w- people- your family doesn't have enough patience to really sit there and listen to who are you really. And I feel like there's an opportunity to really make that connection with robots.
- Kate Darling
I- I just feel like we're complex as humans and we're capable of lots of different types of relationships. So whether that's, you know, with family members, with friends, with our pets, or with robots, I feel like there's space for all of that, and all of that can provide value in a different way.
- 24:29 – 31:59
Trolley problem and making difficult moral decisions
- Lex Fridman
Yeah. Absolutely. So I'm jumping around. Currently, most of my work is in autonomous vehicles. So the most popular topic among the general public is, um, the trolley problem. So most-
- Kate Darling
Hmm.
- Lex Fridman
(laughs) Most roboticists, uh, uh, kind of hate this question, but, uh, what do you think of this thought experiment? What do you think we can learn from it outside of the silliness of the actual application of it to the autonomous vehicle? I think it's still an interesting ethical question. And that in itself, just like much of the interaction with robots, has something to teach us. But from your perspective, do you think there's anything there?
- Kate Darling
Well, I think you're right that it does have something to teach us because... But- but I think what people are forgetting in all of these conversations is the origins of the trolley problem and what it was meant to show us.
- Lex Fridman
Mm-hmm.
- Kate Darling
Which is that there is no right answer, and that sometimes our moral intuition that comes to us instinctively is not actually what we should follow, um, if we care about creating systematic rules that apply to everyone. So I think that as a philosophical concept, it could teach us at least that. Um, but that's not how people are using it right now. Like, we have... And these are friends of mine and, like, I love them dearly and their project adds a lot of value. But if we're viewing the Moral Machine Project as what we can learn from the trolley problems. The Moral Machine is, I'm sure you're familiar, it's this website that you can go to and it gives you different scenarios like, "Oh, you're in a car. You can decide to run over, you know, these two people or this child." You know, what do you choose? Do you choose the homeless person? Do you choose the person who's jaywalking? And so it pits these, like, moral choices against each other and then tries to crowdsource the "correct answer."
- Lex Fridman
Yeah.
- Kate Darling
Which is really interesting and I think valuable data, but I don't think that's what we should base our rules in autonomous vehicles on because it is exactly what the trolley problem is trying to show which is your first instinct might not be the correct one if you look at rules that then have to apply to everyone and everything.
- Lex Fridman
So how do we encode these ethical choices in, in interaction with robots? So for example, autonomous vehicles, there is a serious ethical question of, do I protect myself? Does my life have higher priority than the life of another human being? Because that changes certain control decisions that you make. So if your life matters more than other human beings, then you'd be more likely to swerve out of your current lane. So currently, automated emergency braking systems, they just brake. Uh, they don't ever swerve.
- Kate Darling
Right.
- Lex Fridman
So swerving into oncoming traffic or, uh, or no, just in a different lane can cause significant harm to others but it's possible that it causes less harm to you. So that's a difficult ethical question. Do you, do you, do you have a hope that, um... Like the trolley problem is not supposed to have a right answer. Right? Do you hope that when we have robots at the table, we'll be able to discover the right answer for some of these questions?
- Kate Darling
Mm-hmm. Well, what's happening right now, I think, is this, this question that we're facing of, you know, what ethical rules should we be programming into the machines-
- Lex Fridman
Right.
- Kate Darling
... is revealing to us that our ethical rules are much less programmable than we, you know, probably thought before. And so that's a really valuable insight, I think, that- that these issues are very complicated and that in- in a lot of these cases, it's- you can't really make that call. Like, not even as a legislator. And so what's gonna happen in reality, I think, is that, you know, car manufacturers are just gonna try and avoid the problem and avoid liability in any way possible or, like, they're gonna always protect the driver because who's gonna buy a car if it's, you know, programmed to kill someone-
- Lex Fridman
Right.
- Kate Darling
Uh, kill- kill you instead of someone else? Um, so that's what's gonna happen in reality. But what did you mean by, like, once we have robots at the table? Like, do you mean when they can help us figure out what to do?
- Lex Fridman
No. (laughs)
- Kate Darling
(laughs)
- Lex Fridman
I mean when robots are part of the ethical decisions. So no, no, not- not they help us. Well, uh...
- Kate Darling
Oh, you mean when it- when it's like, should I run over a robot or a person?
- Lex Fridman
Right. That kind of thing. So what- what... No, well, no, no, no. So when you... It's exactly what you said which is, when you have to encode the ethics into an algorithm, you start to- to try to really understand what are the fundamentals of the decision-making process, you make the- make certain decisions. Should- should you, um, like capital punishment, should you take a person's life or not to punish them for a certain crime? Sort of, you can use... You can develop an algorithm to make that decision, right? And the hope is that the act of making that algorithm, however you make it, so there's a few approaches, will help us actually get to the core of what- what is right and what is wrong under our current societal standards.
- Kate Darling
But isn't that what's happening right now? And we're realizing that we don't have a consensus on what's right and wrong.
- Lex Fridman
You mean in politics in general?
- Kate Darling
Well, like, when we're thinking about these trolley problems and autonomous vehicles and how to program ethics into machines and how to, you know, make- make AI algorithms fair, um, and equitable. We're- we're realizing that this is so complicated and it's complicated in part because there is- doesn't seem to be a one right answer in any of these cases.
- Lex Fridman
Do you hope for... Like, one of the ideas of Moral Machine is that crowdsourcing can help us... that democracy can help us converge towards the right answer.
- Kate Darling
I think-
- Lex Fridman
Do you have a hope for crowdsourcing? (laughs)
- Kate Darling
Well, yes and no. So I think that in general, you know, I have a legal background, and policymaking is often about trying to suss out, you know, what rules does this soci- this particular society agree on and then trying to codify that. So the law makes these choices all the time and then tries to adapt according to changing culture. But, um, in the case of the Moral Machine Project, I don't think that people's choices on that website necessarily, necessarily reflect what laws they would want in place-
- Lex Fridman
Right.
- Kate Darling
... if given the... I think you would have to ask them a series of different questions in order to get at what their consensus is.
- 31:59 – 38:09
Anthropomorphism
- Lex Fridman
Let's talk about anthropomorphism.
- Kate Darling
(laughs)
- Lex Fridman
To me, anthropomorphism, if I can pronounce it correctly, is, is one of the most fascinating phenomena from, like, both an engineering perspective and a psychology perspective, machine learning perspective, and robotics in general. Can you step back and define anthropomorphism, how you see it in general terms in your, in your work?
- Kate Darling
Sure. So anthropomorphism is this tendency that we have to project human-like traits and behaviors and qualities onto non-humans. And we often see it with animals. Like, we'll, we'll project emotions onto animals that may or may not actually be there. We, we often see that we're trying to interpret things according to our own behavior when we get it wrong.
- Lex Fridman
Mm-hmm.
- Kate Darling
Um, but we do it with more than just animals. We do it with objects, you know, teddy bears. We see, you know, faces in the headlights of cars. Um, and we do it with robots very, very extremely.
- Lex Fridman
Do you think that can be engineered? Can that be used to enrich an interaction between-
- Kate Darling
Oh, yeah.
- Lex Fridman
... an AI system and a, and a human?
- Kate Darling
Oh, yeah. For sure.
- Lex Fridman
Do you... And do you see it being used that way often? Like, um, I, I don't... I haven't seen a, whether it's Alexa or any of the smart speaker systems often trying to optimize for the anthropomorphization.
- Kate Darling
You said you haven't seen?
- Lex Fridman
No, I haven't seen. They, they keep moving away from that. I think they're afraid of that.
- KDKate Darling
They, they actually... So I, I only recently found out, but did you know that Amazon has, like, a whole team of people who are just there to, um, work on Alexa's personality?
- LFLex Fridman
(laughs) So I've, I, I know... It depends what you mean by personality. I didn't know, I didn't know that exact thing. But I do know that the, how the voice is perceived is worked on a lot, whether the... if it's a pleasant feeling about the voice. But that has to do more with the texture of the sound and the audio and so on. But personality is more like...
- KDKate Darling
It's like, what's her favorite beer when you ask her.
- LFLex Fridman
Yeah.
- KDKate Darling
And, and the personality team is different for every country too.
- LFLex Fridman
Exactly.
- KDKate Darling
Like, there's a different personality for a German Alexa than there is for American Alexa. That said, I think it's very difficult to, you know, use the... or, or really, really, um, harness the anthropomorphism with these voice assistants, because the voice interface is still very primitive. And I think that in order to get people to really suspend their disbelief and treat a robot like it's alive, less is sometimes more. You, you want them to project onto the robot, and you want the robot to not disappoint their expectations for how it's going to answer or behave in order for them to have this kind of illusion. Um, and with Alexa, I don't think we're there yet, or Siri. They're just, they're just not good at that. But if you look at some of the more animal-like robots, like the baby seal that they use with-
- LFLex Fridman
Yeah.
- KDKate Darling
... the dementia patients-
- LFLex Fridman
Yes.
- KDKate Darling
... it's a much more simple design. It doesn't try to talk to you. It can't disappoint you in that way. It just makes little movements and sounds, and people stroke it, and it responds to their touch. And that is, like, a very effective way to harness people's tendency to kind of treat the robot like a living thing.
- LFLex Fridman
Yeah. So, uh, you bring up some interesting ideas in, uh, your paper chapter, I guess. Uh, "Anthropomorphic Framing in Human-Robot Interaction," that I read the last time we scheduled this. (laughs)
- KDKate Darling
Oh, my God. That was a long time ago.
- LFLex Fridman
Uh, what are some good and bad cases of anthropomorphism in, in your perspective? Like, when is it good, when is it bad? What are, what are some cases?
- KDKate Darling
Well, I should start by saying that, you know, while design can really enhance the anthropomorphism, it doesn't take a lot to get people to treat a robot like it's alive. Like, people will... Over 85% of Roombas have a name, which I'm, I don't know the numbers for your regular type of vacuum cleaner, but they're not that high, right?
- LFLex Fridman
Yeah.
- KDKate Darling
So people will feel bad for the Roomba when it gets stuck. They'll send it in for repair and want to get the same one back. And that's, that one is not even designed to, like, make you do that.
- 38:09 – 41:19
Favorite robot
- LFLex Fridman
What's your favorite robot? Like, uh-
- KDKate Darling
Fictional or real?
- LFLex Fridman
No, real. Uh, r- real robot which you have felt a connection with or not, like- (laughs) not- not an anthropomorphic connection, but I mean like a, you- you sit back and said, "Damn, this is an impressive system."
- KDKate Darling
Wow. So two different robots. So the- the Pleo baby dinosaur robot that-
- LFLex Fridman
Mm-hmm.
- KDKate Darling
... is no longer sold that came out in 2007. That one I was very impressed with. It was... But- but from an anthropomorphic perspective.
- LFLex Fridman
Right.
- KDKate Darling
I was impressed with how much I bonded with it, how much I, like, wanted to believe that it had this inner life.
- LFLex Fridman
Can you describe Pleo, the... Can you describe what- what it is? How big is it? What can it actually do?
- KDKate Darling
Yeah. Pleo is about the size of a small cat. It, um, had a lot of, like, motors that gave it this kind of lifelike movement. It had things like touch sensors and an infrared camera. So it had all these, like, cool little technical features, even though it was a toy. Um, and the thing that really struck me about it was that it- it could mimic pain and distress really well. So if you held it up by the tail, it had a tilt sensor that, you know, told it what direction it was facing, and it would start to squirm and, like, cry out. Um, if you hit it too hard, it would start to cry. It... So it was very impressive in design.
- LFLex Fridman
Mm-hmm. And what's the second robot that you're, uh... You said there might have been two that you liked-
- KDKate Darling
Yeah. So, um-
- LFLex Fridman
... that you were impressed by.
- KDKate Darling
... the Boston Dynamics robots are just impressive feats of engineering.
- LFLex Fridman
Have you met them in person?
- KDKate Darling
Yeah. I recently got a chance to go visit, and I... You know, I was always one of those people who watched the videos and was like, "This is super cool, but also it's a product video." Like, I don't know how many times that they had to shoot this to get it right.
- LFLex Fridman
Yeah.
- KDKate Darling
But visiting them, I, you know... I'm pretty sure that... I- I was very impressed. Let's put it that way.
- LFLex Fridman
Yeah, in- in terms of the control. I think that was a tr- transformational moment for me when I met SpotMini in person.
- KDKate Darling
Yeah.
- LFLex Fridman
Because... Okay, maybe this is a psychology experiment, but I anthropomorphized the- the crap out of it. That's how I immediately... It was like my best friend, right? Uh-
- KDKate Darling
I think it's really hard for anyone to watch Spot move and not feel like it has agency.
- LFLex Fridman
Yeah. Be... Those movement... And especially the arm on SpotMini really obvi- obviously looks like a head.
- KDKate Darling
Yeah.
- LFLex Fridman
That... And they say, "No, we didn't mean it that way," but it obviously... It- it looks exactly like that. And so it's almost impossible to not think of it as a... Almost like the baby dinosaur, but slightly larger. And then this move- movement of the... Of course, the intelligence is... Th- their whole idea is that it's not supposed to be intelligent. It's a platform on which you build higher intelligence. It's actually really, really dumb and just a basic movement platform.
- KDKate Darling
Yeah, but even dumb robots can... Like, we can immediately respond to them in this visceral way.
- 41:19 – 42:46
Sophia
- LFLex Fridman
What are your thoughts about, uh, Sophia the robot? This kind of mix of some basic natural language processing and, uh, basically an art experiment.
- KDKate Darling
Yeah. An art experiment is a good way to characterize it. I'm much less impressed with Sophia than I am with Boston Dynamics.
- LFLex Fridman
She said she likes you. She said she admires you.
- KDKate Darling
Is she... Yeah, she followed me on Twitter-
- LFLex Fridman
She-
- KDKate Darling
... at some point. Yeah. Um...
- LFLex Fridman
And- and she tweets about how much she likes you, so-
- KDKate Darling
So what does that mean? I have to be nice or-
- LFLex Fridman
No, no, no.
- KDKate Darling
(laughs)
- LFLex Fridman
Oh, see, I was emotionally manipulating you.
- KDKate Darling
(laughs)
- LFLex Fridman
Uh, no, how do you- how do you think of, um... The whole thing that happened with Sophia is, uh, quite a large number of people kind of immediately had a connection and thought that maybe we're far- far more advanced with robotics than we are or actually didn't even think much. I was surprised how little people cared. The- that... They kind of assumed that, "Well, of course AI can do this."
- KDKate Darling
Yeah.
- LFLex Fridman
And then they... If they assume that, I felt they should be more impressed.
- KDKate Darling
(laughs) Well, people-
- LFLex Fridman
You know what I mean? Like-
- KDKate Darling
... really overestimate where we are.
- LFLex Fridman
Right.
- KDKate Darling
And so when something... I don't even- I don't even think Sophia was very impressive or is very impressive. I think she's kind of a puppet, to be honest.
- LFLex Fridman
Mm-hmm.
- KDKate Darling
But yeah, I think people have... Are- are a little bit influenced by science fiction and pop culture to think that we should be further along than we are.
- 42:46 – 47:01
Designing robots for human connection
- LFLex Fridman
So what's your favorite robots in movies and fiction?
- KDKate Darling
WALL-E.
- LFLex Fridman
WALL-E. What, what do you like about WALL-E? The humor? The cuteness? The, uh, the perception control systems operating on WALL-E that makes it all work ...
- KDKate Darling
(laughs)
- LFLex Fridman
... or what, which, just in general?
- KDKate Darling
The design of WALL-E the robot, I think that animators figured out, you know, starting in, like, the 1940s how to create characters that don't look, um, real but look like something that's even-
- LFLex Fridman
Mm-hmm.
- KDKate Darling
... better than real, that we really respond to and think is really cute. They figured out how to make them move and look in the right way. And WALL-E is just such a great example of that.
- LFLex Fridman
You think eyes, big eyes, or big something that's kinda eye-ish? So it's always playing on some aspect of the f- human face, right?
- KDKate Darling
Often, yeah. So big eyes. Well, I think one of the, one of the first, like, animations to really play with this was Bambi, and they weren't originally gonna do that. They were originally trying to make the deer look as lifelike as possible. Like, they brought deer into the studio and had a little zoo there so that-
- LFLex Fridman
Right.
- KDKate Darling
... the animators could work with them. And then at some point, they were like, "Hmm, if we make really big eyes and, like, a small nose and, like, big cheeks, kind of more like a baby face, then people like it even better than if it looks real."
- LFLex Fridman
Do you think the future of, um, things like Alexa in the home has possibility to take advantage of that, to build on that, to create these systems that are better than real, that, uh, create a close human connection?
- KDKate Darling
I can pretty much guarantee you without having any knowledge that those companies are working on that, on that design behind the scenes. Like-
- LFLex Fridman
I- I-
- KDKate Darling
... I'm pretty sure.
- LFLex Fridman
... I totally disagree with you.
- KDKate Darling
Really?
- LFLex Fridman
So that's what I'm interested in. I'd like to build such a company. I know a lot of those folks, and they're afraid of that, because you don't ... Well, how do you make money off of it?
- KDKate Darling
Well, but even just, like, making Alexa look a little bit more interesting than just, like, a cylinder would do so much.
- LFLex Fridman
It's, it's an interesting thought, but, uh, I don't think people are, from Amazon perspective, are looking for that kind of connection. They want you to be, uh, addicted to the services provided by Alexa, not to the device. So the de- the device itself, it's felt that you can lose a lot, because if you create a connection and then it does, it, it creates more opportunity for frustration, for, for negative stuff than it does for positive stuff, is I think the way they think about it.
- KDKate Darling
That's interesting. Like, I agree that there's, it's very difficult to get right and you have to get it exactly right. Otherwise, you wind up with Microsoft's Clippy.
- LFLex Fridman
That's true. Okay, easy now. What's, what's your problem with Clippy?
- KDKate Darling
(laughs) You like Clippy? Is Clippy your friend?
- LFLex Fridman
I like Clippy. Yeah, I miss Clippy.
- KDKate Darling
(laughs)
- LFLex Fridman
I was just ... I just (laughs), I just talked to the ... We just had this argument, and they, uh, so the Microsoft CTO, and they said, he said he's not bringing Clippy back. Um, they're not bringing Clippy back, and that's very disappointing. It wa- I think it was Clippy was the greatest assistant we've ever built. It was a horrible attempt, of course, but it's the best we've ever done, because it was a real attempt to have a, like an actual personality. And I mean, it was f- obviously technology was way not there at the time, of being able to be a recommender system for assisting you in anything and typing in, in Word or any kind of other application. But still, it was an attempt of personality that was legitimate.
- KDKate Darling
That's true.
- LFLex Fridman
Which I thought was brave.
- KDKate Darling
Yes. I'll g- yes. Okay. You know, you've convinced me I'll be slightly less hard on Clippy.
- 47:01 – 50:03
Why is it so hard to build a personal robotics company?
- LFLex Fridman
Anki and Jibo, the two companies, two amazing companies, the social robotics companies, they've recently been closed down.
- KDKate Darling
Yes.
- LFLex Fridman
Why do you think it's so hard to create a personal robotics company? So making a business out of essentially something that people would anthropomorphize, have a deep connection with. Why is it so hard to make it work?
- KDKate Darling
I think-
- LFLex Fridman
Is the business case not there, or what, what is it?
- KDKate Darling
I think it's a number of different things. I don't think it's going to be this way forever. I think at this current point in time, it takes so much work to build something that only barely meets people's m- like, minimal expectations because of science fiction and pop culture giving people this idea that we should be further than we already are. Like, when people think about a robot assistant in the home, they think about Rosey from The Jetsons or something like that, and Anki and J- and Jibo did such a beautiful job with the design and getting that interaction just right, but I think people just wanted more. They wanted more functionality. I think you're also right that, you know, the business case isn't really there because they, there hasn't been a killer application that's useful enough to get people to adopt the technology in great numbers. I think what we did see from the people who did, you know, get Jibo is a lot of them became very emotionally attached to it. But that's not ... I mean, it's kind of like the PalmPilot back in the day. Most people are like, "Why do I need this? Why would I ..." They don't see how they would benefit from it until they, you know, have it or some other company comes in and makes it a little better.
- LFLex Fridman
Yeah, like, how, how far away are we, do you think?
- KDKate Darling
I mean-
- LFLex Fridman
Like, how hard is this problem?
- KDKate Darling
It's a good question, and I think it has a lot to do with people's expectations an- and those keep shifting depending on what science fiction that is popular.
- LFLex Fridman
But- but also, it's two things. It's people's expectation and people's need for an emotional connection.
- KDKate Darling
Yeah.
- LFLex Fridman
And, uh, the- I- I believe the need is pretty high.
- KDKate Darling
Yes. But I don't think we're aware of it.
- LFLex Fridman
And... That's right. There's like (stutters) - I- I- I really think (laughs) we're- this is like the life as we know it, so we've just kind of gotten used to it, of really... I hate to be dark 'cause I have close friends, but we've gotten used to really never being close to anyone. All right? And (laughs) we're deeply, I believe... Okay, this is hypothesis. I think we're deeply lonely, all of us, even those in deep, fulfilling relationships. In fact, what makes those relationships fulfilling, I think, is that they at least tap into that deep loneliness a little bit. But I feel like there's more opportunity to explore that, that doesn't inter- doesn't interfere with the human relationships you have. It expands more on the- the- yeah, the- the rich, deep, unexplored complexity that's all of us weird apes. Okay. Um-
- KDKate Darling
(laughs) I think you're right.
- 50:03 – 56:39
Is it possible to fall in love with a robot?
- LFLex Fridman
Do you think it's possible to fall in love with a robot?
- KDKate Darling
Oh, yeah. (laughs) Totally.
- LFLex Fridman
Do you think it's, uh, possible to have a long term, committed, monogamous relationship with a robot?
- KDKate Darling
Well, yeah, there are lots of different types of long term, committed, monogamous relationships.
- LFLex Fridman
I think monogamous implies, like, you're not going to see other humans in- sexually or... Like, you basically, on Facebook, have to say, "I'm in a relationship with this person, uh, this robot."
- KDKate Darling
I just don't... I- like, again, I think this is comparing robots to humans when I would rather-
- LFLex Fridman
You don't like that?
- KDKate Darling
... compare them to pets. Like, you get a robot. It fulfills, you know, this loneliness that you have, um, in a s- maybe not the same way as a pet, maybe in a different way that is even, you know, supplemental in a different way. But, you know, I- I'm not saying that people won't, like, do this, be like, "Oh, I wanna marry my robot," or, "I wanna have like a, you know, sexual relation- monogamous (laughs) relationship with my robot." Um, but I don't think that that's the main use case for them.
- LFLex Fridman
But you think that there's still a gap between human and, um, pet? (laughs) So between, uh, husband and pet (laughs) -
- KDKate Darling
(laughs)
- LFLex Fridman
... there's- there's a-
- KDKate Darling
It's a different relationship.
- LFLex Fridman
... eng- it's an engineering... So that- that's a gap that can't be closed through-
- KDKate Darling
I think it could be closed someday, but why would we close that? Like, I- I think it's so boring to think about recreating things that we already have when we could re- when we could create something that's different. I know you're thinking about the people who, like, don't have a husband and, like, what could we give them. Um...
- LFLex Fridman
Yeah. But it- but let's- the- I guess what I'm getting at is, um, maybe not, so like the movie Her.
- KDKate Darling
Yeah.
- LFLex Fridman
Right? So, a better husband.
- KDKate Darling
Well, maybe better in some ways. Like, it's- I- I- I do think that robots are gonna continue to be a different type of relationship. Even if we get them, like, very human looking or when, you know, the voice interactions we have with them feel very, like, natural and humanlike, I think there's still gonna be differences. And there were in that movie too, like, towards the end-
- LFLex Fridman
Yeah.
- KDKate Darling
... it kinda goes off the rails.
- LFLex Fridman
But it's just a movie. So the- your intuition is, uh, that- that... 'Cause- 'cause you kinda said two things, right? So one is why would you want to basically replicate the husband?
- KDKate Darling
Yeah.
- LFLex Fridman
Right? And the other is kind of implying that it's kinda hard to do. So, y- like, any time you try, you might build something very impressive, but it'll be different. I- I guess my question is about human nature is like w- how hard is it to, uh, satisfy that role of the husband? So removing any of the sexual stuff aside, is the, th- it's- it's more like the mystery, the tension, the dance of relationships do you think with robots that's difficult to build? What's your intuition on it?
- KDKate Darling
I think that... Well, it- it also depends on are we talking about robots now, in 50 years, in like, uh, indefinite amount of time where like the abilities are-
- LFLex Fridman
I- I'm thinking like five to ten years.
- KDKate Darling
Five or ten years, I think that robots at best will be like a- a- s- more similar to the relationship we have with our pets than relationship that we have with other people.
- LFLex Fridman
I got it. So, what do you think it takes to build a system that exhibits greater and greater levels of intelligence, like, uh, impresses us with its intelligence? You know, a Roomba... So you talk about anthropomorphization. That doesn't... I- I think intelligence is not required. In fact, intelligence probably gets in the way sometimes-
- KDKate Darling
Mm-hmm.
- LFLex Fridman
... like you mentioned. But w- w- what do you think it takes to create a system where we sense that it has a human level intelligence? So something that obvi- uh, probably something conversational, human level intelligence. How hard do you think that problem is? It'd be interesting to sort of hear your perspective not just purely... So I talked to a lot of people, o- how hard is it to build conversational agents?
- 56:39 – 58:33
Robots displaying consciousness and mortality
- LFLex Fridman
So what do you think about consciousness and mortality, uh, eh, being displayed in a robot? So not actually ha- having consciousness, but having these kind of human elements that are much more than just the interaction, m- much more than just, like you mentioned, with a dinosaur moving kind of in- in interesting ways, but really being worried about its own death and really acting as if it's aware and self-aware in identity. Have you seen that done in robotics? What do you think about doing that?
- KDKate Darling
(laughs)
- LFLex Fridman
(laughs) Do- does- is that a- i- is that a powerful good thing?
- KDKate Darling
Well, it's a, I think it can be a design tool that you can use for different purposes, so I can't say whether it's inherently good or bad, but I do think it can be a powerful tool. Um, the fact that the, you know, Pleo mimics distress when you, quote unquote, "hurt" it, is, is a really powerful, um, tool to get people to engage with it in a certain way. I had a research partner that I did some of the empathy work with, uh, named Palash Nandi, and he had built a robot for himself that had, like, a lifespan and that would stop working after a certain amount of time just because he was interested in, like, whether he himself would treat it differently.
- LFLex Fridman
Mm-hmm.
- KDKate Darling
And we know from, you know, Tamagotchis, those, like, those little games that, that we used to have that were extremely primitive that, like, people respond to, like, this idea of mortality and, you know, (laughs) you can get people to do a lot with, with little design tricks like that. Now, whether it's a good thing depends on what you're trying to get them to do.
- LFLex Fridman
Have a deeper relationship. Have a deeper connection, sorry, not a relationship.
- KDKate Darling
If it's for their own benefit, that-
- LFLex Fridman
Yep.
- KDKate Darling
... that sounds great.
- LFLex Fridman
Okay.
- 58:33 – 1:04:40
Manipulation of emotion by companies
- LFLex Fridman
But you could see-
- KDKate Darling
But you could do-
- LFLex Fridman
How- how-
- KDKate Darling
... you could do that for a lot of other reasons. (laughs)
- LFLex Fridman
I see. So what kind of stuff are you worried about? So is, is it mostly about manipulation of your emotions for, like, advertisement and so on, things like that?
- KDKate Darling
Yeah, or data collection, or, I mean, you could think of governments misusing this to extract information from people. It's, you know, just, just like any other technological tool, it just raises a lot of questions.
- LFLex Fridman
What's, if you, if you look at Facebook, if you look at Twitter and social networks, there's a lot of concern of data collection now, how do... Um, what's, uh, from the legal perspective or in general, uh, ho- how do we prevent the violation of sort of these, th- these companies crossing a line? It's a gray area, but crossing a line they shouldn't in terms of manipulating, like we're talking about, manipulating our emotion, manipulating our behavior using tactics, um, that are not so savory?
- KDKate Darling
Yeah, it's, it's really difficult because we are starting to create technology that relies on data collection to provide functionality.
- LFLex Fridman
Yeah.
- KDKate Darling
And there's not a lot of incentive, even on the consumer side, to curb that because the other problem is that the harms aren't tangible. Um, they're not really apparent to a lot of people because they kind of trickle down on a societal level, and then suddenly we're living in, like, 1984. (laughs) Um, which, you know, sounds extreme, but I, that book was very prescient. And I'm not worried about, you know, these systems. I, you know, I, I, I have, you know, Amazon's Echo at home and, and, you know, tell Alexa all sorts of stuff, and, and it helps me because, you know, Alexa knows what, you know, brand of diaper we use.
- LFLex Fridman
Yeah.
- KDKate Darling
And so I can just easily order it again. So I don't have any incentive to, like, ask a lawmaker to curb that. But when I think about that data then being used against, you know, low-income people to target them for, you know, scammy loans or education programs, that's then a societal effect that I think is very severe and w-... you know, legislators should be thinking about.
- LFLex Fridman
But yeah, there's the, the gray, gray area is the removing ourselves from consideration of like, uh, of explicitly defining objectives, and more saying, "Well, we want to maximize engagement in our social network."
- KDKate Darling
Yeah.
- LFLex Fridman
And, and then just, uh, 'cause you're not actually doing a bad thing. It makes sense. You want people to, to keep a conversation going, to have more conversations, to keep coming back again and again to have conversations. And, and whatever happens after that, you're, you're kind of not exactly directly responsible. You're only indirectly responsible. So it's, I think it's a really hard problem. Do you... (laughs) do you have... are, are you optimistic about us ever being able to solve it?
- KDKate Darling
(laughs) Um, you mean the problem of capitalism? It's like-
- LFLex Fridman
What's-
- KDKate Darling
... because the problem is that the companies are acting in the company's interests and not in people's interests. And when those interests are aligned, that's great. Um, but, uh, the completely free market doesn't seem to work because of this information asymmetry.
- LFLex Fridman
But, but it's hard to know how to... so say you were trying to do the right thing. Like, uh, I guess what I'm trying to say is, um, it's not obvious for these companies what the good thing for society is to do. Like, I don't think they sit there and, um, with, uh, I don't know... with a, with a glass of wine and a cat, like petting a cat, evil cat.
- KDKate Darling
(laughs)
- LFLex Fridman
And, and there's two decisions, and one of them is good for society and one is good for the f- for the, for the profit, and they choose the profit. I think they actually... there's a lot of money to be made by doing the, the right thing for society. Like that... 'cause, 'cause Google, Facebook have so much cash-
- KDKate Darling
(laughs)
- LFLex Fridman
... that they actually would sign- especially Facebook, would significantly benefit from making decisions that are good for society. It's good for their brand, right? So, but I don't know if they know what's good for society. (laughs) That's the... we... I don't think we know what's good for society in terms of, uh, how, uh, yeah, how we manage the conversation on Twitter or how we design the... we're talking about robots. Like, should it, should we emotionally manipulate you into having a deep connection with Alexa or not?
- KDKate Darling
Yeah. Yeah. It's-
- LFLex Fridman
Do you have, do you have optimism that we'll be able to solve some of these questions? (laughs)
- KDKate Darling
Well, I'm gonna say something that's controversial, like, in my circles, which is that I don't think that companies who are reaching out to ethicists and trying to create interdisciplinary ethics boards, I don't think that that's totally just trying to whitewash the problem and, and, and so that they look like they've done something. I think that a lot of companies actually do, like you say, uh, care about what the right answer is. They don't know what that is, and they're trying to find people to help them find them. Not in every case, but I think... I... you know, it's much too easy to just vilify the companies as, like you said, as sitting there with their cat going, "Ha, ha, ha" $1 million.
- LFLex Fridman
Yeah.
- KDKate Darling
Um, that's not what happens. A lot of people are well-meaning, even within companies. Um, I think that what we do absolutely need is more interdisciplinarity, both within companies but also within the policymaking space, because we're... you know, we've hurtled into the world where technological progress is much faster, it seems much faster than it was, and things are getting very complex. And you need people who understand the technology, but also people who understand what the societal implications are and people who are thinking about this in a more systematic way to be talking to each other. There's no other solution, I think.
- 1:04:40 – 1:09:23
Intellectual property
- LFLex Fridman
So you've also done work on intellectual property. So if you look at the algorithms that these companies are using, like YouTube, Twitter, Facebook, so on, I mean, that's kind of, uh... tho- those are mostly secretive, uh, you know, the, the recommender systems behind, behind these algorithms. Do, do you think about IP and the transparency of algorithms like this? Like what is the responsibility to these companies to, um, open source the algorithms or at least reveal to the public what's... how these algorithms work?
- KDKate Darling
So I personally don't work on that. There are a lot of people who do though, and there are a lot of people calling for transparency. In fact, Europe's even trying to legislate, um, transparency. Maybe they even have at this point, where, like if, if an algorithmic system makes some sort of decision that affects someone's life, that you need to be able to see how that decision was made. Um, I... you know... (laughs) it's, it's a, it's a tricky balance because obviously companies need to have, you know, some sort of competitive advantage and you can't take all of that away or you stifle innovation. But yeah, for some of the ways that these systems are already being used, I think it, it is pretty important that people understand how they work.
- LFLex Fridman
What are your thoughts in general on intellectual property in this weird age of software, AI, robotics?
- KDKate Darling
Oh. That it's broken. I mean, the system is just broken.
- LFLex Fridman
So d- can you describe... I actually, I don't even know what intellectual property is in the space of s- software, what it means to... I mean, I th- so I believe I have a patent on a piece of software from my PhD.
- KDKate Darling
You believe? You don't know?
- LFLex Fridman
No, we went through a whole process. Yeah, I, I do. Uh-
- KDKate Darling
Do you get the spam emails like, "We'll frame your patent for you"?
- LFLex Fridman
(laughs) Yeah, it's much like a thesis.
- KDKate Darling
(laughs)
- LFLex Fridman
So, uh, so I... but that's useless, right? Or, or not? Where does IP stand in this age? What, what is... what's the right way to do it? What's the right way to protect and own ideas in this... when, when it's just code and, and this mishmash of...... something that feels much softer than a piece of machinery or-
- KDKate Darling
Yeah.
- LFLex Fridman
... an idea?
- KDKate Darling
I mean, it's hard because, uh, you know, there are different types of intellectual property, and they're kind of these blunt instruments. They're, they're like, it's like patent law is like a wrench. Like, it works really well for an industry like the pharmaceutical industry, but when you (laughs) try and apply it to something else, it's like, "I don't know. I'll just, like, hit this thing with a wrench and hope it works." (laughs) It's... So software, you know, software you have a couple of different options. Software, or, like, any code that's written down in some tangible form is automatically copyrighted. So you have that protection, but that doesn't do much because if someone takes the basic idea that the code is executing and just does it in a slightly different way, um, they can get around the copyright.
- LFLex Fridman
Right.
- KDKate Darling
So that's not a lot of protection. Then you can patent software, but that's kind, I mean, (sighs) getting a patent costs... I don't know if you remember what yours cost or, like, was it-
- LFLex Fridman
So we-
- KDKate Darling
... through an institution?
- LFLex Fridman
Yeah. It was through a university. That's why-
- KDKate Darling
Yeah.
- LFLex Fridman
... they... It was insane. There were so many lawyers, so many meetings. And it made me feel like it must have been hundreds of thousands of dollars. It was, must have been something-
- KDKate Darling
Oh, yeah.
- LFLex Fridman
... crazy.
- KDKate Darling
It's, it's insane the cost of getting a patent. And so this idea of, like, protecting the, like, inventor in their own garage, like, who came up with a great idea is kind of... That's a thing of the past. It's all just companies trying to protect things, and it costs a lot of money. And then with code, it's oftentimes, like, you know, by the time the patent is issued, which can take, like, five years, you know, probably your (laughs) code is obsolete at that point.
- LFLex Fridman
Absolutely.
- KDKate Darling
Um, so it's, it's a very, again, a very blunt instrument that doesn't work well for that industry. And so, you know, at this point, we should really have something better, but we don't.
- LFLex Fridman
Do you like open source? Yeah. Is open source good for society? You think all of us should open source code?
- KDKate Darling
Well, so at the Media Lab at MIT, um, we have an open source default because what we've noticed is that people will come in. They'll, like, write some code, and they'll be like, "How do I protect this?" And we're like, "Mm. Like, that's not your problem right now." Your problem isn't that someone's gonna steal your project. Your problem is getting people to use it at all.
- LFLex Fridman
Right.
- KDKate Darling
Like, there's so (laughs) much stuff out there. Like, eh, we don't even know if you're gonna get traction for your work. And so open sourcing can sometimes help, you know, get people's work out there but ensure that they get attribution for it, um, for the work that they've done. So, uh, like, I'm a fan of it in a lot of contexts. Obviously, it's not, like, a one-size-fits-all solution.
- 1:09:23 – 1:10:41
Lessons for robotics from parenthood
- LFLex Fridman
So what I gleaned from your Twitter is, uh, you're a mom... I, I saw a quote, a reference to Baby Bot.
- KDKate Darling
(laughs)
- LFLex Fridman
Uh, what have you learned about robotics and AI from raising a human baby bot?
- KDKate Darling
Well, I think that my child has just made it more apparent to me that the systems we're currently creating aren't like human intelligence. Like, it's, there's not a lot to compare there. Uh, it's just he, he has learned and developed in such a different way than a lot of the AI systems we're creating that that's not really interesting to me to compare. Um, but what is interesting to me (laughs) is how these systems are gonna shape the world that he grows up in. And so I'm, like, even more concerned about kind of the societal effects of developing systems that, you know, rely on massive amounts of data collection, for example.
- LFLex Fridman
So is he gonna be allowed to use, like, Facebook or... What's-
- KDKate Darling
(laughs)
- LFLex Fridman
What, it's-
- KDKate Darling
Facebook is over. Kids-
- LFLex Fridman
S-
- KDKate Darling
... don't use that anymore.
- LFLex Fridman
Snapchat? What do they use? Instagram?
- KDKate Darling
I don't know. Snapchat's over too. I don't know. I just heard that TikTok is over, which I've never even seen, so I don't know.
- LFLex Fridman
Oh no.
- KDKate Darling
We're old. We don't know.
- 1:10:41 – 1:12:35
Hope for future of robotics
- LFLex Fridman
T- Twitch. I need to start, I, I'm gonna start gaming and streaming my, my gameplay. So w- what, what do you see as the future of personal robotics, social robotics, interaction with our robots? Like, what are you excited about if you were to sort of philosophize about what w- might happen in the next five, 10 years that would be cool to see?
- KDKate Darling
Oh. I really hope that we get kind of a home robot that makes it, that's a social robot and not just Alexa. Like, um, it's... You know, I, I really love the Anki products. Um, I thought Jibo was, had some really, uh, great aspects. So I'm hoping that a company cracks that.
- LFLex Fridman
Me too. So Kate, it was, uh, wonderful talking to you today. It was a pleasure.
- KDKate Darling
Likewise. Thank you so much.
- LFLex Fridman
It was fun. Thanks for listening to this conversation with Kate Darling, and thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the podcast by signing up to Masterclass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @LexFridman. And now let me leave you with some tweets from Kate Darling. First tweet is, "The pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of the cereal bowl." (laughs) Second tweet is, "I came on here to complain that I had a really bad day and saw that a bunch of you are hurting too. Love to everyone." Thank you for listening and hope to see you next time.
Episode duration: 1:12:35
Transcript of episode 7KTbEn7PiaY