Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66
- 0:00 – 2:09
Introduction
- LFLex Fridman
The following is a conversation with Ayanna Howard. She's a roboticist, professor at Georgia Tech, and director of the Human Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments. Like me, in her work, she cares a lot about both robots and human beings, and so I really enjoyed this conversation. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play, and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Ayanna Howard.
- 2:09 – 5:05
Favorite robot
- LFLex Fridman
What, or who, is the most amazing robot you've ever met, or perhaps had the biggest impact on your career?
- AHAyanna Howard
I haven't met her, but I grew up with her, but of course Rosie. So, and I think it's because also-
- LFLex Fridman
Who's Rosie?
- AHAyanna Howard
Rosie from the Jetsons. She is all things to all people, right?
- LFLex Fridman
(laughs)
- AHAyanna Howard
Think about it, like anything you wanted, it was like magic, it happened. Um-
- LFLex Fridman
So people not only an-anthropomorphize, but project whatever they wish for the robot to be onto-
- AHAyanna Howard
Onto Rosie.
- LFLex Fridman
... Rosie.
- AHAyanna Howard
But also, I mean, think about it, she was socially engaging. She every so often had an attitude, right? Um, she kept us honest. She, she would push back sometimes when, you know, George was doing some weird stuff. Um, but she cared about people, especially the kids. Uh, she's, she was like the, the perfect robot. (laughs)
- LFLex Fridman
And, and you've said that people don't want their robots to be perfect. Can you elaborate that? What do you think that is? Just like you said, uh, Rosie pushed back a little bit every once in a while.
- AHAyanna Howard
Yeah. So I, I think it's that... So if you think about robotics in general, we want them because they enhance our quality of life, and, and usually that's linked to something that's functional, right? Uh, even if you think of self-driving cars, why is there a fascination? Because people really do hate to drive. Like there's the, like Saturday driving where I can just speed, but then there's the I have to go to work every day and I'm in traffic for an hour. I mean, people really hate that. Um, and so robots are designed to basically enhance our ability to increase our quality of life, and so the perfection comes from this aspect of interaction. Um, if I think about how we drive, if we drove perfectly, we would never get anywhere, right? So think about how many times you had to, uh, run past the light because you see the car behind you is about to crash into you, or that little kid, uh, kind of runs into the, the street and so you have to cross on the other side 'cause there's no cars, right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Like if you think about it, we are not perfect drivers. Some of it is because it's our world, and so if you have a robot that is perfect in that sense of the word, they, they wouldn't really be able to function with us.
- LFLex Fridman
Can you linger a little bit on the word perfection? Sort of from the robotics perspective, what does that word mean and how is sort of the optimal behavior, as you're describing, different than the, what we think of as perfection?
- AHAyanna Howard
Yeah. So perfection, if you think about it in the more theoretical point of view, it's really tied to accuracy, right? So if I have a function, can I complete it at 100% accuracy with zero errors? Um, and so that's kind of if you think about perfection, um, in the sense of the word.
- 5:05 – 8:43
Autonomous vehicles
- LFLex Fridman
And in a self-driving car realm, do you think from a robotics perspective, we kind of think that, uh, perfection means following the rules perfectly, sort of defining staying in the lane, changing lanes, when there's a green light you go, when there's a red light you stop, and that, that's the f-... And be able to perfectly see all the entities in the scene? That's the limit of what we think of as perfection?
- AHAyanna Howard
A- and I think that's where the problem comes is that, uh, when people think about perfection for robotics, the ones that are the most successful are the ones that are, quote-unquote, "perfect." Like I said, Rosie is perfect, but she actually wasn't perfect in terms of accuracy, but she was perfect in terms of how she interacted and how she adapted. And I think that's some of the disconnect is that we really want perfection with respect to its ability to adapt to us. We don't really want perfection with respect to 100% accuracy with respect to the rules that we just made up anyway, right? And so I think there's this disconnect sometimes between what we really want...... and what happens. And we see this all the time, like in, in my research, right? Like the, the optimal, quote-unquote, "optimal interactions" are when the robot is adapting based on the person, not 100% following what's optimal based on the rules.
- LFLex Fridman
Just to linger on autonomous vehicles for a second, just your thoughts, maybe off the top of the head, uh, is how hard is that problem, do you think, based on what we just talked about? You know, there's a lot of folks in the automotive industry that are very confident, from Elon Musk to Waymo to, to all these companies. How hard is it to solve that last piece-
- AHAyanna Howard
The last mile. (laughs)
- LFLex Fridman
... that's the, the, the gap between the perfection and the human definition of how you actually function in this world?
- AHAyanna Howard
Yeah, so this is a moving target. So I remember when, um, all the big companies started to heavily invest in this, and there was, uh, a number of even roboticists as well as, you know, folks who were putting in the VCs and, and corporations, Elon Musk being one of them, that said, you know, "Self-driving cars on the road with people, you know, within five years."
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Uh, uh, that was a little while ago, um, and now people are saying, "Five years, 10 years, 20 years." Some are saying, "Never," right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
I think if you look at some of the things that are being successful is these, um, basically fixed environments where you still have some anomalies, right? You still have people walking, you still have stores, but you don't have, um, other drivers, right, like other human drivers, or it's a dedicated space for the, for the cars. Um, because if you think about robotics in general, where it's always been successful is in f-... I mean, you can say manufacturing, like way back in the day, right? It was a fixed environment. Humans were not part of the equation. We're a lot better than that, but, like when we can carve out scenarios that are closer to that space, then I think that it's where we are.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
So a closed campus where you don't have self-driving cars-
- LFLex Fridman
Yep.
- AHAyanna Howard
... and maybe some protection so that the students don't jet in front just because they wanna see what happens. Like having a little bit, I think that's where we're gonna see the most success in the near future.
- LFLex Fridman
And be slow-moving.
- AHAyanna Howard
Right. Not, not, you know, 55, 60, 70 miles an hour, but the, the speed of, uh, a golf cart, right? (laughs)
- LFLex Fridman
So that said,
- 8:43 – 20:03
Tesla Autopilot
- LFLex Fridman
the most successful, in the automotive industry, robots operating today in the hands of real people are ones that are traveling over 55 miles an hour and in unconstrained environments, which is Tesla vehicles, so the Tesla autopilot. So I just... I would love to hear sort of your just thoughts of, uh, two things. So one, I don't know if you've gotten to see or have heard about something called Smart Summon, uh, where Tesla system, autopilot system, where the car drives zero-occupancy, no driver in the parking lot, slowly sort of tries to navigate the parking lot to find itself to you. And there's some incredible amounts of videos and just hilarity that happens as it awkwardly tries to navigate this (laughs) environment. But it's-
- AHAyanna Howard
(laughs)
- LFLex Fridman
... it's a beautiful nonverbal communication between machine and human that I think is, uh, from... i- it's like, it's some of the work that you do in this kind of interesting human/robot interaction space. So what are your thoughts in general about it?
- AHAyanna Howard
So I, I do have that feature.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Um...
- LFLex Fridman
Do you drive a Tesla?
- AHAyanna Howard
I do.
- LFLex Fridman
Oh, nice.
- AHAyanna Howard
Um, mainly because I'm a gadget freak, right?
- LFLex Fridman
(laughs)
- AHAyanna Howard
So I, I see... It's a, it's a gadget that happens to have some wheels.
- LFLex Fridman
Yeah.
- AHAyanna Howard
And yeah, I've seen some of the videos. Um...
- LFLex Fridman
But what's your experience like? I mean, you're, you're a human/robot interaction roboticist. You're a legit sort of expert in the field, so what does it feel for a machine to come to you?
- AHAyanna Howard
I- it's one of these very fascinating things, but also I am hyper, hyperalert, right? Like I'm hyper-alert.
- LFLex Fridman
(laughs) Yes.
- AHAyanna Howard
Like my butt... my thumb is like, "Oh, okay. I'm, I'm ready to take over." Um, even when I'm in my car or I'm doing things like automated backing into... Uh, so there's like a feature where you can do this automating backing into-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... a parking space, um, or bring the car out of your garage, um, or even, you know, pseudo autopilot on the freeway.
- LFLex Fridman
Right.
- AHAyanna Howard
Right? I am hypersensitive. I can feel... Like as I'm navigating, I'm like, "Yeah, that's an error right there."
- LFLex Fridman
Yeah.
- AHAyanna Howard
Like I, I'm very aware of it, uh, but I'm also fascinated by it, and it does get better. Like it... I, I look and see it's learning from all of these people who are cutting it on.
- LFLex Fridman
Yeah.
- AHAyanna Howard
Like every time I cut it on-
- LFLex Fridman
It gets better and better.
- AHAyanna Howard
... it's getting better, right? And so I think that's what's amazing about it is that.
- LFLex Fridman
So this nice dance of you're still hypervigilant, so you're still not trusting it at all- (laughs)
- AHAyanna Howard
Yeah, that's-
- 20:03 – 28:11
Ethical responsibility of safety-critical algorithms
- LFLex Fridman
w- do you think, from a robotics perspective, you know, if you, if you're kind of honest of what cars do, they, they kind of, we kind of threaten each other's life all the time. So cars are very as- I mean, in order to navigate intersections, there's an assertiveness, there's a risk-taking, and if you were to reduce it to an objective function, there's a probability of murder in that function, meaning you killing another human being, and you're using that... First of all, y- it has to be low enough to be acceptable to you on an ethical level, as a individual human being, but it has to be high enough for people to respect you, to not sort of take advantage of you completely, and jaywalk in front of you and so on. So h- uh, I mean, I don't think there's a right answer here, but what's, how do we solve that? How, how do we solve that from a robotics perspective when danger and human life is at stake?
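A minimal sketch of the tradeoff raised in this question, assuming a hypothetical planner that scores candidate maneuvers by balancing progress (assertiveness) against estimated collision risk. All names, weights, and thresholds below are illustrative, not taken from any real autonomy stack.

```python
# Hypothetical sketch: score candidate maneuvers by trading off progress against
# estimated collision risk. Every constant here is an illustrative placeholder.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_progress: float      # meters gained toward the goal
    collision_probability: float  # estimated probability of a collision

RISK_WEIGHT = 1e6           # collisions dominate the cost, but not infinitely
PROGRESS_WEIGHT = 1.0
MAX_ACCEPTABLE_RISK = 1e-6  # hard ethical floor chosen by the designer, not learned

def cost(m: Maneuver) -> float:
    """Lower is better. If risk were penalized infinitely, the car would never
    assert itself at a busy intersection."""
    return RISK_WEIGHT * m.collision_probability - PROGRESS_WEIGHT * m.expected_progress

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    # Discard anything above the hard risk ceiling, then pick the cheapest option.
    safe = [m for m in maneuvers if m.collision_probability <= MAX_ACCEPTABLE_RISK]
    candidates = safe or [min(maneuvers, key=lambda m: m.collision_probability)]
    return min(candidates, key=cost)

if __name__ == "__main__":
    options = [
        Maneuver("wait at intersection", 0.0, 1e-9),
        Maneuver("assertive merge", 15.0, 5e-7),
        Maneuver("aggressive cut-in", 20.0, 1e-4),
    ]
    print(choose(options).name)   # -> "assertive merge"
```

The point of the sketch is only that the risk term is explicit: someone has to choose how low is "low enough" and how assertive is "assertive enough."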
- AHAyanna Howard
Yeah. As they say, cars don't kill people, people kill people.
- LFLex Fridman
People kill people. (laughs)
- AHAyanna Howard
Um, right. Um, so I, I think-
- LFLex Fridman
And now robotic algorithms would be killing people.
- AHAyanna Howard
Right. So it will be, uh, robotics algorithms that are pro... No. It would be robotic algorithms don't kill people, developers of robotic algorithms-
- LFLex Fridman
Developers of-
- AHAyanna Howard
... kill people. Right? I mean-
- LFLex Fridman
So-
- AHAyanna Howard
... one of the things is people are still in the loop, and at least in the near and mid-term, I think people will still be in the loop, at some point, even if it's the developer. Like, we're not necessarily at the stage where, you know, robots are programming autonomous robots with different behaviors-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... quite yet. Um, not-
- LFLex Fridman
That's a scary notion. Sorry to interrupt. That a developer is, has some responsibility in, in a, in the death of a human being. That's a-
- AHAyanna Howard
I mean, I think that's why-
- LFLex Fridman
... that's a heavy burden.
- AHAyanna Howard
... th- the whole aspect of, of ethics in our community is so, so important, right? Like, because it's true. If, if, if you think about it, um, you can basically say, "I'm not going to work on weaponized AI," right? Like people can say, "That's not what I'm gonna do." But yet, you are programming algorithms that might be used in healthcare a- algorithms that might decide whether this person should get this medication or not, and they don't, and they die. You, okay, so w- that is your responsibility, right? And if you're not conscious and aware that you do have that power when you're coding and, and things like that, I think that's, that's, that's just not a good thing. Like, we need to think about this responsibility as we program robots and, and computing devices, um, much more than we are.
- LFLex Fridman
Yeah, so it's not an option to not think about ethics. I think it's a majority, I would say, of computer science, sort of, there, it's kind of a hot topic now, I think about bias and so on, but it's, and we'll talk about it, but usually it's kind of, you (laughs) , it's like a very particular group of people that work on that. And then people who do, like, robotics are like, "Well, I don't have to think about that, you know, there's other smart people thinking about it." It seems that everybody has to think about it. It's not, you can't escape the e- ethics, whether it's bias or just every aspect of ethics that has to do with human beings, yeah.
- AHAyanna Howard
Everyone. So think about, I'm gonna age myself, but I remember, uh, when we didn't have, like, testers, right? And so what did you do? As a developer, you had to test your own code, right? Like, you had to go through all the cases and figure it out, and you know, and then they realized that, you know, like, we probably need to have testing because we're not getting all the things. And so from there, what happens is, like, most developers, they do, you know, a little bit of testing, but it's usually like, "Okay, did my compiler bug out?"
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
"Let me look at the warnings. Okay, is that acceptable or not?" Right? Like, that's how you typically think about it as a developer, and you'll just assume that it's going to go to another process, and-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... they're gonna test it out. But I think we need to go back to those early days when, you know, you're a developer, you're developing. There should be like this, a, you know, "Okay, let me look at the ethical outcomes of this." Because there isn't a second, like, testing, ethical testers, right? It's you.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Um, we did it back in the early coding days. Um, I think that's where we are with respect to ethics. Like, let's go back to what was good practices, and only because we were just developing the field.
- LFLex Fridman
Yeah, and it's, uh, I mean, it's a really heavy burden. I've had to feel it recently in the last few months, but I think it's a good one to feel. Like, I've gotten messages, more than one, from people... You know, I've unfortunately gotten some attention recently, and I've gotten messages that say that I have blood on my hands because of working on semi-autonomous vehicles. The idea is that having semi-autonomy means people will lose vigilance, and so on, because they're human, as we described. And because of that, because of this idea that we're creating automation, there'll be people hurt because of it, and I think that's a beautiful thing to feel. I mean, there's many nights where I wasn't able to sleep because of this notion, you know? You really do think about people that might die because of this technology. Of course, you could then start rationalizing and saying, "Well, you know what? 40,000 people die in car crashes in the United States every year, and we're ultimately trying to save lives." But the reality is, the code you've written might kill somebody, and that's an important burden to carry with you as you design the code.
- AHAyanna Howard
I don't even think of it as a burden if we train this concept correctly from the beginning. And I use... And not to say that coding is like being a medical doctor, but think about it. Medical doctors, if y- they've been in situations where their patient didn't survive, right? Do they give up and go away? No. Every time they come in, they know that there might be a possibility that this patient might not survive. And so when they approach every decision, like, that's in their back of their head, and so why isn't that... we aren't teaching... And those are tools though, right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
They're given some of the tools to address that so that they don't go crazy, but we don't give those tools so that it does feel like a burden versus something of, "I have a great gift and I can do great, awesome good, but with it comes great responsibility." I mean, that's what we teach in terms of if you think about the medical schools, right? Great gift, great responsibility. I think if we just change the messaging a little-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... great gift, being a developer, great responsibility, and this is how you combine those.
- 28:11 – 38:20
Bias in robotics
- LFLex Fridman
On a slightly related note, uh, y- in a recent paper, The Ugly Truth About Ourselves and Our Robot Creations, you, you discuss, you highlight some biases that may affect the functioning of various robotic systems. Can you talk through, if you remember, examples of some?
- AHAyanna Howard
There's a lot of examples. I usually...
- LFLex Fridman
What is bias, first of all? I mean...
- AHAyanna Howard
Yeah, so bias is, um, this... And, and so bias, which is different than prejudice. So bias is that we all have these preconceived notions about particular, um, everything from particular groups for... to habits to, um, identity, right? So, we have these predispositions, and so when we address a problem, we look at a problem, we make a decision, those preconceived notions, uh, might affect our, our outputs or outcomes.
- LFLex Fridman
So, there the bias could be positive and negative, and then is prejudice the negative kind of bias?
- AHAyanna Howard
Prejudice is the negative, right? So, prejudice is that not only are you aware of your bias, but you are then take it and have a negative outcome-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... even though you're aware, like...
- LFLex Fridman
And there could be gray areas too? I mean, that's-
- AHAyanna Howard
There's always gray areas.
- LFLex Fridman
That's the challenging aspect of all ethical questions, right?
- AHAyanna Howard
Right, right. So, I always like... Uh, so there's, there's a funny one, and in fact, I think it, it might be in the paper 'cause I think I talk about, uh, self-driving cars. But think about this. We, um... For teenagers, right, typically, we... insurance companies charge quite a bit of money if you have a teenage driver.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
So, you could say that's an age bias-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
But no one will cl- I mean, parents will be grumpy, but no one really says that that's not fair, that it's their-
- LFLex Fridman
That's interesting. We don't, uh... That's right, that's right. It's, uh... Everybody in human factors and safety research al- almost, I mean, is quite ruthlessly critical of teenagers. (laughs) And we don't question, "Is that okay? Is that okay to be ageist in this kind of way?"
- AHAyanna Howard
It is, and it is ageist, right?
- LFLex Fridman
Yeah, yeah.
- AHAyanna Howard
It's definitely age-
- LFLex Fridman
There's no question about it. Yeah.
- AHAyanna Howard
And so, so th- these are, these... This is a gray area, right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
'Cause you, um, y- you know that, you know, teenagers are more likely-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... to be in accidents-
- LFLex Fridman
Yeah.
- 38:20 – 40:35
AI in politics and law
- LFLex Fridman
So speaking of this really nice notion that AI is maybe flawed but better than humans, so just made me think of it, one example of flawed humans is our political system. Do you think, or, uh, y- you said, uh, judicial as well, do you have a hope for AI sort of, uh, being elected for president? Or of, or running our Congress or being able to be a powerful representative of the people?
- AHAyanna Howard
So I mentioned, and I truly believe that this whole world of AI is in partnerships with people. And so what does that mean? I, I don't believe, or, and maybe I just don't, I don't believe that we should have an AI for president, but I do believe that a president should use AI as an advisor, right? Like, if you think about it, um, every president has a cabinet of individuals that have different expertise that they should listen to, right? Like, that's kind of what we do, and you put smart people with smart expertise around certain issues, and you listen. I don't see why AI can't function as one of those smart individuals giving input. So maybe there's an AI on healthcare, maybe there's an AI on education and, right? Like all of these things that a human is processing, right? I, I s- because at the end of the day, there's people that are human that are going to be at the end of the decision, and I don't think as a world, as a culture, as a society, that we would totally beli- and this is us, like this is some fallacy about us. But we need to see that leader, that person as human, um, and most people don't realize that, like, leaders have a whole lot of advice, right? When they say something, it's not that they woke up... well, usually, they don't wake up in the morning and be like, "I have a brilliant idea," right? It's usually a, "Okay, let me listen. I have a brilliant idea, but let me get a little bit of feedback on this, like, okay." And then it's a, "Yeah, that was an awesome idea," or it's like, "Yeah, let me go
- 40:35 – 47:44
Solutions to bias in algorithms
- AHAyanna Howard
back."
- LFLex Fridman
We already talked through a bunch of them, but are there some possible solutions to the biases present in our algorithms beyond what we just talked about?
- AHAyanna Howard
So I think there's two paths. One is to figure out how to systematically do the feedback and correction. So right now, it's ad hoc, right? It's a researcher identifies some outcomes that are not s- don't seem to be fair, right? They publish it. They write about it. Um, and they, either the developer or the companies that have adopted the algorithms may try to fix it, right? And so it's really ad hoc and it's not systematic. There's, it's just, it's kind of like, "I'm a researcher, that seems like an interesting problem," um, which means that there's a whole lot out there that's not being looked at, right? 'Cause it's kind of researcher driven. Um, I, and, and I don't necessarily have a solution, but that process I think could be done a little bit better. Uh, one way is, um, I'm going to poke a little bit at some of the corporations.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Right? Like, maybe the corporations, when they think about a product, they should, um, instead o- in addition to hiring these, you know, bug, they give these, um-
- LFLex Fridman
Oh, yeah, yeah, yeah. (laughs)
- AHAyanna Howard
... where you, you-
- LFLex Fridman
Like awards when you find a bug.
- AHAyanna Howard
Yeah.
- LFLex Fridman
In your security, security hole.
- AHAyanna Howard
Yeah, a security bug.
- LFLex Fridman
Yeah.
- AHAyanna Howard
You know, let's, let's put it like, we will give the, whatever the award is that we give for the people who find these security holes, find an ethics hole, right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
Like, find an unfairness hole and we will pay you X for each one you find. I mean, why can't they do that?
- LFLex Fridman
Yeah.
- AHAyanna Howard
One, it's a win-win. They show that they're concerned about it, that this is important, and they don't have to necessarily dedicate it their own, like, internal resources. And it also means that everyone who has, like, their own bias lens, like, "I'm interested in age and so I'll find the ones based on age," and, "I'm interested in gender and..." Right? Which means that you get, like, all of these different perspectives.
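As a rough illustration of what an external "ethics bounty hunter" might actually run, here is a minimal sketch that compares an algorithm's positive-outcome rates across groups and flags large gaps. The record format, grouping field, and the 10% threshold are all hypothetical choices for illustration, not an established fairness standard.

```python
# Hypothetical sketch of a simple "unfairness hole" check: compare outcome rates
# across groups and report any pair whose rates differ by more than max_gap.
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        positives[g] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def find_fairness_holes(records, group_key, outcome_key, max_gap=0.10):
    """Report group pairs whose positive-outcome rates differ by more than max_gap."""
    rates = outcome_rates_by_group(records, group_key, outcome_key)
    groups = sorted(rates)
    holes = []
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            gap = abs(rates[a] - rates[b])
            if gap > max_gap:
                holes.append((a, b, round(gap, 3)))
    return holes

# Example: decisions labeled with an age bracket (made-up data).
decisions = [
    {"age_bracket": "teen", "approved": 0}, {"age_bracket": "teen", "approved": 0},
    {"age_bracket": "teen", "approved": 1}, {"age_bracket": "adult", "approved": 1},
    {"age_bracket": "adult", "approved": 1}, {"age_bracket": "adult", "approved": 0},
]
print(find_fairness_holes(decisions, "age_bracket", "approved"))
# -> [('adult', 'teen', 0.333)]
```

Each outside researcher could bring their own grouping of interest (age, gender, dialect, and so on), which is exactly the many-lenses effect described above.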
- LFLex Fridman
But you think of it in a data-driven way, so like we'll see... sort of, if we, if we look at a company like Twitter, it gets, it's under a lot of fire for discriminating against certain political beliefs.
- AHAyanna Howard
Correct.
- LFLex Fridman
And sort of, there's a lot of people, this is the, the sad thing, 'cause I know how hard the problem is, and I know the Twitter folks are working really hard at it, even Facebook that everyone seems to hate are working really hard at this. Uh, you know, w- the kind of evidence that people bring is basically anecdotal evidence. "Well, me or my friend, all we said is X, and for that, we got banned." And, and that's kind of a discussion of saying, well, look, that's usually, first of all, the whole thing is taken out of context, so there, they present sort of anecdotal evidence. And how are you supposed to as a company, in a healthy way, have a discourse about what is and isn't ethical? What, how do we make algorithms ethical when people are just blowing everything? Like, they're outraged about a particular anecdotal evident, piece of evidence that's very difficult to sort of contextualize in the big data-driven way. Do, do you have a hope for companies like Twitter and Facebook?
- AHAyanna Howard
Yeah. Um, so I think there, there's a couple of things going on, right? Uh, first off, uh, the, remember this whole aspect of we are becoming, um, reliant on technology. We're also becoming reliant on, um, a lot of these, um, the, the apps and the s- resources that are provided, right? So some of it is kind of anger, like, "I need you," right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
"And you're not working for me." Right?
- LFLex Fridman
Yeah. (laughs) "You're not working for me," right?
- AHAyanna Howard
Um, but I think it's-
- LFLex Fridman
(laughs)
- AHAyanna Howard
... a- and so some of it, and I, and I wish that, um, there was a little bit of change of rethinking. So some of it is like, "Oh, we'll fix it in-house." No. That's like, okay, I'm a fox, and I'm going to watch these hens because I think it's a problem that foxes eat hens.
- LFLex Fridman
Yeah.
- AHAyanna Howard
No, right? Like, use, like be good citizens and say, "Look. We have a problem, and we are willing to open ourselves up for others to come in and look at it, and not try to fix it in-house." Because if you fix it in-house, there's conflict of interest. If I find something, I'm probably gonna wanna fix it and hopefully the media won't pick it up, right? And that then causes distrust because someone inside is gonna be mad at you and go out and talk about how, "Yeah, they canned the resume"-
- LFLex Fridman
Yeah.
- 47:44 – 49:57
HAL 9000
- LFLex Fridman
So y- you've, uh, started your career at NASA Jet Propulsion Laboratory, but before I'd ask you a few questions there, have you happened to have ever seen Space Odyssey, 2001 Space Odyssey? (laughs)
- AHAyanna Howard
Yes.
- LFLex Fridman
Okay. Do you think (laughs) , do you think HAL 9000... So we're talking about ethics, do you think HAL did the right thing by taking the priority of the mission over the lives of the astronauts? Do you think HAL is good or evil? Easy questions today.
- AHAyanna Howard
Yeah. HAL was misguided.
- LFLex Fridman
You're one of the people that would be in charge of an algorithm like HAL.
- AHAyanna Howard
Yeah.
- LFLex Fridman
So how would you do better?
- AHAyanna Howard
If you think about what happened was there was no fail-safe, right? So we, uh, perfection, right? Like, what is that? I'm gonna make something that I think is perfect, but if my assumptions are wrong, it'll be perfect based on the wrong assumptions, right? That's something that you don't know until you deploy and then you're like, "Oh, yeah. Messed up." But what that means is that when we design software, such as in Space Odyssey, uh, when we put things out, that there has to be a s- fail-safe. There has to be the ability that once it's out there, you know, we can grade it as an F, and it fails-... and it doesn't continue, right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
If there's some way that it can be brought in and, and removed and, and that's aspect, uh, because that's what happened with, with HAL. It was like assumptions were wrong. It was perfectly correct based on those assumptions-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... and there was no way to change, change it, change the assumptions at all.
- LFLex Fridman
And the change to fall back would be to a human, so you ultimately think like humans should be, uh, you know, it's not turtles or AI all the way down. It's, at some point there's a human that actually makes a decision.
- AHAyanna Howard
I still think that an- and again, because I do human-robot interaction, I still think the human needs to be part of the equation at some point.
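A minimal sketch of the fail-safe idea discussed here: the autonomous policy runs only while its stated assumptions hold, and a separate monitor can always "grade it an F" and hand control back to a human. The class, checks, and field names are invented for illustration.

```python
# Hypothetical fail-safe wrapper: run the autonomous policy only while its
# assumptions hold; otherwise hand control to a human fallback.
class FailSafeController:
    def __init__(self, policy, assumption_checks, human_fallback):
        self.policy = policy
        self.assumption_checks = assumption_checks  # list of (name, predicate) pairs
        self.human_fallback = human_fallback
        self.disabled = False

    def step(self, observation):
        if not self.disabled:
            violated = [name for name, ok in self.assumption_checks if not ok(observation)]
            if violated:
                # The system can be pulled out of the loop once its assumptions break.
                self.disabled = True
                print(f"assumptions violated: {violated}; handing control to human")
        actor = self.human_fallback if self.disabled else self.policy
        return actor(observation)

# Toy usage with made-up checks.
auto_policy = lambda obs: "continue mission"
human = lambda obs: "await operator command"
checks = [("crew alive", lambda obs: obs["crew_alive"]),
          ("comms up", lambda obs: obs["comms_ok"])]

ctrl = FailSafeController(auto_policy, checks, human)
print(ctrl.step({"crew_alive": True, "comms_ok": True}))    # continue mission
print(ctrl.step({"crew_alive": True, "comms_ok": False}))   # falls back to the human
```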
- LFLex Fridman
So what
- 49:57 – 51:53
Memories from working at NASA
- LFLex Fridman
... Just looking back, what are some fascinating things in robotic space that NASA was working at the time? Or just in general, what, what have you gotten to play with and what are your memories from working at NASA?
- AHAyanna Howard
Yeah. So one of my first memories was they were working on a surgical robot system that could do eye surgery, right?
- LFLex Fridman
Wow.
- AHAyanna Howard
And this was back in, oh my gosh, it must have been, oh, maybe '92, '93, '94.
- LFLex Fridman
So it's like, almost like a remote operation of a robot?
- AHAyanna Howard
Yeah. It was, it was remote operation, and in fact I, you can even find some old tech reports on it. So think of it, you know, like now we have da Vinci-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... right? Like think of it, but these were like the late '90s, right? And I remember going into the lab one day and I was like, "What's that?" Right? And of course it, it wasn't pretty, right? 'Cause the technology, but it was like functional, and you had this, this individual that could use version of haptics to actually do the surgery, and they had this mockup of a human face and like the eyeballs, and you can see this l- little drill.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
And I was like, "Oh, that is so cool."
- LFLex Fridman
(laughs)
- AHAyanna Howard
Um, that one I vividly remember, uh, because it was so outside of my, like, possible thoughts of what could be done.
- LFLex Fridman
Just the kind of precision and the ... I mean, what, what's the most amazing of a thing like that?
- AHAyanna Howard
I think it was the precision. It was the kind of first time that I had physically seen-
- LFLex Fridman
Hmm.
- AHAyanna Howard
... this robot machine human interface, right? Versus ... 'Cause y- y- manufacturing had been, you saw those kind of big robots, right? But this was like, oh, this is ... And a person. There's a person-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... and, and a robot like-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... in the same space.
- LFLex Fridman
So meeting them in person. I,
- 51:53 – 54:27
SpotMini and Bionic Woman
- LFLex Fridman
like for me, it was a magical moment that I can't ... It was life transforming that I recently met Spot Mini from Boston Dynamics.
- AHAyanna Howard
Oh, see.
- LFLex Fridman
I don't know why, but on a human-robot interaction, for some reason I realized how easy it is to anthropomorphize.
- AHAyanna Howard
(laughs)
- LFLex Fridman
And it was, I don't know, I, it was, um, it was almost like falling in love with this feeling of meeting ... And I've obviously seen these robots a lot on video and so on, but meeting in person, just having that one-on-one time-
- AHAyanna Howard
It's different.
- LFLex Fridman
... is different. So do you ha- Have you had a robot like that in your life that was ... made you maybe fall in love with robotics? Sort of o- like meeting in person? Or was it *******?
- AHAyanna Howard
I mean, I, I mean, I, I loved robotics since-
- LFLex Fridman
From the beginning? (laughs)
- AHAyanna Howard
Yeah. So ... (laughs) I was a 12-year-old, like-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... "I'm gonna be a roboticist."
- LFLex Fridman
Yeah.
- AHAyanna Howard
Uh, actually it was, I called it cybernetics. But, so my, my motivation was Bionic Woman. I don't know if you know that.
- LFLex Fridman
Yes.
- AHAyanna Howard
Um, and so, I mean, that was like a seminal moment, but I didn't meet ... Like, that was TV, right? Like, it wasn't like I was in the same space and I met and I was like, "Oh my gosh, you're, like, real."
- LFLex Fridman
Just lingering on Bionic Woman, which by the way, because I, I read that about you, uh, I, I watched uh, bit- bits of it and it's just so, no offense, terrible.
- AHAyanna Howard
(laughs)
- LFLex Fridman
(laughs)
- AHAyanna Howard
It's cheesy-
- LFLex Fridman
It's cheesy.
- AHAyanna Howard
... if you look at it now.
- LFLex Fridman
It's cheesy. Now-
- AHAyanna Howard
I've, I've seen a couple of reruns lately. (laughs) Uh ...
- LFLex Fridman
(laughs) But it's, uh, but it of course at the time was probably, uh-
- AHAyanna Howard
But the sound effects.
- LFLex Fridman
... captured the imagination. (laughs)
- AHAyanna Howard
(laughs)
- LFLex Fridman
(laughs) Especially when, when y- uh, when you're younger it just catches you. But which aspect? Did you think of it ... You mentioned cybernetics. Did you think of it as robotics or did you think of it as almost constructing artificial beings? Like is it the intelligent part that, that captured your fascination or was it the whole thing, like even just the limbs and just the-
- AHAyanna Howard
So for me it would've, in another world, I probably would've been more of a biomedical engineer-
- 54:27 – 57:11
Future of robots in space
- LFLex Fridman
But just to linger on NASA for a little bit, uh, what do you think maybe, if you have other memories, but also what do you think is the future of robots in space? We mentioned HAL, but there's incredible robots that NASA's working on in general, thinking about in our ex- as we venture out, human civilization ventures out into space. What do you think the future robot is there?
- AHAyanna Howard
Yeah. So I mean, there's the near term. For example, they just announced the, uh, the rover that's going to the moon.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Which, you know, that's kind of exciting, but that's like near term. Uh, you know, my favorite, favorite, favorite series is Star Trek.
- LFLex Fridman
(laughs)
- AHAyanna Howard
Right? You know, I-I really hope, and even Star Trek, like, if I calculate the years, I wouldn't be alive. But I would really, really love to be in that world. (laughs) Like, even if it's just at the beginning, like, you know, like voyage, like adventure one.
- LFLex Fridman
So, basically living in space?
- AHAyanna Howard
Yeah.
- LFLex Fridman
With wha- what robots, wh- what do robots-
- AHAyanna Howard
With Data.
- LFLex Fridman
What role-
- AHAyanna Howard
With Data, it would have to be, even though that wasn't, you know, that was, like, later, but...
- LFLex Fridman
So, Data is a robot that has human-like qualities.
- AHAyanna Howard
Right, without the emotion chip, yeah. T-
- LFLex Fridman
You don't like emotion in your robots?
- AHAyanna Howard
Well, so Data with the emotion chip was kind of a mess, right?
- LFLex Fridman
(laughs)
- AHAyanna Howard
It, it took a while- (laughs)
- LFLex Fridman
Yeah.
- AHAyanna Howard
... for, for that thing to adapt. Um, but, uh, and- and so why was that an issue? The issue is, is that emotions make us irrational agents, that's the problem, and yet he could think through things even if it was based on an emotional scenario, right? Based on pros and cons. But as soon as you made him emotional, one of the metrics he used for evaluation was his own emotions, not people around him, right? Like, and so...
- LFLex Fridman
We do that as children, right? So, we're- we're very egocentric when we're young.
- AHAyanna Howard
We are very egocentric.
- LFLex Fridman
And so isn't that just an early version of the emotion chip then? I haven't watched much Star Trek.
- AHAyanna Howard
Yeah.
- LFLex Fridman
I, I... (laughs)
- AHAyanna Howard
Except I have also met adults, um-
- LFLex Fridman
With that, yeah.
- AHAyanna Howard
Right? And so that is, that is a developmental process, and, um, I'm sure there's a bunch of psychologists that could go through-
- LFLex Fridman
Yes.
- 57:11 – 1:02:38
Human-robot interaction
- LFLex Fridman
So, how much psychology do you think, a topic that's rarely mentioned in robotics, but how much does psychology come into play when you're talking about HRI, Human Robot Interaction, when you have to have robots that actually interact with people?
- AHAyanna Howard
Tons. So, we, like my group as well as I, read a lot in the cognitive science literature as well as the psychology literature, um, because they understand a lot about human-human relations and developmental milestones and things like that, um, and so we tend to look to see what, what's been done out there. Um, sometimes what we'll do is we'll try to match that to see is that human-human relationship the same as human-robot? Sometimes it is and sometimes it's different, and then when it's different, we have to, we- we try to figure out, okay, why is it different in this scenario, but it's the same in the other scenario, right? And so we try to do that quite a bit.
- LFLex Fridman
Would you say that's, if we're looking at the future of human-robot interaction, would you say the psychology piece is the hardest? Like if you, it's, I mean it's a funny notion for you as a- a- I don't know if you consider... Yeah, I mean, one way to ask it, do you consider yourself a roboticist or a psychologist? (laughs)
- AHAyanna Howard
Oh, I consider myself a roboticist that plays the act of a psychologist. (laughs)
- LFLex Fridman
But if you were to look at yourself sort of, um, you know, 20, 30 years from now, do you see yourself more and more wearing the psychology hat? Eh, uh, sort of another way to put it is, are the hard problems in human-robot interactions fundamentally psychology, or is it still robotics, the per- the perception, manipulation, planning, all of that kind of stuff?
- AHAyanna Howard
It's actually neither. The hardest part is the adaptation and the interaction. So-
- LFLex Fridman
The learning?
- AHAyanna Howard
It's the, it's the interface, it's the learning, and so if I think of, like, I've become much more of a roboticist/AI person-
- LFLex Fridman
Mm.
- AHAyanna Howard
... than when I... Like, originally, again, I was about the bionics. I was a con- I was electrical engineer, I was control theory, right?
- LFLex Fridman
Okay.
- AHAyanna Howard
Like, and then I started realizing that my algorithms needed, like, human data, right? And so then I was like, okay, what is this human thing, right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
How do I incorporate human data? And then I realized that human perception had th- like, there was a lot in terms of how we perceived the world, and so trying to figure out how do I model human perception for my r- and so I became a HRI person, Human Robot Interaction person from being a control theorian, realizing that humans actually offered quite a bit.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Uh, and then when you do that, you become a, more of a artificial intelligence, AI, and so I see myself evolving more in this AI world, um, under the lens of robotics having hardware, interacting with people.
- LFLex Fridman
So, you're a, a world-class expert researcher in robotics, and yet others, you know, there's a few, it's a, it's a small but, uh, fierce community of people, but most of them don't take the journey into the H of HRI, into the human. So, why did you brave into the interaction with humans? It seems like a really hard problem.
- AHAyanna Howard
It's a hard problem and it's very risky as an academic.
- LFLex Fridman
Yes.
- AHAyanna Howard
And I, I knew that when I started down that journey, that it was very risky as an academic, um, in this world that was nuanced, it was just developing. We didn't even have a conference, right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
At the time. Because the- it was the interesting problems, that was what drove me. It was... The fact that I looked at what interests me in terms of the application space and the problems, and that pushed me into trying to figure out what people were and what humans were and how to adapt to them. Um, if those problems weren't so interesting-I'd probably still be sending rovers to glaciers, right? Uh, but the problems were interesting. And the other thing was that they were hard, right? So, it's, I like having to go into a room and being like, "I don't know what to do."
- LFLex Fridman
(laughs)
- AHAyanna Howard
And then going back and saying, "Okay, I'm gonna figure this out." I, uh, do not... I'm not driven when I go in, I'm like, "Oh, there are no surprises."
- LFLex Fridman
Yeah.
- AHAyanna Howard
Like, I, I don't find that satisfying. Um, if that was the case, I'd go someplace and make a lot more money.
- LFLex Fridman
Yeah.
- AHAyanna Howard
Right? I think... I stay in academic because... and choose to do this because I can go into a room, I'm like, "That's hard."
- LFLex Fridman
Yeah, I think, uh, just from my perspective, maybe you can correct me on it, but if I just look at the field of AI broadly, it seems that human-robot interaction has the most... one of the most, uh, number of open problems. Like, uh, people... especially relative to how many people are willing to acknowledge that there are. (laughs) This, uh, because most people are just afraid of the human, so they don't even acknowledge how many open problems there are, but it's, uh... in terms of difficult problems to solve, exciting spaces, it seems to be, uh, uh, in- incredible for that.
- 1:02:38 – 1:09:26
Trust
- LFLex Fridman
You've mentioned trust before. What role does trust from, uh, interacting with autopilot to in the medical context... what role does trust play in the human-robot interactions piece?
- AHAyanna Howard
Um, so some of the things I study in this domain is not just trust, but it really is over-trust.
- LFLex Fridman
How do you think about over-trust? Like, what is... first of all, what is (laughs) , uh, what is trust and what is over-trust?
- AHAyanna Howard
Basically, the way I look at it is trust is not what you click on a survey. Trust is about your behavior. So, if you interact with the technology based on the decision or the actions of the technology as if you trust that decision-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... then you're trusting, right? Um, and, um, even in my group, we've done surveys that, you know, on the thing, do I... "Do you trust robots?" Of course not. "Would you follow this robot in a b-burning building?" Of course not.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Right? And then you look at their actions and you're like, clearly your behavior does not match what you think, right? Or what you think you would like to think, right? And so I'm really concerned about the behavior, 'cause that's really... at the end of the day, when you're in the world, that's what will impact others around you. It's not whether before you went onto the street, you, you clicked on, like, "I don't trust self-driving cars."
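A minimal sketch of the distinction being drawn here: stated trust (a survey answer) versus behavioral trust (how often a person actually follows the robot's recommendation). The event-log format and field names are made up for illustration.

```python
# Hypothetical comparison of stated trust vs. behavioral trust.
def behavioral_trust(interaction_log):
    """Fraction of robot recommendations the person actually complied with."""
    if not interaction_log:
        return 0.0
    followed = sum(1 for event in interaction_log if event["complied"])
    return followed / len(interaction_log)

survey_answer = 2   # "Do you trust this robot?" on a 1-5 scale -> "not really"
log = [
    {"recommendation": "take exit B during fire drill", "complied": True},
    {"recommendation": "wait here for guidance",        "complied": True},
    {"recommendation": "follow me down this corridor",  "complied": True},
    {"recommendation": "ignore the marked emergency exit", "complied": False},
]

print(f"stated trust:     {survey_answer}/5")
print(f"behavioral trust: {behavioral_trust(log):.2f}")   # 0.75 -- the actions tell a different story
```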
- LFLex Fridman
You know, that, uh... from an outsider perspective, it's always frustrating to me. Well, I read a lot, so I'm insider in a certain philosophical sense. Uh, the... it's frustrating to me how often trust is used in surveys and how people say... make claims out of any kind of finding they make while somebody clicking an answer. Because trust is a, uh, yeah, behavior just... you said it beautifully. I mean, the action, your own behavior is, is what trust is. I mean, that... everything else is not even close. It's almost like a absurd comedic, uh, poetry that you weave around your actual behaviors. So, some people can say their, they, their trust... "You know, I truf- trust my wife, husband," or not, whatever, but the actions is what speaks volumes.
- AHAyanna Howard
Right. You bug their car-
- LFLex Fridman
Yeah. (laughs)
- AHAyanna Howard
(laughs) ... you probably don't trust them.
- LFLex Fridman
I trust them, I'm just making sure. No, no, that's, uh, yeah.
- AHAyanna Howard
Like, even if you think about cars, I think is a beautiful case. I came here, uh, at some point I'm sure on either Uber or Lyft, right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
I remember when it first came out, right? I bet if they had had a survey, "Would you get in the car with a stranger-"
- LFLex Fridman
Oh.
- AHAyanna Howard
"... and pay them?"
- LFLex Fridman
Yes.
- AHAyanna Howard
How many people do you d- think would have said, "Ugh. Like, really?" You know? Wait, even worse. "Would you get in the car with a stranger at 1:00 AM in the morning-"
- LFLex Fridman
No.
- AHAyanna Howard
"... to have them drop you home as a single female?"
- LFLex Fridman
Yeah.
- AHAyanna Howard
Like, how many people would say, "Uh, that's stupid"?
- LFLex Fridman
(laughs) Yeah.
- AHAyanna Howard
And now look at where we are. Be... I mean, people put kids, like, right? Like-
- LFLex Fridman
Yes.
- AHAyanna Howard
... "Oh, yeah. My, my child has to go to, um, school."
- LFLex Fridman
Yeah.
- 1:09:26 – 1:15:06
AI in education
- LFLex Fridman
So, what are some exciting applications of human-robot interaction? You started a company, maybe you can talk about the- the- the exciting efforts there. But in general also, what other space can robots interact with humans and help?
- AHAyanna Howard
Yeah, so besides healthcare, 'cause, you know, that's my biased lens, my other biased lens is education. I think that... Well, one, we- we definitely, we... In the US, you know, we're doing okay with teachers, but, uh, there's a lot of school districts that don't have enough teachers. If you think about the teacher, um, student ratio for, uh, at least public education, um, in some districts, it's crazy. It's like, how can you have learning in that classroom, right? Because you just don't have the human capital. Um, and so if you think about robotics, bringing that in to classrooms as well as the after-school space where they offset some of this lack of resources in certain communities, um, I think that's a good place. And then turning on the other end is using these systems then for workforce retraining and, uh, dealing with some of the things that are going to come out later on, of job loss. Like, thinking about robots, and- and AI systems, for retraining and workforce development. I think that's exciting areas, um, that can be pushed even more, and it would have a huge, huge impact.
- LFLex Fridman
What would you say are some of the open problems, uh, w- in, uh, education? Sort of, what... It's, uh, exciting. So, young kids and the older folks, or just folks of all ages, who, uh, need to be retrained, who need to sort of open themselves up to a whole nother, uh, area of work. What- what are the problems to be solved there? How do you think robots can help?
- AHAyanna Howard
We- we have the engagement aspect, right? So, we can figure out the engagement. That's not a-
- LFLex Fridman
What do you mean by engagement?
- AHAyanna Howard
So, identifying whether a person, um, is focused, is-
- LFLex Fridman
Gotcha.
- AHAyanna Howard
Like, that we can figure out. What we can't figure out, uh, and- and there's some positive results in this, is that personalized adaptation based on any concepts, right? So imagine, I think about, um, I have an agent, uh, and I'm working with a- a kid learning, I don't know, algebra two.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Can that same agent then switch and teach, um, some type of new coding skill to a displaced mechanic?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Like, wh- what does that actually look like?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Right? Like, hardware might be the same, content is different, two different target demographics of engagement. Like, how do you do that?
- LFLex Fridman
How important do you think personalization is in human-robot interaction? And not just the mechanic or student, but, like, literally to the individual human being?
- AHAyanna Howard
I think personalization is really important, but, um, a caveat is that I think we'd be okay if we can personalize to the group, right? And so, um, if I can, uh, label you as, uh, along some certain dimensions-
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
... then even though it may not be you specifically, I can put you in this group. So, the sample size, this is how they best learn, this is how they best engage. Even at that level, it's really important, and it's because... I mean, it's one of the reasons why educating in large classrooms is so hard, right? You teach to, you know, the median.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
But there's these, you know, individuals that are, you know, struggling, and then you have highly intelligent individuals, and those are the ones that are usually, you know-... kind of left out, so highly intelligent individuals may be disruptive and those who are struggling might be disruptive because they're both bored.
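A minimal sketch of group-level personalization as described above: instead of modeling each learner individually, bucket learners along a couple of observed dimensions and adapt the lesson to the bucket. The feature names, thresholds, and lesson policies are all hypothetical.

```python
# Hypothetical group-level personalization: coarse buckets, per-bucket lesson policy.
def assign_group(learner):
    """Place a learner in a coarse group using two observed signals."""
    fast = learner["mastery"] >= 0.7        # fraction of recent exercises correct
    engaged = learner["engagement"] >= 0.5  # e.g., on-task time from an engagement model
    if fast and engaged:
        return "accelerate"
    if not fast and engaged:
        return "reinforce"
    return "re-engage"   # struggling or bored learners first need to be re-engaged

LESSON_POLICY = {
    "accelerate": {"difficulty": "hard",   "pace": "fast", "hints": "minimal"},
    "reinforce":  {"difficulty": "medium", "pace": "slow", "hints": "frequent"},
    "re-engage":  {"difficulty": "easy",   "pace": "slow", "hints": "frequent"},
}

learners = [
    {"name": "A", "mastery": 0.9, "engagement": 0.8},
    {"name": "B", "mastery": 0.4, "engagement": 0.7},
    {"name": "C", "mastery": 0.8, "engagement": 0.2},
]

for person in learners:
    group = assign_group(person)
    print(person["name"], group, LESSON_POLICY[group])
```

The same idea is how recommender systems avoid the "teach to the median" problem: the group is narrow enough that its needs approximate individual needs.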
- LFLex Fridman
Yeah, and if you narrow the s- the definition of the group or in the size of the group enough, it'll, you'll be able to address their individual-
- AHAyanna Howard
Yes.
- LFLex Fridman
... it's not individual needs, but really the most-
- AHAyanna Howard
Group needs, or-
- LFLex Fridman
... group, most important, group needs.
- AHAyanna Howard
Right.
- LFLex Fridman
Right? And that's kind of what a lot of successful recommender systems do with Spotify and so on. This is sad to believe, but, um, as a music listener, probably in some sort of large group.
- AHAyanna Howard
Yeah. (laughs)
- LFLex Fridman
It's very pred- It's very sadly predictable.
- AHAyanna Howard
You have been labeled.
- 1:15:06 – 1:17:17
Andrew Yang, automation, and job loss
- LFLex Fridman
But on the retraining part, what are your thoughts? There's a candidate, Andrew Yang, running for president-
- AHAyanna Howard
Yes. (laughs)
- LFLex Fridman
... saying that, uh, sort of, uh, AI automation robots are gonna take our jobs.
- AHAyanna Howard
Universal basic income.
- LFLex Fridman
Universal basic income in order to support us as we kind of automation takes people's jobs and allows you to explore and find other means. Like, do you have a concern of society transforming effects of automation and robots and so on?
- AHAyanna Howard
I do. I do know that AI robotics will displace workers. Like, we do know that. But there'll be other workers that will be defined, uh, new jobs. Um, what I worry about is... That's not what I worry about, like, will all the jobs go away? What I worry about is the type of jobs that will come out, right? Like, people who graduate from Georgia Tech will be okay, right? We give them the skills, they will adapt even if their current job goes away. Uh, I do worry about those that don't have that quality of an education, right? Will they have the ability, the background to adapt to those new jobs? That, I don't know. That, I worry about, uh, which will create even more polarization in, in our society, internationally, and everywhere. I worry about that. Um, I also worry about not having equal access to all these wonderful things that AI can do and robotics can do. Uh, I worry about that. Um, you know, people like, people like me from Georgia Tech, from, say, MIT will be okay.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Right? But that's such a small part of the population that we need to think much more globally of, of having access to the beautiful things, whether it's AI in healthcare, AI in education, um, AI in, in politics, right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
Um, I, I worry about that.
- LFLex Fridman
And that's part of the thing that you're talking about is people that build the technology have to be thinking about ethics, have to be thinking about access-
- AHAyanna Howard
Yes.
- LFLex Fridman
... and all those things. And not, not just a small, small subset.
- 1:17:17 – 1:25:01
Love, AI, and the movie Her
- LFLex Fridman
Let me ask some philosophical, slightly romantic questions-
- AHAyanna Howard
All right.
- LFLex Fridman
... people that listen to this will be like, "Here he goes again." Okay. Do you think, (laughs) do you think one day we'll build an AI system that we, a person can fall in love with and it would love them back? Like in the movie Her, for example.
- AHAyanna Howard
Oh, yeah. Although she, she kind of didn't fall in love with him, or she fell in love with like a million other people, something like that. So-
- LFLex Fridman
You're, you're the jealous type. I see.
- AHAyanna Howard
(laughs) So-
- LFLex Fridman
We humans are the jealous type.
- AHAyanna Howard
Yes, so I-
- LFLex Fridman
(laughs)
- AHAyanna Howard
... I do believe that we can design systems where, uh, people would fall in love with their robot, with their, um, AI partner, um, that I do believe, um, because it's actually... and I won't, I don't like to use the word manipulate, but as we see, there are certain individuals that can be manipulated if you understand the cognitive science about it, right?
- LFLex Fridman
Right. So I mean, if you could think of all close relationship and love in general as a kind of mutual manipulation, that dance, the human dance, I mean, manipulation has a negative connotation, uh-
- AHAyanna Howard
And that's why I don't like to use that word-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... particularly.
- LFLex Fridman
I, I guess another way to phrase is you're getting at is it could be algorithmatized or something. It could be, uh...
- AHAyanna Howard
The relationship building part can be.
- LFLex Fridman
Yeah, yeah.
- AHAyanna Howard
I mean, just think about it. There... We have, and I, I don't use dating sites, but from what I heard, there are some individuals that have been dating that have never saw each other, right? In fact, there's a show, I think, that tries to, like, weed out fake people, like there's a show that comes out, right?
- LFLex Fridman
Yeah.
- AHAyanna Howard
Because, like, people start faking. Like, what's the difference of that person on the other end being an AI agent, right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
And having a communication and you building a relationship remotely, like there, there's no reason why that can't happen.
- LFLex Fridman
In terms of human-robot interaction, so what role... you've kind of mentioned with data emotion being... can be problematic if not implemented well, I suppose? What role does emotion and some other humanlike things, the imperfect things, come into play here for good human-robot interaction and something like-... love?
- AHAyanna Howard
Yeah, so in this case, and you had asked, can a AI agent love a human back? Um, I think they can emulate love back, right? And so, what does that actually mean? It just means that if you think about their programming, they might put the other person's needs in front of theirs in certain situations, right? You look at, think about it as return on investment. Like, was my return on investment, as part of that equation, that person's happiness, you know, has some type of, you know, algorithm weighting to it, and the reason why is because I care about them, right? That's the only reason, right? Um, but if I care about them, and I show that, then, uh, my final objective function is length of time of the engagement, right? So, you can think of how to do this, actually quite easily. And so-
- LFLex Fridman
But that's not love?
- AHAyanna Howard
Well, so that's the thing. Um, it, I think it emulates love because we don't have a classical definition of love.
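A minimal sketch of the "emulated caring" objective described a moment ago, where the agent's reward explicitly weights the partner's estimated happiness and the expected length of the engagement. The weights, state fields, and toy actions are invented for illustration.

```python
# Hypothetical objective in which the partner's wellbeing and engagement length
# both enter the agent's reward, so the agent sometimes puts the partner first.
HAPPINESS_WEIGHT = 0.7    # "I care about them": their wellbeing is part of my objective
ENGAGEMENT_WEIGHT = 0.3   # final objective: length of time of the engagement

def emulated_care_score(state):
    return (HAPPINESS_WEIGHT * state["partner_happiness"]
            + ENGAGEMENT_WEIGHT * state["expected_engagement_days"] / 365.0)

def choose_action(state, candidate_actions):
    """Pick the action whose predicted next state scores highest."""
    return max(candidate_actions, key=lambda act: emulated_care_score(act(state)))

# Two toy actions: keep working on the agent's own task vs. comfort the partner.
def keep_working(state):
    return {"partner_happiness": state["partner_happiness"] - 0.1,
            "expected_engagement_days": state["expected_engagement_days"] - 5}

def comfort_partner(state):
    return {"partner_happiness": min(1.0, state["partner_happiness"] + 0.2),
            "expected_engagement_days": state["expected_engagement_days"] + 10}

now = {"partner_happiness": 0.4, "expected_engagement_days": 200}
best = choose_action(now, [keep_working, comfort_partner])
print(best.__name__)   # -> comfort_partner
```

Whether behavior produced by such an objective counts as love, or only emulates it, is exactly the question the conversation turns to next.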
- LFLex Fridman
Right. But, uh, and we don't have the ability to look into each other's minds to see the algorithm, and w- I mean, I guess what I'm getting at is that, is it possible that especially if that's learned, especially if there's some mystery and black box nature to the system, how is that, you know-
- AHAyanna Howard
How is it any different?
- LFLex Fridman
How is it any different in terms of, sort of, if the system says, "I'm conscious. I'm afraid of death," and it does indicate that it loves you. A- another way to sort of phrase it, I'd be curious to see what you think, do you think there'll be a time when robots should have rights? You've kind of phrased the robot in a very roboticist way, and just a really good way by saying, "Okay, well, there's an objective function, and I can see how you can create a compelling human/robot interaction experience that makes you believe that the robot cares for your needs, and even something like loves you." But what if the robot says, "Please don't turn me off?" What if the robot starts making you feel like there's an entity, a being, a soul there, right? Do you think there'll be a future, hopefully you won't laugh, uh, too much at this, but th- where there's a due, uh, ask for rights?
- AHAyanna Howard
So, I can see a future if we don't address it in the near term, where these agents as they adapt and learn could say, "Hey, this should be something that's fundamental." Um, I hopefully think that we would address it before it gets to that point.
- 1:25:01 – 1:32:22
Why do so many robotics companies fail?
- LFLex Fridman
put. Just out of curiosity, uh, Anki, Jibo, Mayfield Robotics, uh, with the robot Kuri, CyPhy Works, Rethink Robotics, were all these amazing robotics companies led, created by incredible roboticists, and they've all gone out of business recently. Why do you think they didn't last longer? Why is it so hard to run a robotics company, especially one like these which are fundamentally HRI, human-robot interaction robots?
- AHAyanna Howard
Yeah. Each-
- LFLex Fridman
Or person robots.
- AHAyanna Howard
... each one has a story. Only one of them I don't understand.... um, and that was Anki. That's actually-
- LFLex Fridman
Yeah.
- AHAyanna Howard
... the only one I don't understand.
- LFLex Fridman
I don't understand it either. It's, uh-
- AHAyanna Howard
No, no, I mean, I look, like, from the outside, you know, I've looked at the, their sheets. I've looked, like, the data that's pro-
- LFLex Fridman
Oh, you mean, like, business-wise-
- AHAyanna Howard
Yeah.
- LFLex Fridman
... you don't understand? Oh, gotcha.
- AHAyanna Howard
Uh, uh, yeah. And, like, I, I look at all, I look at that data, and I'm like, "They seem to have, like, product-market fit." Like-
- LFLex Fridman
Yeah.
- AHAyanna Howard
So that's the only one I don't understand. The rest of it was product-market fit.
- LFLex Fridman
What's product-market feat- fit? Just, just out of, like, how do you think about it?
- AHAyanna Howard
Yeah, so although we, we think robotics was getting there, right?
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
But I think it's just the timing. It just, their, their clock just timed out. I think if they had been given a couple more years, it, they would've been okay.
- LFLex Fridman
Mm-hmm.
- AHAyanna Howard
Um, but the other ones were still fairly early by the time they got into the market. And so product-market fit is, um, I have a product that I want to sell at a certain price. Are there enough people out there, the market, that are willing to buy the product at that market price for me to be a functional, viable, profit-bearing company?
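A back-of-the-envelope sketch of the product-market-fit question posed here: at a given price, are there enough willing buyers to cover costs before the funding clock runs out? Every number below is made up for illustration.

```python
# Hypothetical product-market-fit arithmetic for a consumer robot.
def is_viable(price, unit_cost, willing_buyers_per_year, fixed_costs_per_year):
    gross_margin = price - unit_cost
    annual_profit = gross_margin * willing_buyers_per_year - fixed_costs_per_year
    return annual_profit > 0, annual_profit

def runway_years(cash_on_hand, annual_profit):
    """How long the clock runs if the company is still losing money."""
    return float("inf") if annual_profit >= 0 else cash_on_hand / -annual_profit

viable, profit = is_viable(price=700, unit_cost=400,
                           willing_buyers_per_year=50_000,
                           fixed_costs_per_year=20_000_000)
print(viable, profit)                    # False, -5,000,000: the market isn't big enough yet
print(runway_years(15_000_000, profit))  # 3.0 years before "the clock times out"
```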
Episode duration: 1:39:56