Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | Lex Fridman Podcast #24
- 0:00 – 15:00
- Lex Fridman
The following is a conversation with Rosalind Picard. She's a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica. Over two decades ago, she launched the field of affective computing with her book of the same name. This book described the importance of emotion in artificial and natural intelligence, and the vital role emotional communication plays in relationships between people in general, and in human-robot interaction. I really enjoyed talking with Roz over so many topics, including emotion, ethics, privacy, wearable computing, and her recent research in epilepsy, and even love and meaning. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now, here's my conversation with Rosalind Picard. More than 20 years ago, you coined the term affective computing and led a lot of research in this area since then. As I understand it, the goal is to make the machine detect and interpret the emotional state of a human being, and adapt its behavior based on that emotional state. So, how has your understanding of the problem space defined by affective computing changed in the past 24 years? The scope, the applications, the challenges, uh, what's involved. How has that evolved over the years?
- Rosalind Picard
Yeah. Actually, originally when I defined the term affective computing, it was a bit broader than just recognizing and responding intelligently to human emotion, although those are probably the two pieces that we've worked on the hardest. The original concept also encompassed machines that would have mechanisms that functioned like human emotion does inside them.
- Lex Fridman
Ah.
- Rosalind Picard
It would be any computing that relates to, arises from, or deliberately influences human emotion. So, the human-computer interaction part is the part that people tend to see. Like, if I'm, you know, really ticked off at my computer and I'm scowling at it and I'm cursing at it and it just keeps acting smiling and happy like that little paper clip used to do...
- Lex Fridman
Yeah.
- Rosalind Picard
... like dancing, winking, um, that kind of thing just makes you even more frustrated, right? And I thought, "That stupid thing needs to see my affect. And if it's gonna be intelligent," which Microsoft researchers had worked really hard on, it actually had some of the most sophisticated AI in it at the time, "If that thing's gonna actually be smart, it needs to respond to me and you."
- Lex Fridman
Mm-hmm.
- Rosalind Picard
"And we can, um, send it very different signals."
- Lex Fridman
So by the way, just a quick interruption. The Clippy, maybe it's in Word, uh, 95, 98...
- Rosalind Picard
Mm-hmm.
- Lex Fridman
I don't remember when it was born. But, uh, m- many people... Do you find yourself with that reference, that people recognize what you're talking about still to this point?
- Rosalind Picard
The... I don't expect the newest students to these days, but I've mentioned it to a lot of audiences, like, "How many of you know this Clippy thing?" And still the majority of people seem to know it.
- Lex Fridman
So Clippy kind of looks at maybe natural language processing, what you were typing, and tries to help you complete, I think. I, I don't even remember what Clippy wa- was except annoying. (laughs)
- Rosalind Picard
Yeah. It, it tried... Some people actually liked it.
- Lex Fridman
Well, I miss it.
- Rosalind Picard
I would hear those stories. You miss it?
- Lex Fridman
Well, I miss the... the, the annoyance. It felt like, uh, there's a, there's an element-
- Rosalind Picard
Someone was there.
- Lex Fridman
Somebody was there, and we were in it together.
- Rosalind Picard
Yeah.
- Lex Fridman
And they were annoying.
- Rosalind Picard
Yeah.
- Lex Fridman
It's like a, it's like a puppy that just doesn't get it.
- Rosalind Picard
Right.
- Lex Fridman
That keeps ripping up the couch kind of thing.
- Rosalind Picard
And in fact, they could have done it smarter, like a puppy. If, when you yelled at it or cursed at it, it had put its little ears back and its tail down and slunk off, probably people would have wanted it back, right? But instead, when you yelled at it, what did it do? It smiled, it winked, it danced, right?
- Lex Fridman
Yeah.
- Rosalind Picard
If somebody comes to my office and I yell at them and they started smiling, winking, and dancing, I'm like, "I never want to see you again." So Bill Gates got a standing ovation when, uh, he said it was going away-
- Lex Fridman
(laughs)
- Rosalind Picard
... 'cause people were so ticked. It was so emotionally unintelligent, right? It was intelligent about whether you were writing a letter, what kind of help you needed for that context. It was completely unintelligent about, "Hey, if you're annoying your customer, don't smile in their face when you do it." So that kind of mismatch was something that developers just didn't think about. And intelligence at the time was really all about, uh, math and language and chess and, you know, games, uh, problems that could be pretty well-defined. Social-emotional interaction is much more complex than chess or Go or any of the games that people are trying to solve. And understanding that requires skills that most people in computer science were actually lacking personally.
- 15:00 – 30:00
- Rosalind Picard
and in certain cases, like characterizing them for a job and other opportunities. So I also think that when we're reading emotion that's predictive around mental health, even though it's not medical data, it should get the kinds of protections that our medical data gets. Um, what most people don't know yet is that right now, with your smartphone use, and if you're wearing a sensor, and you wanna learn about your stress and your sleep and your physical activity and how much you're using your phone and your social interaction, all of that non-medical data, when we put it together with machine learning, now called AI even though the founders of AI wouldn't have called it that, uh, that capability can not only tell that you're calm right now or that you're getting a little stressed, uh, but it can also predict how you're likely to be tomorrow, if you're likely to be sick or healthy, happy or sad, stressed or calm.
- Lex Fridman
Especially when you're tracking data over time.
- Rosalind Picard
Especially when we're tracking a week of d- your data or more.
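The kind of inference described here can be illustrated with a minimal sketch. Everything below is hypothetical: the feature names (sleep, phone use, steps, skin conductance), the synthetic data, and the "stressed tomorrow" label are invented for the example; the group's actual models are certainly far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# One row per person-day of hypothetical, non-medical signals (z-scored):
# [sleep_hours, phone_use_hours, step_count, skin_conductance]
n_days = 200
X = rng.normal(size=(n_days, 4))

# Synthetic label "stressed tomorrow", driven by construction by
# short sleep and heavy phone use, plus noise.
y = (X[:, 1] - X[:, 0] + 0.3 * rng.normal(size=n_days) > 0).astype(float)

# Tiny logistic regression fit by gradient descent (no ML library needed).
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted P(stressed tomorrow)
    w -= 0.1 * X.T @ (p - y) / n_days     # gradient step on the log loss

accuracy = np.mean((X @ w > 0) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

With a week or more of real per-day features, this same shape of model is what turns "how you are now" into "how you're likely to be tomorrow", which is the reason Picard argues such non-medical data deserves medical-grade protection.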
- Lex Fridman
Do you have an optimism towards... You know, a lot of people are worried about the cameras on our phones that are looking at us. For the most part, on balance, are you optimistic about the benefits that can be brought from that camera that's looking at billions of us, or-
- Rosalind Picard
Uh...
- Lex Fridman
... should we be more worried?
- Rosalind Picard
I think we should be a little bit more worried about who's looking at us and listening to us. The device sitting on your countertop in your kitchen, whether it's, you know, Alexa or Google Home or Apple Siri, uh, these devices want to listen. They say, ostensibly, it's to help us, and I think there are great people in these companies who do wanna help people. L- let me not brand them all bad. Uh, I'm a user of products from all of these companies I'm naming, all the A companies. Alphabet, (laughs) Apple-
- Lex Fridman
Mm-hmm.
- Rosalind Picard
... Amazon. They are awfully big companies, right? They have incredible power. And, you know, what if, what if China were to buy them, right, and suddenly all of that data were not part of free America, but all of that data were part of somebody who just wants to take over the world and you submit to them? And guess what happens if you so much as smirk the wrong way-
- Lex Fridman
Mm.
- Rosalind Picard
... when they say something that you don't like. Well, they have reeducation camps, right?
- Lex Fridman
Right.
- Rosalind Picard
That's a nice word for them. By the way, they have a surplus of organs for people who have surgery these days. They don't have an organ donation problem, 'cause they take your blood and they know you're a match. And the doctors are on record of, um, taking organs from people who are perfectly healthy and not, uh, prisoners. They're just simply not the favored ones of the government. Uh, and, you know, that's a pretty freaky, evil society. And we can use the word evil there.
- Lex Fridman
I was born in the Soviet Union. I can certainly connect to the, uh, to the worry that you're expressing. At the same time, probably both you and I, and you very much so... You know, there's an exciting possibility that you can have a deep connection with a machine. (laughs)
- Rosalind Picard
Yeah, yeah.
- Lex Fridman
Right? So, uh...
- Rosalind Picard
(laughs)
- Lex Fridman
(laughs)
- Rosalind Picard
Those of us, um, I've admitted students who say that they, you know, when you list, like, "Who do you most wish you could have lunch with or dinner with," right? And they'll write, like, "I don't like people."
- Lex Fridman
(laughs)
- Rosalind Picard
"I just like computers." And one of them said to me once when I had this party at my house, you know, "I want you to know this is my only social event of the year."
- Lex Fridman
(laughs)
- Rosalind Picard
"My one social event of the year." (laughs) Like, okay. Now, this is a brilliant machine learning person, right? And we need that kind of brilliance in machine learning. And I love that computer science welcomes people who love people and people who are very awkward around people. I love that this is a field that anybody could join.
- Lex Fridman
Right.
- Rosalind Picard
Uh, we need all kinds of people. Um, and you don't need to be a social person. I'm not trying to force people who don't like people to suddenly become social. At the same time, if most of the people building the AIs of the future are the kind of people who don't like people, (laughs) we've got a little bit of a problem.
- Lex Fridman
Ho- hold on a second. So let me, let me push back on that. So, uh, don't you think a large percentage of the world can... You know, there's loneliness. There is, uh-
- Rosalind Picard
There's a huge problem with loneliness, and it's growing.
- Lex Fridman
And so there's a longing for connection. Do you, uh, e-
- Rosalind Picard
If you're lonely, you're part of a big and growing group. (laughs)
- Lex Fridman
Yes. So we're- we're- we're in it together, I guess. (laughs) If you're lonely, uh, join a group.
- 30:00 – 45:00
- Rosalind Picard
use AI to en- extend your power and your scale to su- to force people into submission. Uh, if you believe that the human race is better off being given freedom and the opportunity to do things that might surprise you, uh, then you wanna use AI to extend people's ability, to build... You wanna build AI that extends human intelligence, that empowers the weak and helps balance the power between the weak and the strong, not that gives more power to the strong.
- Lex Fridman
So, in, in this process of, um, empowering people and sensing people, uh... what is your sense on emotion in terms of recognizing emotion? The difference between emotion that is shown and emotion that is felt? So, uh, yeah, emotion that is expressed on the surface through your face, your body, and various other things, and what's actually going on deep inside on the biological level, on the neuroscience level, or some kind of cognitive level.
- Rosalind Picard
Yeah. Yeah. Whoa, no easy questions here. (laughs)
- Lex Fridman
Well, it's like, yeah, I'm sure there's no-
- Rosalind Picard
Yeah.
- Lex Fridman
... an- there's no definitive answer, but what's your sense? How far can we get by just looking at the face? Can you get-
- Rosalind Picard
We're very limited when we just look at the face, but we can get further than most people think we can get. People think, "Hey, I have a great poker face, therefore all you're ever gonna get from me is neutral." Well, that's naive. We can read with the ordinary camera on your laptop or on your phone, we can read from a neutral face if your heart is racing. We can read from a neutral face if your breathing is becoming irregular and showing signs of stress. We can read, uh, under some conditions that maybe I won't give you details on, um, how your heart rate variability power is changing. That could be a sign of stress even when your heart rate is not necessarily accelerating. So-
- Lex Fridman
I'm sorry, from physio sensors or from the face?
- Rosalind Picard
From the color changes-
- Lex Fridman
The color changes?
- Rosalind Picard
... that you cannot even see, but the camera can see.
- Lex Fridman
Mm-hmm. That's amazing. So, so you can get a lot of signal, but-
- Rosalind Picard
So we get things people can't see-
- Lex Fridman
Right.
- Rosalind Picard
... using a regular camera. And from that, we can tell things about your stress. So if, um, you were just sitting there with a blank face thinking, "Nobody can read my emotion," well, you're wrong.
- Lex Fridman
Right. So, so that's really interesting. But that's from sort of visual information from the face that's almost like cheating your way to th- the physiological state of the body by being very clever with what you can do with vision.
- Rosalind Picard
With, with signal processing. Yeah.
- Lex Fridman
With signal processing.
- Rosalind Picard
Yeah.
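The "color changes plus signal processing" idea can be sketched roughly in code. This is a toy illustration of remote pulse estimation, not the actual method used by Picard's group: assume a face tracker has already produced the mean green-channel value of the skin region for each video frame, then look for the dominant frequency in a plausible pulse band.

```python
import numpy as np

def estimate_pulse_bpm(green_means, fs=30.0):
    """Toy remote-pulse estimate from per-frame mean green values of a
    tracked face region, sampled at fs frames per second."""
    x = np.asarray(green_means, dtype=float)
    # Remove the mean and slow linear drift (lighting changes, head motion).
    idx = np.arange(len(x))
    x = x - np.polyval(np.polyfit(idx, x, 1), idx)
    # Take the peak of the power spectrum within 0.7-4.0 Hz (42-240 BPM).
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Demo on synthetic data: a 72 BPM (1.2 Hz) ripple far too small to see,
# buried under a much larger lighting drift.
fs = 30.0
t = np.arange(int(fs * 10)) / fs                      # 10 s of 30 fps video
trace = 120.0 + 0.8 * t + 0.02 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_pulse_bpm(trace, fs)))           # -> 72
```

Real systems in the remote-photoplethysmography literature add skin segmentation, chrominance combinations of all three color channels, and motion rejection; the spectral peak above is only the core intuition behind reading a racing heart off a neutral face.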
- Lex Fridman
So that's, uh, really impressive. But if you just look at the stuff we humans can see, uh, the-
- Rosalind Picard
Yeah.
- Lex Fridman
... the poker f- the smile, the smirks, the- the subtle, all the facial expressions.
- Rosalind Picard
Right. So then you can hide that on your face for a limited amount of time. Now, if you, if you're just going in for a brief interview and you're hiding it, that's pretty easy for most people. If you are, however, surveilled constantly everywhere you go, then it's gonna say, "Gee, you know, Lex used to smile a lot, and now I'm not seeing so many smiles." And, "Roz used to, um, you know, laugh a lot and smile a lot very spontaneously, and now I'm only seeing these not so spontaneous looking smiles."
- Lex Fridman
Yeah.
- Rosalind Picard
"And only when she's asked these questions." You know, that's... something's changed here.
- Lex Fridman
Probably not getting enough sleep is part of it. (laughs)
- Rosalind Picard
Um, we could look at that too. So, now, I have to be a little careful. When I say "we can read your emotion" and you think we can't, it's not that binary. What we're reading is more some physiological changes that relate to your activation.
- Lex Fridman
Mm-hmm.
- Rosalind Picard
Now, that doesn't mean that we know everything about how you feel. In fact, we still know very little about how you feel. Your thoughts are still private. Your, uh, nuanced feelings are still completely private. We can't read any of that. So, there's some relief that we can't read that, even brain imaging can't read that, um, wearables can't read that. However, as we read your body state changes, and we know what's going on in your environment, and we look at patterns of those over time, we can start to, uh, make some inferences about what you might be feeling. And that is where it's not just the momentary feeling, but it's more your stance toward things, and that could actually be a little bit more scary with certain kinds of, uh, governmental, uh, control freak people who want to know more about, are you on their team or are you not?
- Lex Fridman
And getting that information through over time, so you're saying there's a lot of signal-
- 45:00 – 59:55
- Rosalind Picard
multiple papers in top medical journals. Yeah. We have published peer-reviewed results in top medical journals like Neurology, our best results, and that's not good enough for the FDA.
- Lex Fridman
Is that system... So if we look at the peer review of medical journals, there's flaws, there's strengths. Is the FDA approval process, uh, how does it compare to the peer review process? Does it, uh, have the strength and, uh-
- Rosalind Picard
I'll take peer review over FDA any day. (laughs)
- Lex Fridman
But is that a good thing?
- Rosalind Picard
Yeah.
- Lex Fridman
Is that a good thing for FDA? Uh, you're saying, does it stop some amazing technology from getting through?
- Rosalind Picard
Yeah, it does. The, the FDA performs a very important good role in keeping people safe. They keep things, they, they put you through tons of safety testing, and that's wonderful, and that's great. I'm all in favor of the safety testing. Sometimes they put you through additional testing that they don't have to explain why they put you through it, and you don't understand why you're going through it, and it doesn't make sense, and that's very frustrating. Uh, and maybe they have really good reasons, and they just would, it would do people a service to articulate those reasons.
- Lex Fridman
Be more transparent.
- Rosalind Picard
Be more transparent.
- Lex Fridman
So as part of Empatica, we have sensors. So what kind of problems can we crack? What kind of things, uh, from seizures to autism, uh, to I think I've heard you mention depression-
- Rosalind Picard
Uh-huh.
- Lex Fridman
... what kind of things can we alleviate, can we detect? What's your hope of what, how we can make-
- Rosalind Picard
Yeah.
- Lex Fridman
... the world a better place with this wearable tech?
- Rosalind Picard
I would really like to see my, you know, fellow brilliant researchers step back and say, you know, what are the really hard problems that we don't know how to solve, that come from people maybe we don't even see in our normal life because they're living in the poorer places? They're stuck on the bus. They can't even afford the Uber or the Lyft or the data plan or all these other wonderful things we have that we keep improving on. Meanwhile, there's all these folks left behind in the world, and they're struggling with horrible diseases, with depression, with epilepsy, with diabetes, with just awful stuff that, uh, maybe a little more time and attention hanging out with them and learning, what are their challenges in life? What are their needs? How do we help them have job skills? How do we help them have a hope and a future and a chance to have the great life that so many of us building technology have? And then how would that reshape the kinds of AI that we build? How would that reshape the new, you know, apps that we build? Or maybe we need to focus on how to make things more low-cost and green instead of thousand-dollar phones. I mean, come on. You know? Why can't we be thinking more about things that do more with less for these folks? Quality of life is not related to the cost of your phone. It's been shown that above, what, about $75,000 of income, happiness is about the same, okay? However, I can tell you, you get a lot of happiness from helping other people. You get a lot more than $75,000 buys. So how do we connect up the people who have real needs with the people who have the ability to build the future, and build the kind of future that truly improves the lives of all the people that are currently being left behind?
- Lex Fridman
So let me return just briefly to a point about the movie Her.
- Rosalind Picard
Mm-hmm.
- Lex Fridman
So do you think, if we look farther into the future... You said so much of the benefit of making our technology more, uh, empathetic to us human beings would be to make them better tools, empower us, make our lives better. But if we look farther into the future, do you think we'll ever create an AI system that we can fall in love with, and that loves us back, on a level that is similar to human-to-human interaction... like in the movie Her, or beyond?
- Rosalind Picard
I think we can simulate it in ways that could, you know, sustain engagement for a while. Would it be as good as another person? I don't think so. For, if, if you're used to, like, good people. Uh, now, if you've just grown up with nothing but abuse and you can't stand human beings, can we do something that helps you there, that gives you something through a machine? Yeah. But that's pretty low bar, right, if you've only encountered pretty awful people. Uh, if you've encountered wonderful, amazing people, we're, we're nowhere near building anything like that. And I'm, I would not bet on building it. I would bet instead on building the kinds of AI that helps all, helps kind of raise all boats, that helps all people be better people, helps all people figure out if they're getting sick tomorrow, and helps give them what they need to stay well tomorrow. That's the kinda AI I wanna build, that improves human lives. Not the kinda AI that just walks on The Tonight Show and people go, "Wow, look how smart that is." You know? Really? Like, and then it goes back in a box, you know?
- Lex Fridman
(laughs) So on that point, if we continue looking a little bit into the future, uh, do you think an AI that's empathetic and, uh, does improve our lives needs to have a physical presence, a body? And even, let me, uh, cautiously say the C-word, consciousness, and, uh, even fear of mortality? So some of those human characteristics. Do you think it needs to have those aspects, or can it remain simply a machine learning tool that learns from data of behavior, that learns, based on previous patterns, to make us feel better? Or does it need those elements of consciousness and-
- Rosalind Picard
It, it depends on your goals. If you're making a movie, it needs a body. It needs a gorgeous body. It needs to act like it has consciousness. It needs to act like it has emotion, right? Because that's what sells, that's what's gonna get me to show up and enjoy the movie, okay? Um, in real life, does it need all that? Well, if you've read Orson Scott Card, Ender's Game, Speaker for the Dead, you know, it could just be like a little voice in your ear, right? And you could have an intimate relationship, and it could get to know you, and it doesn't need, uh, to be a robot. Uh, but that doesn't make as compelling of a movie, right?
- Lex Fridman
Right.
- Rosalind Picard
I mean, we already think it's kinda weird when a guy looks like he's talking to himself on the train-
- Lex Fridman
Right.
- Rosalind Picard
... you know, even though it's earbuds. So we have these... Embodied is more powerful. Embodied, when you compare interactions with an embodied robot versus a video of a robot versus no robot, uh, the robot is more engaging. The robot gets our attention more. The robot, when you walk in your house, is more likely to get you to remember to do the things that you asked it to do-
- Lex Fridman
Mm-hmm.
- Rosalind Picard
... because it's kinda got a physical presence.
- Lex Fridman
Mm-hmm.
- Rosalind Picard
You can avoid it if you don't like it. (laughs) It could see you're avoiding it. There's a lot of power to being embodied. There will be embodied AIs. They have great power, and opportunity, and potential. There will also be AIs that aren't embodied, that just are little software assistants that help us with different things, that may get to know things about us. Will they be conscious? There will be attempts to program them to make them appear to be conscious. We can already write programs that make it look like, "Uh, what do you mean? Of course I'm aware that you're there," right? I mean, it's trivial to say stuff like that. It's, it's easy to fool people. Uh, but does it actually have conscious experience like we do? Nobody has a clue how to do that yet. That seems to be something that is beyond what any of us knows how to build now. Will it have to have that? Uh, I think you can get pretty far with a lot of stuff without it. Um, will we accord it rights? Well, that's more a political game than it is, uh, a question of real consciousness.
- Lex Fridman
Yeah. Can you go to jail for turning off Alexa is what, uh, is the que- is the question for, uh, an, an election maybe a few, uh, decades from now.
Episode duration: 1:00:11