Charles Isbell: Computing, Interactive AI, and Race in America | Lex Fridman Podcast #135
EVERY SPOKEN WORD
150 min read · 30,070 words
- 0:00 – 2:36
Introduction
- LFLex Fridman
The following is a conversation with Charles Isbell, dean of the College of Computing at Georgia Tech, a researcher, an educator in the field of artificial intelligence, and someone who deeply thinks about what exactly is the field of computing and how do we teach it. He also has a fascinatingly varied set of interests, including music, books, movies, sports, and history that make him especially fun to talk with. When I first saw him speak, his charisma immediately took over the room, and I had a stupid excited smile on my face and I knew I had to eventually talk to him on this podcast. Quick mention of each sponsor, followed by some thoughts related to the episode. First is Neuro, the maker of functional sugar-free gum and mints that I use to give my brain a quick caffeine boost. Second is Decoding Digital, a podcast on tech and entrepreneurship that I listen to and enjoy. Third is MasterClass, online courses that I watch from some of the most amazing humans in history. And finally, Cash App, the app I use to send money to friends for food and drinks. Please check out these sponsors in the description to get a discount and to support this podcast. As a side note, let me say that I'm trying to make it so that the conversations with Charles, Eric Weinstein, and Dan Carlin will be published before Americans vote for president on November 3rd. There's nothing explicitly political in these conversations, but they do touch on something in human nature that I hope can bring context to our difficult time, and maybe, for a moment, allow us to empathize with people we disagree with. With Eric, we talk about the nature of evil. With Charles, besides AI and music, we talk a bit about race in America and how we can bring more love and empathy to our online communication. 
And with Dan Carlin, well, we talk about Alexander the Great, Genghis Khan, Hitler, Stalin, and all the complicated parts of human history in between, with a hopeful eye toward a brighter future for our humble little civilization here on Earth. The conversation with Dan will hopefully be posted tomorrow, on Monday, November 2nd. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman. And now, here's my conversation with Charles Isbell.
- 2:36 – 8:45
Top 3 movies of all time
- LFLex Fridman
You've mentioned that you love movies and TV shows. Let's, uh, ask an easy question, but you have to be definitively, objectively conclusive. What's your top three movies of all time?
- CICharles Isbell
So you're asking me to be definitive and to be conclusive. That's a little hard, and I'm gonna tell you why.
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
It's very simple. It's because, uh, movies is too broad of a category. I gotta pick sub-genres. But I will tell you that of those genres... I'll pick one or two from, from each of the genres, and I'll get us to three, so I'm gonna cheat. So my favorite comedy of all times, but probably my favorite movie of all time, is His Girl Friday, which is probably a movie that you've not ever heard of, but it's based on a play called The Front Page from, I don't know, early 1900s. Uh, and the movie is a fantastic film.
- LFLex Fridman
What's the story? What's the... Independent film?
- CICharles Isbell
No, no, no.
- LFLex Fridman
What are we talking about?
- CICharles Isbell
This is one, this is one of- of the movies that would have been very popular, it's a screwball comedy. You ever see Moonlighting, the TV show?
- LFLex Fridman
No.
- CICharles Isbell
You know what I'm talking about? So you've seen these shows where there's a man and a woman, and they clearly are in love with one another, and they're constantly fighting and always talking over each other.
- LFLex Fridman
Yeah.
- CICharles Isbell
Banter, banter, banter, banter, banter.
- LFLex Fridman
Yeah.
- CICharles Isbell
This was the movie that started all that, as far as- as I'm concerned. It's very much of its time, so it's, um, I don't know, must have come out sometime between 1934 and 1939. I'm not sure exactly when the movie itself came out.
- LFLex Fridman
Oh.
- CICharles Isbell
Uh, it's black and white. Um, it's- it's just a fantastic film and it's hilarious.
- LFLex Fridman
So it's mostly conversation?
- CICharles Isbell
Uh, not entirely, but mostly, mostly. Just a lot of back and forth. There's a story there. Someone's on death row and they're, um, they're, uh, newspaper men, including her. They're all newspaper men. Uh, they were divorced. The editor, or the publisher, I guess, and, uh, uh, the reporter, they were divorced, um, but, you know, they clearly... He's thinking trying to get back together, and there's this whole other thing that's going on. But none of that matters. The plot doesn't matter.
- LFLex Fridman
Yeah.
- CICharles Isbell
What matters is-
- LFLex Fridman
It's just a little play-
- CICharles Isbell
... the literal play.
- LFLex Fridman
... uh, in the conversation.
- CICharles Isbell
It's fantastic. And, uh, I just love everything about the conversation. Because at the end of the day, sort of narrative and conversation are the sort of things that drive me. And so I really, I really like that movie for that reason. Similarly, I'm now gonna cheat and I'm gonna give you two movies as one. Um, and they are Crouching Tiger, Hidden Dragon and John Wick.
- LFLex Fridman
Hmm.
- CICharles Isbell
Both relatively modern. John Wick, of course, is-
- LFLex Fridman
One, two, or three?
- CICharles Isbell
One. Uh, it gets increasingly... I love them all for different reasons, and increasingly more ridiculous.
- LFLex Fridman
Yeah.
- CICharles Isbell
Kind of like loving Alien and Aliens, despite the fact they're two completely different movies. But the reason I put Crouching, uh, Crouching Tiger, Hidden Dragon and John Wick together is 'cause I actually think they're the same movie, or what I like about them-
- 8:45 – 14:27
People are easily predictable
- LFLex Fridman
(laughs) Okay, that was a rhetorical question. Uh, you've also mentioned that you, um, I think enjoy all kinds of experiments, including on yourself. But, I, I saw a video where you said, uh, you did an experiment where you tracked all kinds of information about yourself.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
And, uh, a few others, the sort of, uh, wiring up your home, and th- this, this little idea that you mentioned in that video which is kinda interesting that you thought that two days worth of data is enough to capture a majority of the behavior of the human being.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
(laughs) First, can you describe (laughs) wh- what the heck you did, uh-
- CICharles Isbell
(laughs)
- LFLex Fridman
... to collect all that data, 'cause it's fascinating, just like little details of how you collect that data, and also what your intuition behind the two days is.
- CICharles Isbell
So, first off, it has to be the right two days. But, I was, I was thinking of a very specific experiment. There's actually a suite of them that I've been a part of, and other people have done this of course. I, I just sort of dabbled in that part of the world. Uh, but to be very clear, the specific thing that I was talking about, uh, had to do with recording all the IR going on in my, uh, infrared going on in my, my house. So, this is a long time ago, so this, everything's being controlled b- by, by pressing buttons, um, on remote controls as opposed to speaking to Alexa or Siri or, or someone like that. And I was just trying to figure out if you could get enough data on people to figure out what they were gonna do with their TVs or their lights. My house was completely wired up at the time, um, which, you know, what I'm about to p- look at a movie, or I'm about to turn on the TV or whatever, and just see what I could predict from it. (laughs) It was kind of surprising, it shouldn't have been, but that's all very easy to do by the way, just capturing all the little stuff is... I mean, it's a bunch of computer systems. It's really easy to capture the, if you know what you're looking for. Um, at Georgia Tech, long before I got there, we had this thing called the Aware Home, uh, where everything was wired up and you saw, you captured everything that was going on. Nothing even difficult, not with video or anything like that, just the way the, the system was just capturing everything. Um, so, uh, it turns out that, and I did this with myself and then I had students and they worked with oth- many other people. And it turns out at the end of the day, people do the same things over and over and over again. So it has to be the right two days, like a weekend. But, it turns out, not only can you predict what someone's going to do next, at the level of what button they're gonna press next-
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
... on a remote control, um, but you can do it with something really, really simple, like a, you don't even need a hidden Markov model. It's like a mar- just simply, I press this, this is my prediction of the next thing, and it turns out you can get 93% accuracy just w- by doing something very simple and stupid and just counting, counting statistics. What was actually more interesting, is that you could use that information. This comes up again and again in my work, um, if you try to represent people or objects, uh, by the things they do, the things you can measure about them that have to do with action in the world, uh, so a distribution over actions, and you try to represent them by the distribution of actions that are done on them, then you do a pretty good job of, sort of understanding how people are, and they cluster remarkably well. Um, in fact, irritatingly so. Uh, and so by clustering people this way, you can, uh, maybe, you know, I got the 93% accuracy of what's the next button you're gonna press, but I can get 99% accuracy, or somewhere thereabouts, on the collections of things you might press. And it turns out, the things that you might press, were all related to number, to each other in exactly the ways you would expect. So for example, all the key- all the numbers on a keypad, it turns out, all have the same behavior, with respect to you as a human being. And so you would naturally cluster them together, and dis- you discover that numbers are all, are related to one another in some way, and all these other things. And then, and here's the part that I think is im- important. I mean, you can see this in, in all kinds of things. 
Every individual is different, but any given individual is remarkably predictable, um, because you keep doing the same things over and over again. And the two things that I've learned in the long time that I've been thinking about this is, people are easily predictable and people hate when you tell them that they're easily predictable.
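The "simple and stupid... counting statistics" predictor Isbell describes can be sketched roughly as follows. This is a hypothetical illustration with made-up button names and data, not his actual system: predict the next button press as the one most frequently observed to follow the current press, using plain conditional frequency counts rather than a hidden Markov model.

```python
from collections import Counter, defaultdict

class NextButtonPredictor:
    """Predict the next button press from conditional frequency counts."""

    def __init__(self):
        # counts[a][b] = how often press b was observed to follow press a
        self.counts = defaultdict(Counter)

    def train(self, presses):
        # Count each consecutive pair (previous press, next press).
        for prev, nxt in zip(presses, presses[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, press):
        # Return the most common successor of `press`, or None if unseen.
        following = self.counts[press]
        return following.most_common(1)[0][0] if following else None

# Toy remote-control log (invented data):
log = ["power", "guide", "2", "7", "enter", "power", "guide", "2", "3", "enter"]
model = NextButtonPredictor()
model.train(log)
print(model.predict("guide"))  # → 2
```

The same counts also give the "distribution over actions" representation he mentions: normalizing each row of `counts` yields a per-button distribution of what follows it, and buttons with similar distributions (like the digits on a keypad) would cluster together.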
- LFLex Fridman
(laughs)
- CICharles Isbell
But they are.
- LFLex Fridman
Yeah.
- CICharles Isbell
And there you go.
- LFLex Fridman
Yeah. What about... Let me, uh, play devil's advocate and s- philosophically speaking, is it possible to say that what defines humans is the outlier? So, even though 90... some large percentage of our behaviors, whatever the signal we measure is the same and it would cluster nicely-
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
... but maybe it's the special moments of when we break out of the routine is the definitive things, and the way we break out of that routine for each one of us might be different?
- CICharles Isbell
It's possible. I would say that the... I would say it a little differently, I think. I would make two things. One is, uh, uh, I'm gonna disagree with the premise, I think. Uh, but that's fine.
- LFLex Fridman
(laughs)
- CICharles Isbell
Uh, I think the way I would put it is, uh, there are people who are very different from lots of other people, but they're not 0%. They're closer to 10%, right? So in fact, even if you do this kind of clustering of people that'll turn out to be the small number of people, they all behave like each other, even if they individually behave very differently from, from, from everyone else. So I think that's kind of important. But what you're really asking, I think, and, and I think this is a really good question, is, you know, what do you do when you're faced with the situation you've never seen before? What do you do when you're faced with an extraordinary situation maybe you've seen others do and you're actually forced to do something, and you react to that very differently? And that is the thing that makes you human. I would agree with that, at least at a philosophical level, that it's the, the times when you are faced with something difficult, a decision that you have to make, uh, where the answer isn't easy, even if you know what the right answer is, that's sort of what defines you as the individual, and I think what defines people, people broadly. It's the hard problem, it's not the easy problem. It's the thing that's gonna hurt you, it's not the thing, um... It's not even that it's difficult, it's just that you know that the outcome is going to be highly suboptimal for you. And I do think that that's a reasonable place to start for the question of what makes us human.
- 14:27 – 26:13
Breaking out of our bubbles
- LFLex Fridman
So before we talk about, sort of explore the different ideas underlying interactive artificial intelligence, which we are working on, let me just go along this thread, uh, uh, to skip to kind of our world of social media, which is something that, uh, uh, at least on the artificial intelligence side, you think about. There's a popular narrative, I don't know if it's true, but that we have these silos in, in social media and we have these clusterings, as you're kind of mentioning. And the idea is that, you know, uh, along that narrative is that, you know, we wanna, we wanna break each other out of those silos so we can be empathetic to other people, to... if you're a, a Democrat, you're empathetic to the Republican, if you're a Republican, you're empathetic Dem- Democrat. Those are just two silly bins that we seem to be very, uh, excited about, but there's other binnings that we can think about.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
Is there, from an artificial intelligence perspective, 'cause, 'cause you're just saying we cluster along the data-
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
... but then interactive artificial intelligence is, is referring to throwing agents into that mix, AI systems into that mix, uh, helping us interacting with us humans and maybe getting us out of those silos. Is that something that you think is possible? Do you see a hopeful possibility for artificial intelligence systems in these large networks of people to get us outside of our habits, in at least the idea space, to where we can sort of, um, be empathetic to other people's lived experiences, other people's points of view, you know, all that kind of stuff?
- CICharles Isbell
Yes. And I actually don't think it's that hard. Well, it's, it's not hard in this sense. So imagine that you can, um... well, let, let's just, let's make life simple for a minute. Let's assume that you can do a kind of partial ordering over ideas or clusterings of behavior. It, it doesn't even matter what I mean here, so long as there's some way that this is a cluster, this is a cluster, there's some edge between them, right? And this is kind of... they don't quite touch even, or maybe they come very close. If you can imagine that conceptually, then the way you get from here to here is not by going from here to here. The way you get from here to here is you find the edge-
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
... and you move slowly together, right? And I think that machines are actually very good at that sort of thing once we kind, kinda define the problem, either in terms of behavior or ideas or words or whatever. So it's, it's easy in the sense that if you already have the network and you know the relationships, you know, the edges and sort of the strengths on them and you kinda have some semantic meaning for them, the machine doesn't have to, you do as the designer, uh, then yeah, I think you can kind of move people along and sort of expand them. But it's harder than that. And the reason it's harder than that, um, uh, or sort of coming up with the network structure itself is hard, is because I'm gonna tell you a story that I, I, someone else told me, and I don't... I may get some of the details a little bit wrong, but it's, it's roughly, it roughly goes like this. You take two sets of people from the same backgrounds and you want them to solve a problem. So you separate them up, which we do all the time, right? "Oh, you know, we're gonna break out in the, we're gonna break out groups. You're gonna go over there and you're gonna talk about this, you're gonna go over there and you're gonna talk about this." And then you have them, uh, sort of in this big room, but far apart from one another, and you have them sort of interact with one another. When they come back to talk about what they learned, you wanna merge what they've done together, it can be extremely hard because they don't, they basically don't speak the same language anymore. Like, when you create these problems and you dive into them, you create your own language. So the example this one person gave me, which I, I found kind of interesting 'cause we were in the middle of that at the time, was they're sitting over there and they're talking about this, these rooms that you can see, but you're seeing them from different vantage points depending upon which side of the room you're on. 
They can see a clock very easily, and so they start referring to the room as the one with the clock.
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
This group over here looking at the same room, they can see the clock, but it's, you know, not in their line of sight or whatever, so they end up, um, referring to it by some other way. When they get back together and they're talking about things, they're referring to the same room, and they don't even realize they're referring to the same room. In fact, this group doesn't even see that there's a clock there, and this group doesn't see whatever it is. The clock on the wall is the thing that stuck with me. So if you create these different silos, the problem isn't that the ideologies disagree, it's that... you're using the same words and they mean radically different things. The hard part is just getting them to agree on the, well, the, maybe we'd say the axioms in our world, right? But, you know, just get them to agree on some basic definitions, because right now they talk- they're talking past each other, just completely talking past each other. That's the hard part. Getting them to meet, getting them to interact, that may not be that difficult. Getting them to see where their language is leading them to lead past one another, that's, that's the hard part.
- LFLex Fridman
It's a really interesting question to me. It could be on the layer of language, but it feels like there's multiple layers to this. Like, it could be worldview. It could be... I mean, it all boils down to empathy, being able to put yourself in the shoes of the other person-
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
... to learn the language, to learn, like, visually how they see the world, to learn, like, the, I mean, I- I experience this now with, with trolls, uh, the- the degree of humor in that world. For example, I talk about love a lot. I'm very, like, I've... Uh, I'm really lucky to have this amazing community of loving people, but whenever I encounter trolls, they always roll their eyes at the idea of love because it's so-
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
... quote unquote "cringe".
- CICharles Isbell
Yeah.
- LFLex Fridman
Uh, so, so they commu- they show love by, like, derision, I would say. And, uh, I think about, on the human level, that's a whole 'nother discussion, that's psychology, that's so- sociology, so on. But I wonder if AI systems can help somehow and j- bridge the gap of, "What is this person's life like?" Encourage me to just ask that question, to put myself in their shoes, to experience the agitations, the fears, the hopes they have, the, to experience, you know, the, to, even just to think about what- what was their upbringing like, like having a- a- a single parent home or a shitty education or all those kinds of things, just to put myself in that mind space. It- it feels like that's really important for us to, for, to- to bring those clusters together, to find that similar language. But it's unclear how AI can help that because it seems like AI systems need to understand both parties first.
- CICharles Isbell
So, you know, the word understand there is doing a lot of work, right?
- LFLex Fridman
Yeah.
- CICharles Isbell
So-
- LFLex Fridman
Yes.
- CICharles Isbell
... do you have to understand it or do you just simply have to note that there is something similar as a point to touch, right?
- LFLex Fridman
Yeah.
- CICharles Isbell
So, you know, (laughs) you- you- you use the word empathy, and- and I like that word for a lot of reasons. I think you're right in the way that you're using it and the way that you're describing it, but let's separate it from sympathy, right? So, you know, sympathy is feeling sort of for someone. Empathy is kind of understanding where they're coming from and how they, how they feel, right? And for most people, uh, those things go hand in hand. For some people, some are very good at empathy and very, very bad at sympathy. Some people cannot experi- Well, my observation would be, I'm not a psychologist. My observation (laughs) would be that some people seem incapable of feeling sympathy unless they feel empathy first. You can understand someone, understand where they're coming from and still think, "No, I- I- I can't support that."
- LFLex Fridman
Yeah.
- CICharles Isbell
Right? It doesn't mean that the only way, uh, because if that, if that isn't the case, then what it requires is that you, um, you must... The only way that you can... To understand someone means you must agree with everything that they do, which isn't right.
- LFLex Fridman
Right.
- CICharles Isbell
Right? And- and- and if the only way I can feel for someone is to completely understand them and make them like me in some way, well then we're lost, right? Because we're not all exactly like each other. I don't have to understand everything that you've gone through. It helps, clearly. But they're separable ideas, right? Even though they get clearly- clearly tangled up in one another. So what I think AI could help you do, actually, is if, and, you know, I'm- I'm being quite fanciful, as it were, but if you, if you think of these as kind of I understand how you interact, the words that you use, the dist- you know, the actions you take, I have some way of doing this. Let's not worry about what that is. Um, but I can see you as a kind of distribution of experiences and acts- actions taken upon you, things you've done and so on. And I can do this with someone else, and I can find the places where there's some kind of commonality, um, a mapping, as it were, even if it's not total, you know? The- If you- If I think of you as a distribution, right, then, you know, I can take the cosine of the angle between you, and if it's, you know-
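The "cosine of the angle" idea Isbell gestures at here can be sketched in a few lines. This is a hypothetical illustration (invented action names and data, not any real system): represent each person as a distribution over the actions they take, then measure alignment between two people as the cosine similarity of those distributions.

```python
import math
from collections import Counter

def action_distribution(actions):
    """Turn a list of observed actions into a probability distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {action: count / total for action, count in counts.items()}

def cosine_similarity(p, q):
    """Cosine of the angle between two distributions: 1.0 = identical direction."""
    shared = set(p) | set(q)
    dot = sum(p.get(a, 0.0) * q.get(a, 0.0) for a in shared)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Two made-up people with partially overlapping behavior:
alice = action_distribution(["post", "reply", "reply", "like"])
bob = action_distribution(["reply", "like", "like", "share"])
print(round(cosine_similarity(alice, bob), 3))  # → 0.667
```

Even with no total overlap in behavior, the similarity picks out the places where the two distributions touch, which is the "point to touch" he describes: commonality without requiring full understanding or agreement.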
- LFLex Fridman
(laughs)
- 26:13 – 32:45
Interactive AI
- LFLex Fridman
Your broad set of research interests fall under interactive AI, as, uh, I mentioned, which is a fascinating set of ideas, and you have some concrete things that you're particularly interested in, but maybe could you talk about how you think about the field of interactive artificial intelligence?
- CICharles Isbell
Sure. So let me say upfront that, uh, if you look at, certainly my early work, but even if you look at most of it, uh, I'm a machine learning guy, right? I do machine learning. First paper I ever published was in NIPS. Um, back then it was NIPS, now it's NeurIPS.
- LFLex Fridman
Yeah.
- CICharles Isbell
Uh, it's a long story there. Anyway, that's another thing.
- LFLex Fridman
(laughs)
- CICharles Isbell
But, so, so I'm a machine learning guy, right? I believe in data, I believe in statistics and, and all those kind of things.
- LFLex Fridman
Yes.
- CICharles Isbell
And the reason I'm bringing that up is even though I'm a newfangled statistical machine learning guy and al-... and have been for a very long time, the problem I really care about is AI.
- LFLex Fridman
Yeah.
- CICharles Isbell
Right? I care about artificial intelligence. I care about building, um, some kind of intelligent artifact, however that gets expressed, uh, that would be intelli- at least as intelligent as humans, um, and as interesting as humans, perhaps on their, on their, their sort of, in their own way.
- LFLex Fridman
So that's the deep underlying love and dream is the bigger, bigger AI.
- CICharles Isbell
Yeah.
- LFLex Fridman
It's the bigger-
- CICharles Isbell
It's the AI.
- LFLex Fridman
... whatever the heck that is.
- CICharles Isbell
Yeah, the machine learning in some ways is a means to the end. It is not the end. And I don't understand how y- one could be intelligent without learning, so therefore I gotta figure out how to do that, right? So it's important. But machine learning, by the way, is also a tool. I said statistical because that's what most people think of themselves, machine learning people. That's how they think. I think that Pat Langley might disagree, or at least 1980s Pat Langley might disagree, uh, with what it takes to do, to do machine learning. But I care about the AI problem, which is why it's interactive AI and not just interactive ML. I think it's important to, to understand that, that there's a long-term goal here, which I will probably never live to see but I would love to have been a part of, which is building something truly intelligent, um, outside of, outside of ourselves.
- LFLex Fridman
Can we take a tiny tangent or am I interrupting?
- CICharles Isbell
Sure.
- LFLex Fridman
Which is, is there something you can say, uh, concrete about the mysterious gap between the subset ML and the bigger AI?
- CICharles Isbell
Oh.
- LFLex Fridman
What's missing? What's, what do you think... I mean, obviously, it's a to- totally unknown, not totally, but in part unknown at this time, but is it something like with Pat Langley? Is, is it knowledge, like expert system reasoning type of kind of thing?
- CICharles Isbell
So, uh, AI is bigger than ML, but ML is bigger than AI.
- LFLex Fridman
(laughs)
- CICharles Isbell
Th- this is kind of the real, the, the, the real problem here is that they're really overlapping things that are really interested in slightly different problems. I tend to think of ML, and there are many people out there who are gonna be very upset at me about this, but I tend to think of ML being much more concerned with the engineering of solving a problem-
- LFLex Fridman
Right.
- CICharles Isbell
... um, and AI about the sort of more philosophical goal of, of true intelligence. And that's the thing that motivates me, even if I end up finding myself living in this kind of engineering-ish space.
- LFLex Fridman
Got it.
- CICharles Isbell
I've now made, made Michael Jordan upset.
- LFLex Fridman
(laughs)
- CICharles Isbell
But, you know, it's, it's, it's... To me, they just feel very different. You're just measuring them differently. Your, your, your, your sort of goals of where you're trying to be are, are somewhat different.
- 32:45 – 41:12
Lifelong machine learning
- CICharles Isbell
over time.
- LFLex Fridman
On, on the topic of adaptive modeling, and y- you talk about lifelong learning, which is a, I think a topic that's under-studied, or maybe 'cause nobody knows (laughs) what to do with it. Uh, but like, you know, if you look at Alexa or m- most of our artificial intelligence systems that are primarily machine learning based systems, or dialogue systems, all those kinds of things, they know very little about you.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
In the sense of the lifelong learning sense that, uh, we learn as humans, we l- learn a lot about each other, not in the quantity of facts, but like, the temporally rich set of information that seems to like pick up the cr- crumbs along the way that somehow seems to capture a person pretty well.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
Do you have any ideas how, how to, uh, how to do lifelong learning? Um, because it seems like most of the machine learning community does not.
- CICharles Isbell
Yeah. Oh, well, by the way, not only does the machine learning community not, not spend a lot of time on lifelong learning, they don't, I don't think they spend a lot of time on, um, learning period, in the sense that they tend to be very task-focused. Everybody is over-fitting to whatever problem is they happen to have. They're over-engineering their solutions to the task. Even the people, and I think these people do, um, are trying to solve a hard problem of transfer learning, right? "I'm gonna learn on one task, I'm gonna learn on the other task." I- it's, you still end up creating the task. You know, it's like looking for your keys where the light is 'cause that's where the light is, right? It's not because the keys have to be there. I mean, and one could argue that we tend to do this in general, we tend to kind of do, as a group, we tend to hill climb and get stuck in local optima. Um, and I think we do this in the small, the small as well. I think it's very hard to do. Um, because... (laughs) So, look, here's the hard thing about AI, right? The hard thing about AI is it keeps changing on us, right? You know, what is AI? AI is the, you know, the art and science of making computers act the way they do in the movies, right? That, that's what it is, right?
- LFLex Fridman
(laughs) That's a good definition.
- CICharles Isbell
And, but, but (laughs) , but beyond that, it's-
- LFLex Fridman
And they keep coming out with new movies, it's the problem. (laughs)
- CICharles Isbell
Yes, and they just, and it just... Right, exactly. We are driven by this kind of need to, uh, this sort of ineffable quality of who we are, which means that the moment you understand something, it's no longer AI, right? Well, like, we understand this, that's just, you take the derivative and you divide by two and then you, you average it out over time, and the window, so therefore, that's no longer AI. So the problem is unsolvable because it keeps kind of going away. This creates a kind of illusion, which I don't think is an entire illusion, of either there's very simple task-based things you can do very well and over-engineer, there's all of AI, and there's like nothing in the middle. Like, it's very hard to get from here to here, and it's very hard to see how to get from here to here. And I don't think that we've done a very good job of it, because we get stuck trying to solve the small problem that's in front of us, myself included. I'm not gonna pretend that I'm better at this than anyone else. Um, and of course, all the incentives in academia, um, and in industry, are set to make that very hard, 'cause you, you have to get the next paper out, you have to get the next product out, you have to solve this problem. And it's very sort of naturally incremental, and none of the, the incentives are set up to allow you to take a huge risk unless you're already so well-established you can take that big risk. Um, and if you're that well-established that you can take that big risk, then you have probably spent much of your career taking these little risks, relatively speaking. And so, you have got a lifetime of experience telling you not to take that particular big risk, right? So the whole system's set up to make progress very slow. And that's fine, it's just the way it is. But it does make this gap seem really big. Which is my long way of saying I don't have a great answer to it, except that, stop doing N equals one. 
At least try to get N equals two, and maybe N equals seven, so that you can say, "I'm gonna..." Or maybe T is a better variable here. I'm gonna not just solve this problem, I'm gonna solve this problem and another problem. I'm not gonna learn just on you, I'm gonna keep living out there in the world and just seeing what happens, so that we'll learn something as designers, and our machine learning algorithms, um, and our AI algorithms, can learn as well. But unless you're willing to build a system which you're gonna have live for months at a time in an environment that is messy and chaotic, that you cannot control, uh, then you're never going to make progress in that direction. So, I guess my answer to you is yes. My idea is that you should... It's not no, it's yes, you should be deploying these things and making them live for months at a time, and be okay with the fact that it's gonna take you five years to do this. Not re-running the same experiment over and over again, and refining the machine so it's slightly better at whatever, but actually having it out there, living in the chaos of the world, um, and seeing what its learning algorithm, say, can learn, what data structures it can build, and how it can go from there. Without that, you're gonna be stuck ultimately.
- LFLex Fridman
What do you think about the possibility of N equals one growing? This is probably a crude approximation, but growing, like, if we look at language models like GPT-3.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
If you just make it big enough, it'll swallow the world. Meaning, like, it'll solve all your T-to-infinity problems just by growing in size. Taking the small over-engineered solution and just pumping it full of steroids, in terms of compute, in terms of size of training data, in the Yann LeCun-style self-supervised, or OpenAI self-supervised.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
Just throw all of YouTube, uh, at it and it will learn how to reason, how to paint, how to create music, how to love, all of that by watching YouTube videos. (laughs)
- CICharles Isbell
I mean, I can't think of a more terrifying world to live in than a world that is-
- LFLex Fridman
(laughs)
- CICharles Isbell
... based on YouTube videos. But, yeah, I think... I just kind of don't think that'll quite... Well, it won't work that easily, right? You will get somewhere and you will learn something, which means it's probably worth it, but you won't get there. You won't solve the pr-... You know, here's the thing. We build these things and we say we want them to learn, but what actually happens... and let's say they do learn. I mean, certainly in every paper I've gotten published, the thing's learned. I don't know about anyone else.
- LFLex Fridman
(laughs)
- CICharles Isbell
Um, but they actually change us, right? We react to it differently, right? So we keep redefining what it means to be successful, both in the negative, in the other case, but also in the positive, in that, "Oh, well, this is then an accomplishment." I'll give you an example, which is like the one you just described with GPT-3. Let's get completely out of machine learn-... well, not completely, but mostly out of machine learning. Think about Google. People were trying to solve information retrieval, the ad hoc information retrieval problem, forever. I mean, the first major book I ever read about it was, what, um, '71, I think, was when it came out? Anyway, you know, we'll treat everything as a vector and we'll do these vector space models and whatever, and that was all great, and we made very little progress. I mean, we made some progress. And then Google comes and makes the ad hoc problem seem pretty easy. I mean, it's not. There's lots of computers and databases involved, and there's some brilliant algorithmic stuff behind it too, and some systems building. Um, but the problem changed, right? If you've got a world that's that connected, so that there are 10 million answers, quite literally, to the question that you're asking, then the problem wasn't, "Give me the things that are relevant." The problem is, "Don't give me anything that's irrelevant, at least in the first page, because nothing else matters." So Google is not solving the information retrieval problem, at least not on this webpage. Google is minimizing false positives, which is not the same thing as getting an answer.
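The distinction Charles draws here, between classic information retrieval (maximize recall: find everything relevant) and what a web engine needs (minimize false positives in the first page of results), can be sketched with toy metrics. Nothing below is Google's actual algorithm; the functions and numbers are made up for illustration:

```python
# Toy metrics: precision at k measures how clean the first page is,
# recall measures how much of everything relevant was actually returned.

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k results that are actually relevant."""
    return sum(1 for doc in ranked_ids[:k] if doc in relevant_ids) / k

def recall(ranked_ids, relevant_ids):
    """Fraction of all relevant documents that were returned at all."""
    relevant = set(relevant_ids)
    return len(relevant.intersection(ranked_ids)) / len(relevant)

# Hypothetically, a million relevant pages exist, but the engine shows ten.
relevant = set(range(1_000_000))
results = list(range(10))  # the first page: all ten happen to be relevant

print(precision_at_k(results, relevant, k=10))  # 1.0: a "perfect" first page
print(recall(results, relevant))                # 1e-05: almost no recall
```

With ten relevant results on the first page, precision at ten is perfect even though recall is negligible, which is the point: a good first page is not the same thing as solving retrieval.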
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
It turns out it's good enough for what it is we wanna use Google for, but it also changes what the problem was we thought we were trying to solve in the first place. You thought you were trying to find an answer... or you were trying to find the answer. But it turns out, you're just trying to find an answer. Now, yes, it is true, it's also very good at finding you exactly that webpage. Of course, you trained yourself to figure out what the keywords were to get you that webpage. Um, but in the end, by having that much data, you've just changed the problem into something else. You haven't actually learned what you set out to learn. Now, the counter to that would be, maybe we're not doing that either, we just think we are, uh, because, you know, we're in our own heads. Maybe we're learning the wrong problem in the first place. But I don't think that matters. I think the point is that Google has not solved information retrieval. Google has done an amazing service. I have nothing bad to say about what they've done. Lord knows, my entire life is better because Google exists, um, and if it weren't for Google Maps, I don't think I'd have ever found this place. (laughs)
- LFLex Fridman
(laughs) Where is this?
- CICharles Isbell
Like, 95... I see 110 and I see... But where did, where- where'd 95 go?
- LFLex Fridman
(laughs)
- CICharles Isbell
Um, you know, so I'm very, I'm very grateful for Google. But, you know, they just have to make certain the first five things are right.
- LFLex Fridman
Yeah.
- CICharles Isbell
And everything after
- 41:12 – 48:47
Faculty hiring
- CICharles Isbell
that is wrong. Look, now we're going off on a totally different-
- LFLex Fridman
Let's go.
- CICharles Isbell
... topic here, but, but think about the way we hire faculty.
- LFLex Fridman
(laughs)
- CICharles Isbell
It's exactly the same thing.
- LFLex Fridman
Now you're getting controversial. (laughs)
- CICharles Isbell
I'm not getting controversial. Um, it's exactly the same problem, right? It's minimizing false positives.
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
We say things like, "We wanna find the best person to be an assistant professor at MIT."
- LFLex Fridman
Yes.
- CICharles Isbell
In the new College of Computing-
- LFLex Fridman
Yes.
- CICharles Isbell
... which I will point out was founded 30 years after the College of Computing I'm a part of.
- LFLex Fridman
Oh, uh-
- CICharles Isbell
Both of my, both are my alma mater, both, both-
- LFLex Fridman
Them are fighting words. (laughs)
- CICharles Isbell
I'm just saying, I appreciate all that they did, um, and all that they're doing. Um-
- LFLex Fridman
(laughs)
- CICharles Isbell
... anyway. So we're gonna-
- LFLex Fridman
It's true. (laughs)
- CICharles Isbell
We're gonna try to, we're gonna try to hire the best professor. That's what we say. The best person for this job. But that's not what we do at all, right? Do you know what percentage of faculty in the top four earned their PhDs from the top four? Say in 2017, which is the most recent year for which I have data?
- LFLex Fridman
Maybe a large percentage.
- CICharles Isbell
It was about 60%.
- LFLex Fridman
60.
- CICharles Isbell
60% of the faculty in the top four earned their PhDs in the top four. This is computer science-
- LFLex Fridman
Yeah.
- CICharles Isbell
... for which there is no top five. There's only a top four, right? 'Cause they're all tied for one.
- LFLex Fridman
For people who don't know, by the way, that would be MIT, Stanford, Berkeley, CMU?
- CICharles Isbell
Yep. That's exactly right.
- LFLex Fridman
Uh, Geo- Georgia Tech, uh-
- 48:47 – 56:15
University rankings
- CICharles Isbell
US News & World Report, every time they change their formula for determining rankings, they've moved entire universities to behave differently, because rankings matter.
- LFLex Fridman
Can you talk trash about, uh, those rankings for a second? Not... I'm joking about talking trash. I actually... it's, it's so funny how, from my perspective, from a very shallow perspective, how dogmatic... like, how much I trust those rankings. They're, they're almost ingrained in my head.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
I mean, at MIT, everybody kinda... uh, (laughs) it's a propagated, uh, mutually agreed-upon, like, idea that those rankings matter, and I don't think anyone knows what they're, uh... like, most people don't know what they're based on. And what are they exactly based on, and what are the flaws in that? 'Cause-
- CICharles Isbell
Well, (sighs) so it depends on which rankings you're talking about. Do you want to talk about computer science, or do you want to talk about universities?
- LFLex Fridman
Computer science, US News, isn't that the main one?
- CICharles Isbell
Yes, US News.
- LFLex Fridman
Or do you just-
- CICharles Isbell
The only one that matters is US News, nothing else matters.
- LFLex Fridman
Yeah.
- CICharles Isbell
Sorry, csrankings.org, but nothing else matters but, uh, US News. So US News has a formula that it uses for many things, but not for computer science, because computer science is considered a science, which is absurd. So the rankings for computer science-
- LFLex Fridman
Yeah.
- CICharles Isbell
... is 100% reputation. So two people at each department, it's not really a department, but whatever, at each department, basically rank everybody. Slightly more complicated than that, but whatever, they rank everyone. And then those things are put together, and then somehow-
- LFLex Fridman
Oh, no.
- CICharles Isbell
... the rankings come out.
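The mechanism Charles describes, a ranking built entirely from averaged peer-assessment surveys, can be sketched roughly like this. The department names, scores, and rounding here are all made-up assumptions; the actual US News methodology is more involved and not fully public:

```python
# Hypothetical sketch of a pure-reputation ranking: peer raters score every
# department, the survey scores are averaged, and equal averages tie.

from statistics import mean

# Survey responses: each department is scored (say, on a 1-5 scale) by raters.
peer_scores = {
    "Dept A": [4.8, 4.8, 4.8, 4.8],
    "Dept B": [4.9, 4.7, 4.8, 4.8],
    "Dept C": [4.0, 4.0, 4.0, 4.0],
}

# Average each department's survey scores, rounded the way published
# scores usually are.
avg = {dept: round(mean(scores), 1) for dept, scores in peer_scores.items()}

# Rank by average score; equal rounded averages share a position, which is
# how several schools end up "tied for ninth."
ranked = sorted(avg.items(), key=lambda kv: -kv[1])
for dept, score in ranked:
    print(dept, score)
```

Here Dept A and Dept B land on the same rounded average and tie, while Dept C sits below them, mirroring how small raw-score movements (or a peer's decline) shift positions.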
- LFLex Fridman
So that means, how do you improve reputation? How do you move up and down the space of reputation?
- CICharles Isbell
Yes, that's exactly the question.
- LFLex Fridman
Twitter? (laughs)
- CICharles Isbell
(laughs) It can help. I can tell you how Georgia Tech did it, or at least how I think Georgia Tech did it-
- LFLex Fridman
Yeah.
- CICharles Isbell
... because, um, I th- 'cause Georgia Tech is actually the case to look at. Not just because I'm at Georgia Tech, but because Georgia Tech is the only computing unit that was not in the top 20 that has made it into the top 10. It's also the only one in the last two decades, I think, um, that moved up in the top 10, as opposed to having someone else move down. So we used to be number 10, and then we became number nine because UT Austin went down slightly, and then we were tied for ninth, 'cause that's how rankings work. Uh, and we moved from nine to eight because our raw score moved up a point. So, something, something, something about Georgia Tech computer science, or computing anyway. Um, I think it's because we have shown leadership at every crisis level, right? So we created a college, first public university to do it, second university to do it after CMU, which was number one. I also think it's no, um, accident that CMU is the largest and we're, depending upon how you count, and depending on exactly where MIT ends up with its final college of computing, second or third largest. I don't think that's an accident. We've been doing this for a long time. Uh, but in the 2000s, when there was a crisis about undergraduate education, Georgia Tech took a big risk and succeeded at rethinking undergrad education in computing. Um, I think we created these schools at a time when most public universities anywhere were afraid to do it. We did the online master's, um, and that mattered because people were trying to figure out what to do with MOOCs and so on. I think it's about being observed by your peers as having an impact. So, I mean, that is what reputation is, right? So the way you move up in the reputation rankings is by doing something that makes people turn and look at you and say, "That's good. They're better than I thought."
- LFLex Fridman
Yeah.
- CICharles Isbell
Beyond that, it's just inertia. I mean, there's huge history in this, in the system, right? Like, I mean, there was... I can't remember, this is maybe apocryphal, but, you know, there was a major or a department that, like, MIT was ranked number one in, and they didn't have it, right?
- LFLex Fridman
(laughs)
- CICharles Isbell
It's just about what you... I don't know if that's true-
- LFLex Fridman
Yeah.
- CICharles Isbell
... but someone said that to me anyway.
- LFLex Fridman
(laughs)
- CICharles Isbell
Um, but it's a thing, right? It's all about reputation. Of course MIT is great, because MIT is great. It's always been great.
- LFLex Fridman
Yeah.
- 56:15 – 1:05:39
Science communicators
- LFLex Fridman
why are there not more people who just, like, play with whatever that narrative is, have fun with it, like, excite the world, whether it's in a Carl Sagan style-
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
... of, like, that calm, sexy voice explaining the stars and all the romantic stuff, or the Elon Musk, dare I even say Donald Trump, style, where you're, like, trolling and shaking up the system and just saying controversial things. Um, like, I talked to Lisa Feldman Barrett, who's a neuroscientist who just enjoys playing with the controversy, like, finds the counterintuitive ideas in a particular science-
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
... and throws them out there and sees how they play in the public discourse. Like, why don't we see more of that? And why doesn't academia attract, like, an Elon Musk type?
- CICharles Isbell
Well, tenure is a powerful thing that allows you to do whatever you want. But getting tenure-
- LFLex Fridman
Yeah.
- CICharles Isbell
... typically requires you to be relatively narrow, right?
- LFLex Fridman
Yeah.
- CICharles Isbell
Because people are judging you. Well, I think the answer is we have told ourselves a story, a narrative, that that is vulgar, that what you just described is vulgar. It's certainly unscientific, right? And it is easy to convince yourself that in some ways you're the mathematician, right?
- LFLex Fridman
Mm-hmm.
- CICharles Isbell
The fewer there are in your major, the more that proves your purity, right?
- LFLex Fridman
(laughs) Yeah.
- CICharles Isbell
So-
- LFLex Fridman
(laughs) Yes.
- CICharles Isbell
... once you tell yourself that story, then it is beneath you to do that kind of thing, right? I think that's wrong. I think that... And by the way, everyone doesn't have to do this. Everyone's not good at it and everyone, even if they would be good at it, wouldn't enjoy it.
- LFLex Fridman
Yeah.
- CICharles Isbell
So it's fine. But I do think you need some diversity in the way that people choose to relate to the world as academics, because I think the great universities are the ones that engage with the rest of the world. A great university is a home for public intellectuals.
- LFLex Fridman
Yes.
- CICharles Isbell
And in 2020, being a public intellectual probably means being on Twitter. Uh, whereas, of course, that wasn't true 20 years ago, 'cause, well, Twitter wasn't around 20 years ago. And if it was, it wasn't around in a meaningful way. I don't actually know how long Twitter's been around. As I get older, I find that my, my notion of-
- LFLex Fridman
(laughs) Yeah.
- CICharles Isbell
... time has gotten worse and worse. Like, Google really has been around that long? Anyway, the point is that, um, I think that we sometimes forget that a part of our job is to impact the people who aren't in the world that we're in.
- LFLex Fridman
Yeah.
- CICharles Isbell
And that that's the point of being at a great place and being a great person, frankly.
- LFLex Fridman
There's an interesting force in terms of public intellectuals in t- you know, forget Twitter, we could look at just online courses that are public-facing in some part.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
Like, there is a kind of force that pulls you back. I would... Let me just call it out, 'cause I don't give a damn at this point. Um, there's a little bit of... all of us have this, but certainly faculty have this, which is jealousy.
- CICharles Isbell
Mm-hmm.
- LFLex Fridman
It's aimed at whoever's popular at being a good communicator, at exciting the world with their science. And of course, when you excite the world with the science, it's not peer-reviewed clean. It's, it's ki-... it all sounds like bullshit. (laughs) It's like a TED Talk. (laughs)
- CICharles Isbell
Mm-hmm.
- 1:05:39 – 1:14:39
Hip hop
- LFLex Fridman
So, can you educate a Soviet-born Russian about this thing called hip hop? Like, if you were to, uh, give me... Like, you know, if we went on a journey together and you were trying to educate me about, especially, um, you know, the past couple of decades and the '90s, about hip hop or funk, what records or artists would you, uh, introduce me to? What would you tell me about, or maybe what influenced you in your journey, or what do you just love? Like, when the family's gone and you just sit back and just blast some stuff these days, (laughs) uh, what do you listen to?
- CICharles Isbell
Well, so I listen to a lot, but I will tell you... Well, first off, all great music was made when I was 14.
- LFLex Fridman
Right.
- CICharles Isbell
And that statement is true for all people, no matter how old they are or where they live. But, uh, for me, the first thing that's worth pointing out is that hip hop and rap aren't the same thing. So, depending on who you talk to about this... and there are people who feel very strongly about this, much more strongly than I do.
- LFLex Fridman
Well, you're offending everybody in this conversation, so this is great. Let's keep going. (laughs)
- CICharles Isbell
(laughs) Hip hop is a culture.
- LFLex Fridman
Yeah, okay, gotcha.
- CICharles Isbell
It's a whole set of things, of which rap is a part.
- LFLex Fridman
Gotcha.
- CICharles Isbell
So tagging is a part of hip hop. I don't know why that's true, but people tell me it's true and I'm willing to go along with it 'cause they get very angry about it. But hip hop-
- LFLex Fridman
Tagging is like graffiti?
- CICharles Isbell
Tagging is like graffiti.
- LFLex Fridman
Oh.
- CICharles Isbell
Uh, and there's all these, including the popping and the locking and all the dancing and all those things, that's all a part of hip hop.
- LFLex Fridman
Yep.
- CICharles Isbell
It's a way of life. Uh, which I think is true. And then there's rap, which is this particular...
- LFLex Fridman
It's the music part.
- CICharles Isbell
Yes, er, a music part.
- LFLex Fridman
A subset, yeah.
- CICharles Isbell
Um, I mean, you wouldn't call the stuff that DJs do, the s- scratching... that's not rap, right? But it's a part of hip hop, right? So, given that we understand that hip hop is this whole thing, what are the rap albums that best touch that for me? Well, if I were gonna educate you, I would try to figure out what you liked, and then I would work you there. Uh...
- LFLex Fridman
Lynyrd Skynyrd, um...
- CICharles Isbell
Oh my God. Well, then I-
- LFLex Fridman
Yes. (laughs)
- CICharles Isbell
... I, I would probably start with... (laughs) Um.
- LFLex Fridman
Uh. (laughs) Led Zeppelin? (laughs)
- CICharles Isbell
There's a fascinating ex-
- LFLex Fridman
Sorry.
- CICharles Isbell
No, it's okay. There's a fascinating exercise one can do by watching old episodes of, uh, I Love the '70s, I Love the '80s, I Love the '90s, um, with a bunch of friends, and just seeing where people come in and out of pop culture. So, if you're talking about, um, those people, then I would actually start you where I would hope to start you anyway, which is Public Enemy, particularly It Takes a Nation of Millions to Hold Us Back, which is clearly the best album ever produced, um, and certainly the best hip-hop album ever produced, um, in part because it was so much of what was great about the time. Uh, fantastic lyrics, 'cause to me it's all about the lyrics. Um, amazing music; Rick Rubin was the producer of that, and he did a lot of very kind of heavy metal-ish stuff, at least in the '80s sense, uh, at the time. Uh, and it was focused on, um, politics in the 1980s, which was what made hip-hop so great then. I would start you there, then I would move you up through things that have been happening more recently. I'd probably get you to someone like a Mos Def. I would give you a history lesson, basically.
- LFLex Fridman
Mos Def, yeah.
- CICharles Isbell
Mos Def's amazing.
Episode duration: 2:23:50
Transcript of episode LAyZ8IYfGxQ