François Chollet: Measures of Intelligence | Lex Fridman Podcast #120
EVERY SPOKEN WORD
150 min read · 30,253 words
- 0:00 – 5:04
Introduction
- LFLex Fridman
The following is a conversation with Francois Chollet, his second time on the podcast. He's both a world-class engineer and a philosopher in the realm of deep learning and artificial intelligence. This time, we talk a lot about his paper titled On the Measure of Intelligence that discusses how we might define and measure general intelligence in our computing machinery. Quick summary of the sponsors: Babbel, MasterClass, and Cash App. Click the sponsor links in the description to get a discount and to support this podcast. As a side note, let me say that the serious, rigorous, scientific study of artificial general intelligence is a rare thing. The mainstream machine learning community works on very narrow AI with very narrow benchmarks. This is very good for incremental and sometimes big incremental progress. On the other hand, the outside-the-mainstream, renegade, you could say, AGI community works on approaches that verge on the philosophical and even the literary without big public benchmarks. Walking the line between the two worlds is a rare breed, but it doesn't have to be. I ran the AGI series at MIT as an attempt to inspire more people to walk this line. DeepMind and OpenAI for a time, and still on occasion, walk this line. Francois Chollet does as well. I hope to also. It's a beautiful dream to work towards and to make real one day. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter @LexFridman. As usual, I'll do a few minutes of ads now and no ads in the middle. I try to make these interesting, but I give you timestamps so you can skip. But still, please do check out the sponsors by clicking the links in the description. It's the best way to support this podcast. This show is sponsored by Babbel, an app and website that gets you speaking in a new language within weeks. Go to babbel.com and use code LEX to get three months free. 
They offer 14 languages, including Spanish, French, Italian, German, and yes, Russian. Daily lessons are 10 to 15 minutes, super easy, effective, designed by over 100 language experts. Let me read a few lines from the Russian poem, (Russian) , by Alexander Blok, that you'll start to understand if you sign up to Babbel. Ночь. Улица. Фонарь. Аптека. Бессмысленный и тусклый свет. Живи еще хоть четверть века. Все будет так. Исхода нет. Now, I say that you'll start to understand this poem because Russian starts with a language and ends with vodka. Now, the latter part is definitely not endorsed or provided by Babbel and will probably lose me the sponsorship, although it hasn't yet. But once you graduate with Babbel, you can enroll in my advanced course of late-night Russian conversation over vodka. No app for that yet. So get started by visiting babbel.com and use code LEX to get three months free. This show is also sponsored by MasterClass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about MasterClass, I thought it was too good to be true. I still think it's too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration; hope to have him on this podcast one day. Neil deGrasse Tyson on scientific thinking and communication. Neil too. Will Wright, creator of SimCity and Sims on game design. Carlos Santana on guitar. Garry Kasparov on chess. Daniel Negreanu on poker and many more. Chris Hadfield explaining how rockets work and the experience of being launched to space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast. This show, finally, is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. 
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to send and receive money digitally, let me mention a surprising fact related to physical money. Of all the currency in the world, roughly 8% of it is actually physical money. The other 92% of the money only exists digitally, and that's only going to increase. So again, if you get Cash App from the App Store or Google Play and use code LEXPODCAST, you get 10 bucks. And Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Francois Chollet.
- 5:04 – 6:23
Early influence
- LFLex Fridman
What philosophers, thinkers, or ideas had a big impact on you growing up? And today?
- FCFrançois Chollet
So one author that had a big impact, uh, on me when I, I read, uh, his books as a teenager was Jean Piaget, who is a Swiss psychologist, uh, considered to be the father of developmental psychology. And he has a large body of work about, um, basically how intelligence, uh, develops, uh, in children. And so, uh, it's very old work, like most of it is from the 1930s, 1940s. Uh, so it's not quite up-to-date. Uh, it's actually superseded by many, uh, newer developments in developmental psychology. But to me, it was, it was very, uh, very interesting, very striking, and actually shaped the early ways in which, uh, I started to think about the mind and the development of intelligence as a teenager.
- LFLex Fridman
His actual ideas or the way he thought about it or just the fact that you could think about the developing mind at all?
- FCFrançois Chollet
I guess both. Jean Piaget is, uh, the author that really introduced me to the notion that intelligence and the mind is something that you construct throughout, throughout your life, and that, uh, that children, uh, uh, construct it in stages. And I thought that was a very interesting idea, which is, you know, of course, very relevant, uh, to AI, to building artificial minds...
- 6:23 – 12:50
Language
- FCFrançois Chollet
another book that I, I read around the same time that had a big impact on me, uh, and, and there was actually a, a little bit of overlap with Jean Piaget as well, and I read it around the same time, uh, is, uh, Jeff Hawkins' On Intelligence-
- LFLex Fridman
Hmm.
- FCFrançois Chollet
... which is a classic. And he, he has this vision of the mind as a multi-scale hierarchy of temporal prediction modules, and these ideas really resonated with me, like the, the notion of, uh, a, a modular hierarchy, um, of, you know, potentially, um, of compression functions or prediction functions. I thought it was really, really interesting, uh, and it sh- shaped, uh, uh, the way I started thinking about how to build minds.
- LFLex Fridman
The hi- the hierarchical nature, the which aspect? Also, he's a neuroscientist, so he was thinking-
- FCFrançois Chollet
Yes.
- LFLex Fridman
... of actual... He was basically talking about how our mind works.
- FCFrançois Chollet
Yeah, the notion that cognition is prediction was an idea that was kinda new to me at the time, and that I- that I really loved at the time. And, yeah, and the notion that, uh, th- that there are multiple scales of processing, uh, in the brain.
- LFLex Fridman
The hierarchy?
- FCFrançois Chollet
Yes. Uh-
- LFLex Fridman
This was before-
- FCFrançois Chollet
... hierar-
- LFLex Fridman
... deep learning?
- FCFrançois Chollet
These ideas of hierarchies in AI have been around for a long time. Uh, even before On Intelligence, I mean, they've, they've been around since the 1980s. Um, and yeah, that was before deep learning, but of course, I, I think these ideas really found, uh, uh, their practical implementation in deep learning.
- LFLex Fridman
What about the memory side of things? I think he was talking about knowledge representation. Do you think about memory a lot? One way you could think of neural networks as a, is a kind of memory. You're memorizing things, but it doesn't seem to be the kind of memory that's in our brains, or it doesn't have the same rich complexity and long-term nature that's in our brains.
- FCFrançois Chollet
Yes. The brain is more of a sparse access memory, so that you can actually retrieve, um, very precisely, like bits of your experience.
- LFLex Fridman
The retrieval aspect, you can like introspect, you can ask yourself questions, I guess.
- FCFrançois Chollet
Yes. You can program your own memory, and language is actually, uh, the tool you use to do that. I think language is kind of an operating system for the mind, and you use language, uh... Well, one of the uses of language is as a query that you run over your own memory. You use words as keys to retrieve specific experiences or specific concepts, specific thoughts. Like language is a way you store thoughts, not just in writing in the, in the physical world, but also in your own mind, and it's also how you retrieve them. Like imagine if you didn't have language, then you would not really have, um, a self-, internally triggered, uh, way of retrieving past thoughts. You would have to rely on external experiences. For instance, you, you see a specific sight, you smell a specific smell, and that brings up memories, but you would not really have a, a, a way to deliberately, deliberately access these memories without language.
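Chollet's picture here, words acting as retrieval keys over your own memory, can be sketched as a toy associative store. Everything below (the class, the stored strings, the keys) is illustrative, not anything from the conversation:

```python
class AssociativeMemory:
    """Toy sketch: words act as keys that retrieve stored experiences."""
    def __init__(self):
        self.store = {}  # word -> list of experiences indexed under it

    def remember(self, experience, keys):
        # Index one experience under several word keys.
        for key in keys:
            self.store.setdefault(key, []).append(experience)

    def query(self, word):
        # Using a word as a query is the deliberate, internally triggered
        # retrieval Chollet describes language enabling.
        return self.store.get(word, [])

memory = AssociativeMemory()
memory.remember("summer trip to the coast", keys=["beach", "summer"])
memory.remember("reading Piaget as a teenager", keys=["Piaget", "books"])
print(memory.query("beach"))  # → ['summer trip to the coast']
```

Without a key (language), retrieval here would only happen by stumbling on a stored experience from the outside, which is the contrast Chollet draws with smells and sights triggering memories.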
- LFLex Fridman
Well, the interesting thing you mentioned is you can also program th- the memory. You can change it probably with language.
- FCFrançois Chollet
Yeah, using language. Yes.
- LFLex Fridman
Well, let me ask you a Chomsky question, which is like... First of all, do you think language is like fundamental? Like, uh, there's turtles. What's at the bottom of the turtles? They don't go... It can't be turtles all the way down. Is language at the bottom of cognition of everything? Is like language the fundamental aspect of like what it means to be a thinking thing?
- FCFrançois Chollet
No, I don't think so. I think language is-
- LFLex Fridman
You disagree with Noam Chomsky?
- FCFrançois Chollet
Yes.
- LFLex Fridman
(laughs)
- FCFrançois Chollet
I think language is a layer on top of cognition. So-
- LFLex Fridman
Okay.
- FCFrançois Chollet
... it i- it is fundamental to cognition in the sense that, to, to use a computing metaphor, I see language as the operating system, uh, of the brain, of the-
- LFLex Fridman
Gotcha.
- FCFrançois Chollet
... human mind.
- LFLex Fridman
Yeah.
- 12:50 – 23:42
Thinking with mind maps
- FCFrançois Chollet
spatial metaphors. And when you think about things, uh, I consider myself very much as a, as a visual thinker. You, you often, uh, express these thoughts, um, by using things like, uh, visualizing concepts, um, in, uh, in 2D space, or like, you solve problems by imag- ima- imagining yourself, uh, navigating, uh, a, a, a concept space. I, I don't know if, if you have this sort of experience.
- LFLex Fridman
You said visualizing concept space, so like... So I certainly think about... I certainly meth- I certainly visualize mathematical concepts, but you mean like co- in concept space? Visually, you're embedding ideas (laughs) into som- into three-dimensional space that you then explore with your mind, essentially?
- FCFrançois Chollet
Usually more like 2D, but yeah.
- LFLex Fridman
2D?
- FCFrançois Chollet
Yeah.
- LFLex Fridman
(laughs) You're a flatlander. You're, um... Okay. No. I d- I, I... I do not. I always have to, uh... Before I jump from concept to concept, I have to put it back down (laughs) on pa- uh, it has to be on paper. I can only travel on, uh, 2D paper, not inside my mind.
- FCFrançois Chollet
Mm-hmm.
- LFLex Fridman
You're able to move inside your mind.
- FCFrançois Chollet
But even if you're writing, like, a paper for instance, don't you have, like, a, a spatial representation of your paper? Like, you, you visualize where ideas lie topologically in relationship to other ideas, kind of like a subway map of the ideas in your paper?
- LFLex Fridman
Yeah. That's true. I mean, there, there is, um... In papers, I don't know about you, but, uh, there feels like there's a destination. Um, there's a, there's a key idea that you want to arrive at, and a lot of it is in, in the fog, and you're trying to kind of... It's almost like, um, um... What's that called, when, um, you do a path-planning search from both directions, from the start and from the end? (laughs) And then you find, you do like shortest path, but like, uh, you know, in game-playing you do this with like A* from both sides.
- FCFrançois Chollet
Mm-hmm.
- LFLex Fridman
And that's-
- FCFrançois Chollet
And you see where, where they join.
- LFLex Fridman
Yeah. So you kind of do, uh, at least for me, I think, like, first of all just exploring from the start, from like, uh, first principles. What do I know? Uh, what can I start proving from that, right? And then from the destination, if, uh, you start backtracking, like if, if I want to show some kind of sets of ideas, what would it take to show them? And you kind of backtrack. But like, yeah, I, I don't think I'm doing all that in my mind though. Like, I'm putting it down on paper.
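The strategy Lex is reaching for, expanding from both the first principles and the destination until the two frontiers join, is bidirectional search. A minimal sketch with breadth-first frontiers over an undirected toy graph (the node names are hypothetical stand-ins for steps in an argument):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Breadth-first search from both ends; stop when the frontiers join."""
    if start == goal:
        return [start]
    parents_fwd, parents_bwd = {start: None}, {goal: None}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        # Grow one frontier by a single node; report where the sides meet.
        node = frontier.popleft()
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                if nxt in other_parents:
                    return nxt
                frontier.append(nxt)
        return None

    while frontier_fwd and frontier_bwd:
        meet = expand(frontier_fwd, parents_fwd, parents_bwd)
        if meet is None:
            meet = expand(frontier_bwd, parents_bwd, parents_fwd)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            path, node = [], meet
            while node is not None:
                path.append(node)
                node = parents_fwd[node]
            path.reverse()
            node = parents_bwd[meet]
            while node is not None:
                path.append(node)
                node = parents_bwd[node]
            return path
    return None

# Hypothetical "reasoning" graph: first principles toward a conclusion.
graph = {
    "premise": ["lemma1", "lemma2"],
    "lemma1": ["premise", "claim"],
    "lemma2": ["premise", "claim"],
    "claim": ["lemma1", "lemma2", "conclusion"],
    "conclusion": ["claim"],
}
print(bidirectional_search(graph, "premise", "conclusion"))
# → ['premise', 'lemma1', 'claim', 'conclusion']
```

Each frontier only has to cover roughly half the distance, which is why searching from both ends, as described above, can be much cheaper than searching from the start alone.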
- FCFrançois Chollet
Do you use mind maps to organize your ideas?
- LFLex Fridman
No. No.
- FCFrançois Chollet
Yeah. I like mind maps. I, I'm that kind of person.
- LFLex Fridman
Okay. So let's get into this 'cause it's... I've, I've been so jealous of people, I haven't really tried it. I've been jealous of people that seem to, like... They get like this fire of passion in their eyes, 'cause everything starts making sense. It's like, uh, Tom Cruise in the movie when he's like moving stuff around. Uh, some of the most brilliant people I know use mind maps. I haven't tried really. Can you explain what the hell a mind map is?
- FCFrançois Chollet
I guess a mind map is a way to take kind of like the mess inside your mind and just put it on paper, so that you gain more control over it. It's a way to organize things on paper and, uh, as, as, as kind of like a consequence of organizing things on paper, you start being more organized inside, inside your own mind.
- LFLex Fridman
So what, what does that look like? You put... Like, do you have an example? Like, what, what do, what do you... What's the first thing you write on paper? What's the second thing you write?
- FCFrançois Chollet
I mean, typically, uh, you, you draw a mind map to organize the way you think about a topic. So you would start by writing down, like, the, the key concept about that topic. Like, you would write intelligence or something, and then you would start adding, uh, associative connections. Like, what do you think about when you think about intelligence? What do you think are the key elements of intelligence? So maybe you would have language, for instance, and you'd have motion. And so you would start drawing nodes with these things. And then you would see, what do you think about when you think about motion? And so on. And, and you would go like that, like a tree.
- LFLex Fridman
It's, it's, uh, is it a tree or... A tr- a tree mostly, or is it a graph too? Like, a tree. I guess it's a tree.
- FCFrançois Chollet
Oh, it's, it's more of a graph than a tree. And, um, and it's not limited to just, you know, uh, uh, writing down words. You can also, uh, uh, draw things. And it's not, it's not supposed to be purely hierarchical, right? Like, you can, um... The point is that you can start... Once, once you start writing it down, you can start reorganizing it so that it makes more sense, so that it's, uh, connected in a more effective way.
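Chollet's point that a mind map is a graph rather than a tree, with cross-links and a re-pickable center, can be sketched as a tiny data structure (a toy, not any particular mind-mapping tool; the node names echo the example in the conversation):

```python
class MindMap:
    """A mind map as a general graph: free-form links, not a strict tree."""
    def __init__(self, central):
        self.central = central
        self.links = {central: set()}

    def connect(self, a, b):
        # Any node may link to any other, so cross-links and cycles are fine.
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def recenter(self, node):
        # "Make a different mind map": pick a new central node; the
        # connections themselves stay exactly as they were.
        self.central = node

mm = MindMap("intelligence")
mm.connect("intelligence", "language")
mm.connect("intelligence", "motion")
mm.connect("language", "motion")  # a cross-link no tree could represent
print(sorted(mm.links["language"]))  # → ['intelligence', 'motion']
```

The `language`–`motion` edge is exactly the kind of connection that breaks a strict hierarchy, which is why reorganizing the map doesn't require rebuilding it, only re-rooting it.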
- LFLex Fridman
See, but I- I'm so OCD that... You just mentioned intelligence, and then language and motion. I would start becoming paranoid that the categorization is imperfect. Like, that I, I would become paralyzed with the mind map that like, this may not be... So like, the... Even though you're just doing associative kind of connections, there's an implied hierarchy that's emerging, and I would start becoming paranoid that it's not the proper hierarchy. So you're not just... One way to see mind maps is you're putting thoughts on paper. It's like a stream of consciousness. But then, you can also start getting paranoid, well, if this is the right hierarchy.
- FCFrançois Chollet
Sure.
- LFLex Fridman
Like-
- FCFrançois Chollet
Which... But it's a mind map. It's your mind map. You're free to draw anything you want. You're free to draw any connection you want, and you can just make a different mind map if, if you think the central node is not the right node.
- LFLex Fridman
Yeah.
- FCFrançois Chollet
So-
- LFLex Fridman
I suppose there's a fear of being w- wrong.
- 23:42 – 42:24
Definition of intelligence
- LFLex Fridman
Yeah. Well, let me, uh, let me zoom out for a second. Uh, let's get into your paper On the Measure of Intelligence that, uh, you put out in 2019?
- FCFrançois Chollet
Yes.
- LFLex Fridman
Okay. Yeah.
- FCFrançois Chollet
November.
- LFLex Fridman
November. Yeah. Remember 2019? That was a, it was a different time.
- FCFrançois Chollet
Yeah.
- LFLex Fridman
(laughs)
- FCFrançois Chollet
I remember. I still remember.
- LFLex Fridman
(laughs) It feels like a different, uh, different, different world.
- FCFrançois Chollet
You could travel. You could, you know, actually go outside and see friends.
- LFLex Fridman
Yeah. Let me ask the most absurd question. I think, uh, there's some non-zero probability that there'll be a textbook one day, like 200 years from now, on artificial intelligence, or it will be called, like, just Intelligence 'cause humans will already be gone, and it'll be your picture with a quote. This is... you know, one of the early, uh, biological systems that would consider the nature of intelligence. And there'll be, like, a definition of how they thought about intelligence, which is one of the things you do in your paper On the Measure of Intelligence, is to ask, like-... well, what is intelligence and, and, uh, how to test for intelligence and so on. So, is there a spiffy quote about what is intelligence? What is the definition of intelligence, a- according to François Chollet?
- FCFrançois Chollet
Yeah. So, d- do you think-
- LFLex Fridman
(laughs)
- FCFrançois Chollet
... the, the, the super intelligent AIs of the future will want to remember us the way we remember humans from the past and do you think they would be, you know, they won't be ashamed of having a biological origin?
- LFLex Fridman
Uh, no. I, I think it will be a niche topic, it won't be that interesting, but it'll be, it'll be like the people that study certain, uh, historical civilizations that no longer exist, the Aztecs and so on. That, that's how it'll be seen. And it'll be a study in the c- also, in the context of social media, there'll be hashtags about the atrocity committed to human beings, um, when, when the, when the robots finally got rid of them. Like, it was a mistake. It'll be seen as a, as a giant mistake, but ultimately, eh, in the name of progress. And it created a better world, because humans were, uh, over-consuming the resources, and ult- they were not very rational and were destructive in the end, in terms of productivity and, uh, putting more love in the world. And so within that context, there'll be a chapter about these biological systems.
- FCFrançois Chollet
You seem to have a very detailed vision of that future. You should write, uh, a sci-fi novel about it.
- LFLex Fridman
I s- I'm, I'm working, I'm, uh, I'm working on a, on a sci-fi novel currently, yes. (laughs)
- FCFrançois Chollet
Yeah, so-
- LFLex Fridman
Self-published, yeah.
- FCFrançois Chollet
The definition of intelligence. So, intelligence is the efficiency with which, uh, you acquire new skills at tasks that you did not previously, uh, know about, that you did not prepare for. All right? So, it is not... Intelligence is not skill itself. It's not what you know, it's not what you can do. It's how well and how efficiently you can learn new things.
- LFLex Fridman
New things?
- FCFrançois Chollet
Yes.
- LFLex Fridman
I- the idea of newness there seems to be fundamentally important.
- FCFrançois Chollet
Yes. So, you would see intelligence, uh, on display, for instance, uh, whenever you see, uh, a human being or, you know, an AI creature adapt to a new environment that it di- it has not seen before, that its creators did not anticipate. Uh, when you see adaptation, when you see improvisation, when you see generalization, that's intelligence. Uh, in reverse, if you have a system that, when you put it in a slightly new environment, cannot adapt, cannot improvise, cannot deviate from what it's, it's hardcoded to do or, um, what, uh, what it has, uh, been trained to do, um, that is a system that is not intelligent. There's actually a quote from, uh, uh, Einstein that captures, uh, uh, this idea, which is, "The measure of intelligence is the ability to change." I, I like that quote. I think it, uh, captures at least part of this idea.
- LFLex Fridman
You know, there might be something interesting about the difference between your definition and Einstein's. I mean, he's just being Einstein (laughs) and clever. But acquisition of, um, new ability to deal with new things versus ability to just change, what's the difference between those two things? So, just change in itself, do you think there's something to that? Just being able to change?
- FCFrançois Chollet
Yes, being able to adapt. So not, not change, but certainly, uh, uh, change this direction, being able to adapt yourself to your environment.
- LFLex Fridman
Whatever the environment is.
- FCFrançois Chollet
That's, that's a big part of intelligence, yes. And intelligence is more precisely, you know, how efficiently you're able to adapt, how efficiently you're able to basically master your environment. How efficiently, uh, you can acquire new skills. And I think there's a, there's a big distinction to be drawn between, uh, intelligence, which is a process, and the output of that process, which is skill. Um, so for instance, if you have, uh, a very smart human programmer, uh, that considers the game of chess and that writes down, uh, a static program that can play chess, then the intelligence is the process of developing that program. But the program itself is just encoding, um, the output artifact of that process.
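Chollet's distinction between intelligence (the process) and skill (its output) suggests scoring a learner by the skill it reaches on tasks it did not prepare for, per unit of experience spent. A toy sketch of that idea; the function, learner, and task format are illustrative stand-ins, not the formalism from On the Measure of Intelligence:

```python
# Sketch of the process-vs-output distinction: score a learner not by the
# skill it has, but by how efficiently it turns a small experience budget
# into new skill on a previously unseen task.

def skill_acquisition_efficiency(learner_factory, tasks, budget):
    scores = []
    for task in tasks:
        learner = learner_factory()             # fresh learner: no prior skill
        for example in task["train"][:budget]:  # limited exposure to the task
            learner.learn(example)
        correct = sum(learner.solve(x) == y for x, y in task["test"])
        scores.append(correct / len(task["test"]))
    # Average skill reached, normalized by the experience spent acquiring it.
    return sum(scores) / len(scores) / budget

class LookupLearner:
    """A pure memorizer: all stored skill, no adaptation to unseen inputs."""
    def __init__(self):
        self.table = {}
    def learn(self, example):
        x, y = example
        self.table[x] = y
    def solve(self, x):
        return self.table.get(x)

# One "double the input" task: the memorizer recalls (1, 2) but fails on
# the novel input 3, so its measured efficiency stays low.
tasks = [{"train": [(1, 2), (2, 4)], "test": [(1, 2), (3, 6)]}]
print(skill_acquisition_efficiency(LookupLearner, tasks, budget=2))  # → 0.25
```

The memorizer is the analogue of the static chess program: it exhibits the output artifact (stored answers) without the process, so it scores only on inputs it has already seen.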
- LFLex Fridman
Right.
- 42:24 – 53:07
GPT-3
- LFLex Fridman
GPT-3, similar to GPT-2 actually, has captivated some part of the imagination of the public. There's just a bunch of hype of different kinds that's... I would say it's emergent, it's not artificially manufactured, it's just like people just get excited for some strange reason. In- in the case of GPT-3, it's funny that there was, I believe, a couple of months' delay from release to hype. Maybe I'm not, um, uh, historically correct on that, but it feels like there was a little bit of a- a lack of hype and then there's a phase shift into- into hype. But nevertheless, there's a bunch of cool applications that seem to captivate the imagination of the public about what this language model that's trained in an unsupervised way without any fine-tuning is able to achieve. So what do you make of that? What are your thoughts about GPT-3?
- FCFrançois Chollet
Yeah. So I think what's interesting about GPT-3...... is the idea that it may be able to learn new tasks in, um, after just being shown a few examples. So, I think if it's actually capable of doing that, that's novel and that's very interesting and that's something we should investigate. Uh, that said, I must say I'm not entirely convinced that we have shown it's, uh, it's capable of doing that. Uh, it's very likely, given the amount of data that the model is trained on, that what it's actually doing is pattern matching, uh, a new task you give it with a task that it's been, uh, exposed to in its training data. It's just recognizing the task instead of just developing a model of the task, right? Um-
- LFLex Fridman
But there's, um... Sorry to interrupt. There, there's parallels to what you said before, which is it's possible to see GPT-3 as, like, the prompts it's given as a kind of SQL query into this thing that it's learned, similar to what you said before, which is language is used to query the memory.
- FCFrançois Chollet
Yes.
- LFLex Fridman
So, is it possible that a neural network is a giant memorization thing, but then it'll... if it gets sufficiently giant, it'll memorize sufficiently large amounts of things in the world where it becomes... where intelligence becomes a querying machine?
- FCFrançois Chollet
Mm-hmm. I think it's possible that, uh, a significant chunk of intelligence is this giant associative memory. Uh, I definitely don't believe that intelligence is just a giant associative memory, but it may well be, uh, a big component.
- LFLex Fridman
So, do you think GPT-3, 4, 5, GPT-10 will eventually... Like, what do you think... where's the ceiling? Do you think it'll be able to reason? Um... No, that's a bad question. Uh, (laughs) like, what is the ceiling is the better question. W- w- what-
- FCFrançois Chollet
How well is it gonna scale? How good is GPT-N going to be?
- LFLex Fridman
Yeah.
- FCFrançois Chollet
So, I believe GPT-N is gonna-
- LFLex Fridman
GPT-N. (laughs)
- FCFrançois Chollet
... is gonna improve on the strength, uh, of GPT-2 and 3, which is it, it will be able to generate, you know, uh, ever-more plausible text, uh, in context-
- LFLex Fridman
Just monotonically increasing-
- FCFrançois Chollet
... given a prompt.
- LFLex Fridman
... performance. (laughs)
- FCFrançois Chollet
Um, yes. If you train, if you train a bigger model on more data, then, uh, uh, your text will be, uh, increasingly more, uh, context-aware and increasingly more, uh, plausible, in the same way that GPT-3 is, is much better at generating plausible text compared to GPT-2. Um, but that said, I don't think just scaling up, uh, the model to more transformer layers and more training data is gonna address the flaws of GPT-3, which is that it can generate plausible text, but that text is not constrained by anything other than plausibility. So, in particular, it's not constrained by, uh, factualness, uh, or even consistency, which is why it's very easy to get GPT-3 to, to generate statements that are, uh, uh, factually untrue, uh, or to generate statements that are even self-contradictory, right? Uh, because, uh, its, its only goal is plausibility, and it has no other constraints. It's not constrained to be self-consistent, for instance, right? And so, for this reason, one thing that I thought was very interesting with GPT-3 is that you can predetermine the answer it will give you by asking the question in a specific way, because it's very responsive to the way you ask the question, since it has-
- LFLex Fridman
Mm-hmm.
- FCFrançois Chollet
... no understanding of the content of the question.
- LFLex Fridman
Right. The-
- FCFrançois Chollet
And if you, if you ask the same question in two different ways that are basically adversarially, uh, engineered to produce a certain answer, you will get, uh, uh, two different answers, two contradictory answers.
- LFLex Fridman
It's very susceptible to adversarial attacks, essentially.
- FCFrançois Chollet
Potentially, yes. So, in, in general, the problem with these models, these generative models, is that they, they're very good at generating plausible text, but that's just, that's just not enough, right? Um, uh, you need, uh... W- I think one, one, one avenue that would be very, uh, interesting to make progress is to make it possible, uh, to write programs over the latent space that these models operate on, so that you would rely on these, uh, uh, uh, self-supervised models to generate a sort of, like, pool, uh, of knowledge and concepts and common sense. And then you would be able to write explicit, uh, uh, reasoning programs over it. Uh, because the, the current problem with GPT-3 is that you... it's, it's... it can be quite difficult to get it to do what you want to do. Uh, if you want to, uh, turn GPT-3 into products, you need to put constraints on it, uh, you need to, um, force it to obey certain rules. So, you need a way to program it explicitly.
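The idea of writing explicit programs over a generative model can be sketched as a rejection-sampling wrapper: an unconstrained generator proposes candidates, and hand-written checks enforce constraints beyond plausibility, such as factual consistency. Everything here is a toy; the generator is a stub standing in for a real model, not an actual GPT-3 call:

```python
import random

def toy_generator(prompt, rng):
    # Stand-in stub for a model like GPT-3: plausible but unconstrained output.
    templates = ["2 + 2 = 4", "2 + 2 = 5", "2 + 2 = 22"]
    return rng.choice(templates)

def constrained_generate(prompt, checks, rng, max_tries=100):
    # The explicit "program" layer: reject candidates until every check passes.
    for _ in range(max_tries):
        candidate = toy_generator(prompt, rng)
        if all(check(candidate) for check in checks):
            return candidate
    return None  # no candidate satisfied the constraints

def factually_consistent(text):
    # One hand-written constraint: the stated arithmetic must actually hold.
    lhs, rhs = text.split("=")
    return eval(lhs) == int(rhs)

print(constrained_generate("2 + 2 =", [factually_consistent], random.Random(0)))
```

Rejection sampling is the crudest version of this layering; the point Chollet makes is the more general one of running explicit reasoning programs over the pool of knowledge such models capture.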
- LFLex Fridman
Yeah. So if you look at its ability to do program synthesis, it generates, like you said, something that's plausible.
- FCFrançois Chollet
Yeah. So if you, if you try to make it generate programs, it will perform well, uh, for any program that, that it has seen in its, in its training data. But because, uh, program space is not interpolative, right, um, uh, it's not gonna be able to generalize to problems it hasn't seen before.
- LFLex Fridman
Now, that's currently... Do you think sort of an absurd but I think useful, um, I guess, intuition builder is, uh, you know, the, the GPT-3 has 175 billion parameters. The human brain has a hundred... has about a thousand times that or, or more in terms of number of synapses. Do you think, um... obviously, very different kinds of things, but there is some degree of, uh, similarity. Do you think... what, what do you think GPT will look like when it has 100 trillion parameters? You- you think our conversation might be-
- FCFrançois Chollet
So-
- LFLex Fridman
... in nature, different? Like, the... 'cause you've criticized-
- FCFrançois Chollet
In nature-
- LFLex Fridman
... GPT-3 very effectively now. Do you think...
- FCFrançois Chollet
No, I don't think so. So, the, the... t- to begin with, the bottleneck with scaling up GPT-3 and GPT models, uh, uh, generative pre-trained transformer models, is not gonna be the size of the model or how long it- it takes to train it. The bottleneck is gonna be the training data because OpenAI's already training GPT-3 on a crawl of basically the entire web. Right? And that's a lot of data. So, you could imagine training on more data than that. Like, Google could train on more data than that.
- 53:07 – 57:22
Semantic web
- LFLex Fridman
Do you have a hope for, um... I don't know if you're familiar with the idea of a semantic web? So, semantic web, just for people who are not familiar, is, uh, is the idea of being able to convert the internet (laughs) or- or be able to attach like semantic meaning to the words on the internet, the s- the sentences, the paragraphs, to be able to contr- convert information on the internet or some fraction of the internet into something that's interpretable by machines. That was kind of a dream for, um, I think, the- the semantic web papers in the '90s.
- FCFrançois Chollet
Mm-hmm.
- LFLex Fridman
It's kind of the dream that, you know, the internet is full of rich, exciting information. Even in just looking at Wikipedia, we should be able to use that as data for machines.
- FCFrançois Chollet
Yeah.
- LFLex Fridman
And so far-
- FCFrançois Chollet
And that information is not, is not really in a format that's available to machines. So, no, I don't think the semantic web will ever work simply because it would be a lot of work, right, to make, uh... to provide that information in structured form. And there is not really any incentive for anyone to provide that work. Uh, so I think the- the way forward to make, um, the knowledge on the web available to machines is actually, uh, uh, something closer to unsupervised deep learning.
- LFLex Fridman
Yeah.
- FCFrançois Chollet
So, GPT-3 is actually a bigger step in the direction of making the knowledge of the web available to machines than the semantic web was.
- LFLex Fridman
Yeah. Perhaps. In- in a human-centric sense, it- it feels like GPT-3 hasn't learned anything that could be used to reason. But, uh, that might be just the early days.
- FCFrançois Chollet
Yeah, I think that's correct. I think the forms of reasoning that you see it perform are basically just reproducing patterns that it has seen in string data. So of course, if you train on the entire web, then you can produce an illusion of reasoning in many different situations, but it will break down if it's presented with a novel situation.
- LFLex Fridman
That's the open question between the illusion of reasoning and actual reasoning, yeah.
- FCFrançois Chollet
Yes. The power to adapt to something that is genuinely new. Because the thing is, imagine you could train on every bit of data ever generated in the history of humanity. That model would be capable of anticipating many different possible situations, but it remains that the future is gonna be something different. For instance, if you trained a GPT-3 model on data from the year 2002 and then used it today, it would be missing many things. It would be missing many common-sense facts about the world. It would even be missing vocabulary, and so on.
- LFLex Fridman
Yeah, it's interesting that GPT-3 doesn't even have, I think, any information about the coronavirus. (laughs)
- FCFrançois Chollet
Yes. Which is why, you know, you tell that a system is intelligent when it's capable of adapting. So intelligence is gonna require some amount of continuous learning, but it's also gonna require some amount of improvisation. It's not enough to assume that what you're gonna be asked to do is something you've seen before, or something that is a simple interpolation of things you've seen before.
- LFLex Fridman
Yeah.
- FCFrançois Chollet
In fact, that model breaks down for even tasks that look relatively simple from a distance, like L5 self-driving, for instance. Google had a paper a couple of years back showing that something like 30 million different road situations were actually completely insufficient to train a driving model. It wasn't even L2, right? And that's a lot of data. That's a lot more data than the 20 or 30 hours of driving that a human needs to learn to drive, given the knowledge they've already accumulated.
- 57:22 – 1:09:30
Autonomous driving
- LFLex Fridman
Well, let me ask you on that topic: Elon Musk, Tesla Autopilot. It's one of the only companies, I believe, that is really pushing for a learning-based approach. Are you skeptical that that kind of network can achieve level four?
- FCFrançois Chollet
L4 is probably achievable. L5, probably not.
- LFLex Fridman
What's the distinction there? Is it that with L5 you can just fall asleep?
- FCFrançois Chollet
Yeah, L5 is basically human level.
- LFLex Fridman
Well, with driving we have to be careful saying human level, 'cause that's the most-
- FCFrançois Chollet
Yeah, there are all kinds of drivers. (laughs)
- LFLex Fridman
(laughs) Yeah, that's the clearest example of, like, you know, cars will most likely be much safer than humans in many situations where humans fail. It's the vice versa question.
- FCFrançois Chollet
So I'll tell you, you know, the thing is, the amount of training data you would need to anticipate pretty much every possible situation you run into in the real world is such that it's not entirely unrealistic to think that at some point in the future we'll develop a system that's trained on enough data, especially provided that we can simulate a lot of that data. We don't necessarily need actual cars on the road for everything. But it's a massive effort, and it turns out you can create a system that's much more adaptive, that can generalize much better, if you just add explicit models of the surroundings of the car, and if you use deep learning for what it's good at, which is to provide perceptual information. So in general, deep learning is a way to encode perception and a way to encode intuition, but it's not a good medium for any sort of explicit reasoning. And in AI systems today, strong generalization tends to come from explicit models, tends to come from abstractions in the human mind that are encoded in program form by a human engineer, right?
- LFLex Fridman
Yeah.
- FCFrançois Chollet
These are the abstractions that can actually generalize, not the sort of weak abstraction that is learned by a neural network.
- LFLex Fridman
Yeah, and the question is how much reasoning, how many strong abstractions, are required to solve particular tasks like driving. That's the question. Or human life, existence: how many strong abstractions does existence require? But more specifically on driving, that seems to be a coupled question about intelligence: how do you build an intelligent system, and, the coupled problem, how hard is this problem? How much intelligence does this problem actually require? So we get to cheat, right, 'cause we get to look at the problem. It's not like you close your eyes and come completely new to driving. We get to do what we do as human beings, which is, for the majority of our life before we ever learn, quote unquote, "to drive," we get to watch other cars and other people drive. We get to be in cars, we get to watch, we get to see movies about cars. We get to observe all that stuff. And that's similar to what neural networks are doing: getting a lot of data. And the question is, how many leaps of reasoning genius are required (laughs) to be-
- FCFrançois Chollet
Well-
- LFLex Fridman
... able to actually effectively drive.
- FCFrançois Chollet
I think with this example of driving, sure, you've seen a lot of cars in your life before you learned to drive. But let's say you've learned to drive in Silicon Valley and now you rent a car in Tokyo. Well, now everyone is driving on the other side of the road, and the signs are different, and the roads are more narrow, and so on. So it's a very, very different environment, and a smart human, even an average human, should be able to just zero-shot it, to just be operational-
- LFLex Fridman
Zero shot.
- FCFrançois Chollet
... in this very different environment-
- LFLex Fridman
Yeah.
- FCFrançois Chollet
... right away, despite having had no contact with the novel complexity contained in this environment, right? And that is novel complexity. It's not just interpolation over the situations that you've encountered previously, like learning to drive in the US, right?
- LFLex Fridman
I would say, the reason I ask is that driving is one of the most interesting tests of intelligence we have today, actively, in terms of having an impact on the world. When do you think we'll pass that test of intelligence?
- FCFrançois Chollet
So, I don't think driving is that much of a test of intelligence, because again, there is no task for which skill at that task demonstrates intelligence, unless it's a kind of meta-task that involves acquiring new skills. So I think you can actually solve driving without having any real amount of intelligence. For instance, if you really did have infinite training data, you could just literally train an end-to-end deep learning model to drive. The only problem with the whole idea is collecting a dataset that's sufficiently comprehensive, that covers the very long tail of possible situations you might encounter. And it's really just a scale problem. So I think there's nothing fundamentally wrong with this plan, with this idea. It's just that it strikes me as a fairly inefficient thing to do, because you run into this scaling issue with diminishing returns. Whereas if instead you took a more manual engineering approach, where you use deep learning modules in combination with engineering an explicit model of the surroundings of the car, and you bridge the two in a clever way, your model will actually start generalizing much earlier and more effectively than the end-to-end deep learning model. So why would you not go with the more manual, engineering-oriented approach? And even if you created either system, the end-to-end deep learning model trained on infinite data or the slightly more manual system, I don't think achieving L5 would demonstrate general intelligence, or intelligence of any generality at all. Again, the only possible test of generality in AI would be a test that looks at skill acquisition over unknown tasks.
- LFLex Fridman
But then-
- FCFrançois Chollet
For instance, you could take your L5-
- LFLex Fridman
Mm-hmm.
- FCFrançois Chollet
... driver and ask it to learn to pilot a commercial airplane, for instance. And then you would look at how much human involvement is required and how much-
- LFLex Fridman
Right.
- FCFrançois Chollet
... training data is required for the system to learn to pilot an airplane. And that gives you a measure of how intelligent that system really is.
- LFLex Fridman
Yeah. Well, I mean, that's a big leap, I get you. But I'm more interested in it as a problem. To me, driving is a black box that can generate novel situations at some rate, what people call edge cases. So it does have newness that keeps coming; we're confronted with it, let's say, once a month.
- FCFrançois Chollet
It is a very long tail. Yes.
- LFLex Fridman
It's a long tail.
- FCFrançois Chollet
That doesn't mean you cannot solve it just by training a statistical model on lots of data.
- 1:09:30 – 1:13:59
Tests of intelligence
- LFLex Fridman
So, one of the other big things you talk about in the paper, and we've talked about it a little bit already, but let's talk about it some more, is the actual tests of intelligence. If we look at human and machine intelligence, do you think tests of intelligence should be different for humans and machines? Or are these fundamentally the same kind of intelligences that we're after, and therefore the tests should be similar?
- FCFrançois Chollet
So if your goal is to create AIs that are more human-like, then it would be super valuable, obviously, to have a test that's universal, that applies to both AIs and humans, so that you could establish a comparison between the two, so you could tell exactly how intelligent, in terms of human intelligence, a given system is. That said, the constraints that apply to artificial intelligence and to human intelligence are very different, and your test should account for this difference, because with artificial systems it's always possible for an experimenter to buy arbitrary levels of skill at arbitrary tasks, either by injecting hardcoded prior knowledge into the system, via rules and so on that come from the human mind, from the minds of the programmers, or by buying higher levels of skill just by training on more data.
- LFLex Fridman
Mm-hmm.
- FCFrançois Chollet
For instance, you could generate an infinity of different Go games and train a Go-playing system that way, but you could not directly compare it to human Go-playing skills, because a human who plays Go had to develop that skill in a very constrained environment. They had a limited amount of time, they had a limited amount of energy, and of course they started from a different set of priors. They started from innate human priors. So I think if you want to compare the intelligence of two systems, like the intelligence of an AI and the intelligence of a human, you have to control for priors. You have to start from the same set of knowledge priors about the task, and you have to control for experience, that is to say, for training data.
- LFLex Fridman
So, priors: what are priors?
- FCFrançois Chollet
A prior is whatever information you have about a given task before you start learning about that task.
- LFLex Fridman
And how's that different from experience?
- FCFrançois Chollet
Well, experience is acquired, right? For instance, if you're trying to play Go, your experience with Go is all the Go games you've played, or you've seen, or you've simulated in your mind, let's say. And your priors are things like: Go is a game on a 2D grid, and we have lots of hardcoded priors about the organization of 2D space.
- LFLex Fridman
And so the rules of the dynamics, the physics of this game in this 2D space?
- FCFrançois Chollet
Yes.
- LFLex Fridman
And the idea of what winning is. (laughs)
- FCFrançois Chollet
Yes, exactly.
- LFLex Fridman
So like-
- FCFrançois Chollet
And all other board games also share some similarities with Go, and if you've played those board games, then with respect to the game of Go, that would be part of your priors about the game.
- LFLex Fridman
Well, it's interesting to think about how many priors are actually brought to the table in the game of Go. When you look at self-play reinforcement-learning-based mechanisms that do the learning, it seems like the number of priors is pretty low.
- FCFrançois Chollet
Yes.
- LFLex Fridman
But you're saying you should be ex- you should be-
- FCFrançois Chollet
There is a 2D spatial prior, though, in the convnet.
- LFLex Fridman
Right. But you should be clear about making those priors explicit.
- FCFrançois Chollet
Yes. So in particular, I think if your goal is to measure a human-like form of intelligence, then you should clearly establish that you want the AI you're testing to start from the same set of priors that humans start with.
- LFLex Fridman
Right. So,
- 1:13:59 – 1:27:18
Tests of human intelligence
- LFLex Fridman
I mean, to me personally, but I think to a lot of people, the human side of things is very interesting. So, testing intelligence for humans: what do you think is a good test of human intelligence?
- FCFrançois Chollet
Well, that's the question that psychometrics is interested in. And-
- LFLex Fridman
What's-
- FCFrançois Chollet
... there's an entire subfield of psychology that deals with this question.
- LFLex Fridman
So, what's psychometrics?
- FCFrançois Chollet
So, psychometrics is the subfield of psychology that tries to measure or quantify aspects of the human mind: in particular, cognitive abilities, intelligence, and personality traits as well.
- LFLex Fridman
So, it might be a weird question, but what are, like, the first principles that psychometrics operates on? (laughs) What are the priors it brings to the table?
- FCFrançois Chollet
So, it's a field with a fairly long history. You know, psychology sometimes gets a bad reputation for not having very reproducible results and so on. Psychometrics actually has some fairly solidly reproducible results. So the ideal goals of the field are: a test should be reliable, which is a notion tied to reproducibility. It should be valid, meaning that it should actually measure what you say it measures. For instance, if you're saying that you're measuring intelligence, then your test results should be correlated with things that you expect to be correlated with intelligence, like success in school or success in the workplace, and so on. It should be standardized, meaning that you can administer your test to many different people in the same conditions. And it should be free from bias, meaning that, for instance, if your test involves the English language, then you have to be aware that this creates a bias against people who have English as their second language, or people who can't speak English at all. Of course, these principles for creating psychometric tests are very much an ideal. I don't think every psychometric test is really reliable, valid, and free from bias. But at least the field is aware of these weaknesses and is trying to address them.
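The reliability and validity criteria Chollet lists are usually operationalized as correlations. A minimal sketch, with made-up scores (the data and the 0.9 thresholds are illustrative assumptions, not psychometric standards): reliability as the correlation between two administrations of the same test, validity as the correlation with an external criterion the test claims to predict.

```python
import numpy as np

# Hypothetical scores for six test takers (made-up data for illustration).
first_administration = np.array([95, 110, 102, 130, 88, 121])
second_administration = np.array([97, 108, 105, 128, 90, 119])  # same test, weeks later
school_gpa = np.array([2.9, 3.4, 3.1, 3.9, 2.6, 3.7])  # external criterion

# Reliability: does the test agree with itself across administrations?
reliability = np.corrcoef(first_administration, second_administration)[0, 1]

# Validity: does the test correlate with an outcome it claims to predict?
validity = np.corrcoef(first_administration, school_gpa)[0, 1]

print(f"test-retest reliability: {reliability:.3f}")
print(f"criterion validity:      {validity:.3f}")
```

With these toy numbers both correlations come out very high; in real psychometrics, reliability and validity coefficients are estimated at scale and rarely look this clean.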
- LFLex Fridman
So, it's kind of interesting. Ultimately, like you said previously, you're only able to measure skill, but you're trying to do a bunch of measures of different skills that correlate-
- FCFrançois Chollet
Yes.
- LFLex Fridman
... as you mentioned strongly with some general concept of cognitive ability.
- FCFrançois Chollet
Yes. Yes.
- LFLex Fridman
So, what's the g-factor?
- FCFrançois Chollet
So, right. There are many different kinds of tests of intelligence, and each of them is interested in different aspects of intelligence. Some of them deal with language, some with spatial vision, maybe mental rotations, numbers, and so on. When you run these very different tests at scale, what you start seeing is that there are clusters of correlations among test results. For instance, if you look at homework at school, you will see that people who do well at math are also statistically likely to do well in physics.
- LFLex Fridman
Mm-hmm.
- FCFrançois Chollet
And what's more, people who do well at math and physics are also statistically likely to do well in things that sound completely unrelated, like writing an English essay, for instance.
- LFLex Fridman
Mm-hmm.
- FCFrançois Chollet
And so when you see clusters of correlations, in statistical terms, you would explain them with a latent variable. And the latent variable that would, for instance, explain the relationship between being good at math and being good at physics would be cognitive ability, right?
- LFLex Fridman
Mm-hmm.
- FCFrançois Chollet
And the g factor is the latent variable that explains the fact that results on every test of intelligence you can come up with end up being correlated. So there is some single, unique variable that explains this correlation: that's the g factor. So it's a statistical construct. It's not really something you can directly measure, for instance, in a person.
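The latent-variable picture just described can be sketched numerically. A toy simulation (the single-factor model, the 0.8/0.6 loadings, and the test names are assumptions for illustration, not real psychometric data): generate test scores that all load on one hidden ability, then check that every pair of tests is positively correlated and that one factor accounts for most of the shared variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 2000

# Toy latent-factor model: each person has a single hidden ability g,
# and each of four tests (say math, physics, verbal, spatial) measures
# g plus independent noise.
g = rng.normal(size=n_people)
tests = np.column_stack(
    [0.8 * g + 0.6 * rng.normal(size=n_people) for _ in range(4)]
)

# All pairwise correlations come out positive (the "positive manifold").
corr = np.corrcoef(tests, rowvar=False)

# The leading eigenvalue of the correlation matrix approximates the share
# of variance a single latent variable (a "g factor") accounts for.
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sort descending
g_share = eigenvalues[0] / eigenvalues.sum()
print(f"variance explained by first factor: {g_share:.0%}")
```

This is the reverse direction of real psychometrics, which starts from observed scores and infers the factor structure, but it shows why correlated clusters are naturally summarized by one latent variable.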
- LFLex Fridman
But it's there.
- FCFrançois Chollet
But it's there. It's there at scale. And that's also one thing I want to mention about psychometrics. You know, when you talk about measuring intelligence in humans, for instance, some people get a little bit worried. They will say, "No, that sounds dangerous. Maybe that sounds potentially discriminatory," and so on. And they're not wrong. The thing is, personally, I'm not interested in psychometrics as a way to characterize one individual person. If I get your psychometric personality assessment or your IQ, I don't think that actually tells me much about you as a person. I think psychometrics is most useful as a statistical tool, so it's most useful at scale. It's most useful when you start getting test results for a large number of people and you start cross-correlating these test results, because that gives you information about the structure of the human mind, in particular about the structure of human cognitive abilities. So at scale, psychometrics paints a certain picture of the human mind, and that's what's interesting, and that's what's relevant to AI: the structure of human cognitive abilities.
- LFLex Fridman
Yeah, it gives you an insight into... I mean, to me, I remember when I learned about the g factor, it seemed like it would be impossible for it even to be real, even as a statistical variable. It felt kind of like astrology, like wishful thinking among psychologists. But the more I learned, I realized that there is some... I mean, I'm not sure what to make of the fact that, for human beings, the g factor is a thing, that there's a commonality across all of the human species-
- FCFrançois Chollet
Yes.
- LFLex Fridman
... that there does seem to be a strong correlation between cognitive abilities.
- FCFrançois Chollet
Yeah.
- LFLex Fridman
That's kind of fascinating, actually.
- FCFrançois Chollet
Yeah. So, human cognitive abilities have a structure. The most mainstream theory of the structure of cognitive abilities is called CHC theory: Cattell, Horn, Carroll, the names of the three psychologists who contributed key pieces of it. And it describes cognitive abilities as a hierarchy with three levels. At the top, you have the g factor. Then you have broad cognitive abilities, for instance fluid intelligence, that encompass a broad set of possible kinds of tasks that are all related. And then you have narrow cognitive abilities at the last level, which is closer to task-specific skill. There are actually different theories of the structure of cognitive abilities; they emerge from different statistical analyses of IQ test results, but they all describe a hierarchy with a kind of g factor at the top. And you're right that the g factor is not quite real in the sense that it's not something you can observe and measure like your height, for instance. But it's real in the sense that you see it in a statistical analysis of the data, right? One thing I want to mention is that the fact that there is a g factor does not really mean that human intelligence is general in a strong sense. It does not mean human intelligence can be applied to any problem at all, and that someone who has a high IQ is gonna be able to solve any problem at all. That's not quite what it means. I think one popular analogy to understand this is the sports analogy. If you consider the concept of physical fitness, it's a concept that's very similar to intelligence, because it's a useful concept, something you can intuitively understand. Some people are fit, maybe like you. Some people are not as fit, maybe like me.
- LFLex Fridman
But none of us can fly. (laughs)
- FCFrançois Chollet
Absolutely. So-
- 1:27:18 – 1:35:59
IQ tests
- LFLex Fridman
You know, IQ tests are kind of an exciting topic for people, even outside of artificial intelligence. There's Mensa, I think it is, whatever; there are different degrees of difficulty for questions. We talked about this offline a little bit too, about difficult questions. What makes a question on an IQ test more difficult or less difficult, do you think?
- FCFrançois Chollet
So, the thing to keep in mind is that there's no such thing as a question that's intrinsically difficult. It has to be difficult with respect to the things you already know and the things you can already do, right? So an IQ test question would typically be structured, for instance, as a set of demonstration input and output pairs. And then you would be given a test input, a prompt, and you would need to recognize or produce the corresponding output. And in that narrow context, you could say a difficult question is one where the input prompt is very surprising and unexpected given the training examples.
- LFLex Fridman
Just even in the nature of the patterns that you're observing in the input prompt.
- FCFrançois Chollet
Yes. For instance, let's say you have a rotation problem: you must rotate a shape by 90 degrees. If I give you two examples and then I give you one prompt which is actually one of the two training examples, then there is zero generalization difficulty for the task. It's actually a trivial task: you just recognize that it's one of the training examples and you produce the same answer. Now, if it's a more complex shape, there is, you know, a little bit more generalization, but it remains that you're still doing the same thing at test time as you were being demonstrated at training time. A difficult task is a task that requires some amount of test-time adaptation, some amount of improvisation. Right? So consider, I don't know, you're teaching a class on, like, quantum physics or something. If you wanted to really test the understanding the students have of the material, you would come up with an exam that's very different from anything they've seen, like, on the internet when they were cramming. On the other hand, if you wanted to make it easy, you would just give them something very similar to the mock exams they've already taken, something that's just a simple interpolation of questions they've already seen. And so that would be an easy exam. It's very similar to what you've been trained on. And a difficult exam is one that truly probes your understanding, because it forces you to improvise, it forces you to do things that are different from what you were exposed to before. So, that said, it doesn't mean that the exam that requires improvisation is intrinsically hard, right? Because maybe you're a quantum physics expert, so when you take the exam, this is stuff that, despite being new to the students, is not new to you, right?
So a question can only be difficult with respect to what the test taker already knows and with respect to the information the test taker has about the task. That's what I mean by controlling for priors: the information you bring to the table, right?
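The test format described here (demonstration input/output pairs plus a test prompt) can be made concrete. A toy sketch, using a hypothetical 90-degree-rotation task on tiny grids (the grids, the `memorizer` solver, and the rule are all invented for illustration): a prompt that repeats a training example has zero generalization difficulty, because pure recall solves it, while any genuinely novel prompt defeats recall and requires inferring the rule.

```python
import numpy as np

def rot90(grid):
    """The task's hidden rule: rotate the grid by 90 degrees."""
    return np.rot90(np.asarray(grid)).tolist()

# Demonstration pairs shown to the test taker.
train = [
    ([[1, 0], [0, 0]], rot90([[1, 0], [0, 0]])),
    ([[0, 2], [0, 0]], rot90([[0, 2], [0, 0]])),
]

def memorizer(test_input):
    """A zero-intelligence solver: just look the prompt up in the training set."""
    for x, y in train:
        if x == test_input:
            return y
    return None  # fails on anything genuinely new

# Prompt identical to a training example: trivially solved by recall.
assert memorizer([[1, 0], [0, 0]]) == rot90([[1, 0], [0, 0]])

# A novel prompt (even this tiny variation) defeats pure memorization;
# solving it requires inferring and re-applying the rotation rule.
assert memorizer([[3, 0], [0, 0]]) is None
```

This is essentially the framing behind the ARC tasks in On the Measure of Intelligence: difficulty is measured relative to what can be recalled or interpolated from the demonstrations.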
Episode duration: 2:34:20