Lex Fridman Podcast

Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation | Lex Fridman Podcast #376

Stephen Wolfram is a computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, the company behind Wolfram|Alpha, Wolfram Language, and the Wolfram Physics and Metamathematics projects.

Please support this podcast by checking out our sponsors:
- MasterClass: https://masterclass.com/lex to get 15% off
- BetterHelp: https://betterhelp.com/lex to get 10% off
- InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Stephen's Twitter: https://twitter.com/stephen_wolfram
Stephen's Blog: https://writings.stephenwolfram.com
Wolfram|Alpha: https://www.wolframalpha.com
A New Kind of Science (book): https://amzn.to/30XoEun
Fundamental Theory of Physics (book): https://amzn.to/30XbAoT
Blog posts:
A 50-Year Quest: https://bit.ly/3NQbZ2P
What Is ChatGPT Doing: https://bit.ly/3VOwtuz

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
1:33 - WolframAlpha and ChatGPT
21:14 - Computation and nature of reality
48:06 - How ChatGPT works
1:47:48 - Human and animal cognition
2:01:07 - Dangers of AI
2:09:27 - Nature of truth
2:30:49 - Future of education
3:06:51 - Consciousness
3:15:50 - Second Law of Thermodynamics
3:39:23 - Entropy
3:52:23 - Observers in physics
4:09:15 - Mortality

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Stephen Wolfram (guest), Lex Fridman (host)
May 9, 2023 · 4h 14m

EVERY SPOKEN WORD

  1. 0:00 – 1:33

    Introduction

    1. SW

      You know, I can tell ChatGPT, "Create a piece of code," and then just run it on my computer. And that sort of personalizes for me the "what could possibly go wrong," so to speak.

    2. LF

      Was that exciting or scary, that possibility?

    3. SW

      It was a little bit scary actually, because it's kind of like, if you do that, right?

    4. LF

      Yeah.

    5. SW

      What is the sandboxing that you should have? And that's a version of that question for the world. That is, as soon as you put the AIs in charge of things, how many constraints should there be on these systems before you put the AIs in charge of all the weapons and all these different kinds of systems?

    6. LF

      Well, here's the fun part about sandboxes: the AI knows about them and has the tools to crack them. The following is a conversation with Stephen Wolfram, his fourth time on this podcast. He's a computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, the company behind Mathematica, Wolfram|Alpha, Wolfram Language, and the Wolfram Physics and Metamathematics projects. He has been a pioneer in exploring the computational nature of reality. And so he's the perfect person with whom to explore the new, quickly evolving landscape of large language models as human civilization journeys toward building superintelligent AGI. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Stephen Wolfram.

  2. 1:33 – 21:14

    WolframAlpha and ChatGPT

    1. LF

      You announced the integration of ChatGPT with Wolfram|Alpha and Wolfram Language. So let's talk about that integration. What are the key differences, at a high philosophical level, and maybe the technical level, between the capabilities of, broadly speaking, the two kinds of systems: large language models, and the gigantic computational infrastructure that is Wolfram|Alpha?

    2. SW

      Yeah. So what does something like ChatGPT do? It's mostly focused on making language like the language that humans have made and put on the web and so on.

    3. LF

      Yeah.

    4. SW

      So, you know, its primary underlying technical thing is: you've given it a prompt, and it's trying to continue that prompt in a way that's somehow typical of what it's seen, based on a trillion words of text that humans have written on the web. And the way it's doing that is with something which is probably quite similar to the way we humans do the first stages of that, using a neural net and so on, and just saying, "Given this piece of text, let's ripple through the neural net and get one word of output at a time." And it's kind of a shallow computation on a large amount of training data, which is what we humans have put on the web. That's a different thing from the computational stack that I've spent the last, I don't know, 40 years or so building, which has to do with what you can compute in many steps, potentially a very deep computation. It's not taking the statistics of what we humans have produced and trying to continue things based on those statistics. Instead, it's trying to take the formal structure that we've created in our civilization, whether it's from mathematics or from systematic knowledge of all kinds, and use that to do arbitrarily deep computations, to figure out things that aren't just "let's match what's already been said on the web," but "let's potentially be able to compute something new and different that's never been computed before." So as a practical matter, our goal is to have made as much as possible of the world computable, in the sense that if there's a question that in principle is answerable from some sort of expert knowledge that's been accumulated, we can compute the answer to that question. And we can do it in a reliable way that's the best one can do given the expertise that our civilization has accumulated.
It's a much more labor-intensive thing on the side of creating the computational system to do that. Obviously, in the ChatGPT world, it's: take things which were produced for quite other purposes, namely all the things we've written out on the web and so on, and forage from that things which are like what's been written on the web. So from a practical point of view, I view the ChatGPT thing as being wide and shallow, and what we're trying to do with building out computation as being deep, also broad, but most importantly deep. I think another way to think about this is, if you go back in human history, I don't know, 1,000 years or something, and you say, "What's the typical person going to figure out?" Well, the answer is there are certain kinds of things that we humans can quickly figure out. That's what our neural architecture and the kinds of things we learn in our lives let us do. But then there's this whole layer of formalization that got developed, which is the whole story of intellectual history and the whole depth of learning. That formalization turned into things like logic, mathematics, science, and so on. And that's the kind of thing that allows one to build these towers of things you work out.
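The "one word at a time" loop Wolfram describes can be sketched in a few lines. This is a toy illustration only: the `bigram_probs` table and its words are invented stand-ins for the neural net, which in a real LLM conditions on the whole preceding context rather than just the last word.

```python
import random

# Invented stand-in for a language model: for each word, a distribution
# over possible next words. A real LLM computes these probabilities
# with a neural net over the entire preceding text.
bigram_probs = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def continue_prompt(prompt, n_words, seed=0):
    """Generate one word at a time, sampling from the model's
    distribution for the current last word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = bigram_probs.get(words[-1])
        if options is None:
            break  # no known continuation for this word
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_prompt("the", 4))
```

The deep-computation contrast Wolfram draws is that nothing in this loop works anything out; it only continues text in a statistically typical way.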

    5. LF

      Mm-hmm.

    6. SW

      It's not just, "I can immediately figure this out." It's, "No, I can use this kind of formalism to go step by step and work out something which was not immediately obvious to me." And that's kind of the story of what we're trying to do computationally: to be able to build those tall towers of what implies what implies what, and so on. As opposed to the kind of, "Yes, I can immediately figure it out; it's just like what I saw somewhere else, in something that I heard or remembered," or something like this.

    7. LF

      What can you say about the kind of formal foundation you can build such a formal structure on? What kinds of things would you start with in order to build this kind of deep computable knowledge?

    8. SW

      So the question is, how do you think about computation? And there are a couple of points here. One is what computation intrinsically is like, and the other is what aspects of computation we humans, with our minds and with the kinds of things we've learnt, can relate to in that computational universe. So if we start with what computation can be like, it's something I've spent some big chunk of my life studying. We usually write programs where we kind of know what we want the program to do, and we carefully write many lines of code, and we hope that the program does what we intended it to do. But the thing I've been interested in is just looking at the natural science of programs. So you say, "I'm gonna make this program. It's a really tiny program. Maybe I even pick the pieces of the program at random." And by really tiny, I mean less than a line of code type thing. You say, "What does this program do?" And you run it. And the big discovery that I made in the early '80s is that even extremely simple programs, when you run them, can do really complicated things. It really surprised me. It took me several years to realize that that was a thing, so to speak. But that realization, that even very simple programs can do incredibly complicated things that we very much don't expect, that discovery... I realized that that's very much, I think, how nature works. That is, nature has simple rules, but yet does all sorts of complicated things that we might not expect. You know, a big thing of the last few years has been understanding that that's how the whole universe and physics works. But that's a quite separate topic. So there's this whole world of programs and what they do, and very rich, sophisticated things that these programs can do.
      But when we look at many of these programs, we look at them and say, "Well, that's kind of... I don't really know what that's doing." It's not a very human kind of thing. So on the one hand, we have what's possible in the computational universe. On the other hand, we have the kinds of things that we humans think about, the kinds of things that are developed in our intellectual history. Really, the challenge of making things computational is to connect what's computationally possible out in the computational universe with the things that we humans typically think about with our minds. Now, that's a complicated kind of moving target, because the things that we think about change over time. We've learnt more stuff. We've invented mathematics. We've invented various kinds of ideas and structures and so on. So it's gradually expanding. We're gradually colonizing more and more of this intellectual space of possibilities. But the real challenge is, how do you take what is computationally possible, how do you encapsulate the kinds of things that we think about, in a way that plugs into what's computationally possible? And actually, the big idea there is this idea of symbolic programming, symbolic representations of things. And so the question is, when you look at everything in the world, you take some visual scene you're looking at and you say, "Well, how do I turn that into something that I can stuff into my mind?" There are lots of pixels in my visual scene, but the things that I remembered from that visual scene are, you know, there's a chair in this place. It's a kind of symbolic representation of the visual scene.
      There are two chairs and a table or something, rather than there are all these pixels arranged in all these detailed ways. And so the question then is, how do you take all the things in the world and make some kind of representation that corresponds to the ways that we think about things? And human language is one form of representation that we have. We talk about chairs; that's a word in human language and so on. But human language is not, in and of itself, something that plugs in very well to computation. It's not something from which you can immediately compute consequences and so on. And so you have to find a way to take the stuff we understand from human language and make it more precise. And that's really the story of symbolic programming. And what that turns into is something which I didn't know at the time was going to work as well as it has. But back in 1979 or so, I was trying to build my first big computer system, and trying to figure out how I should represent computations at a high level. And I kind of invented this idea of using symbolic expressions, structured as a function and a bunch of arguments, but that function doesn't necessarily evaluate to anything. It's just a thing that sits there representing a structure. And building up that structure... it's turned out that structure seems to be a good match for the way that we humans conceptualize higher-level things. And for the last, I don't know, 45 years or something, it's served me remarkably well.

      LF: But what can you say about abstractions here? Because with your physics project, you could start at a hypergraph, at a very, very low level, and build up everything from there. But you don't. You take shortcuts. You take the highest level of abstraction, the kind of abstraction that's convertible to something computable using symbolic representation, and that's your new foundation for that little piece of knowledge. And somehow all of that is integrated.

      SW: Right. So a very important phenomenon that I've realized, one of these things that in the future of kind of everything is going to become more and more important, is this phenomenon of computational irreducibility. And the question is, if you know the rules for something, you have a program, you're gonna run it, you might say, "I know the rules. Great, I know everything about what's gonna happen." Well, in principle you do, because you can just run those rules out and see what they do. You might run them a million steps and see what happens, et cetera. The question is, can you immediately jump ahead and say, "I know what's gonna happen after a million steps-"
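The head-plus-arguments idea Wolfram describes, a function that "doesn't necessarily evaluate to anything," can be sketched in Python. The class and example expression below are illustrative only, not Wolfram Language's actual implementation:

```python
class Expr:
    """A symbolic expression: a head plus arguments. It need not
    evaluate to anything; it just sits there representing structure,
    much as f[x, y] does in a symbolic language."""
    def __init__(self, head, *args):
        self.head, self.args = head, args
    def __repr__(self):
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

# Build up nested structure without evaluating anything:
expr = Expr("Plus", Expr("Times", 2, "x"), Expr("Power", "x", 3))
print(expr)  # Plus[Times[2, 'x'], Power['x', 3]]
```

The point is that the structure itself is the representation; evaluation rules can be layered on later, or not at all.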

    9. LF

      Yeah.

    10. SW

      ... and the answer is 13 or something.

    11. LF

      Yes.

    12. SW

      And one of the very critical things to realize is, if you could reduce that computation, there is, in a sense, no point in doing the computation.

    13. LF

      Yeah.

    14. SW

      The place where you really get value out of doing a computation is when you had to do the computation to find out the answer. But this phenomenon, that you have to do the computation to find out the answer, this phenomenon of computational irreducibility, seems to be tremendously important for thinking about lots of kinds of things. So one of the things that happens is, okay, you've got a model of the universe at the low level, in terms of atoms of space, and hypergraphs, and rewriting of hypergraphs, and so on. And it's happening, you know, 10 to the 100 times every second, let's say. Well, you say, "Great, then we've nailed it. We know how the universe works." Well, the problem is, the universe can figure out what it's gonna do. It does those 10 to the 100 steps. But for us to work out what it's gonna do, we have no way to reduce that computation. The only way to see the result of the computation is to do it. And if we're operating within the universe, there's no opportunity to do that, 'cause the universe is doing it as fast as the universe can do it.
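Computational irreducibility is easy to demonstrate with one of Wolfram's own early objects of study, the Rule 30 cellular automaton, written here using the standard left XOR (center OR right) formulation of the rule. The rule is one line, yet no general shortcut is known for predicting the state after many steps; you have to run them:

```python
def rule30_step(cells):
    """One step of the Rule 30 cellular automaton (wrap-around edges):
    each cell's new value is left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# A one-line rule, but to learn the state after many steps,
# the only known general method is to run every step.
cells = [0] * 31
cells[15] = 1  # single black cell in the middle
for _ in range(15):
    cells = rule30_step(cells)
print("".join("#" if c else "." for c in cells))
```

Running it prints a jagged, complicated-looking row, which is exactly the "simple rules, complicated behavior" discovery described above.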

    15. LF

      Mm-hmm.

    16. SW

      And that's, you know, that's what's happening. So what we're trying to do, and a lot of the story of science and a lot of other kinds of things, is finding pockets of reducibility.

    17. LF

      Mm-hmm.

    18. SW

      That is, you could have a situation where everything in the world is full of computational irreducibility. We never know what's gonna happen next. The only way we can figure out what's gonna happen next is just let the system run and see what happens. So in a sense, the story of most kinds of science, of inventions, of a lot of kinds of things, is the story of finding these places where we can locally jump ahead.

    19. LF

      Mm-hmm.

    20. SW

      And one of the features of computational irreducibility is there are always pockets of reducibility. There are always places, an infinite number of places, where you can jump ahead. There's no way to jump completely ahead, but there are little patches, little places where you can jump ahead a bit.

    21. LF

      Mm-hmm.

    22. SW

      And I think, you know, we can talk about the physics project and so on, but I think the thing we realize is we exist in a slice of all the possible computational irreducibility in the universe. We exist in a slice where there's a reasonable amount of predictability. And in a sense, as we try and construct these higher levels of abstraction, symbolic representations and so on, what we're doing is finding these lumps of reducibility that we can attach ourselves to, and about which we can have fairly simple narrative things to say. Because in principle, I say, "What's gonna happen in the next few seconds?" Oh, there are these molecules moving around in the air in this room, and oh gosh, it's an incredibly complicated story, a whole computationally irreducible thing, most of which I don't care about.

    23. LF

      Mm-hmm.

    24. SW

      And most of it is, well, you know, the air's still gonna be here and nothing much is going to be different about it. And that's a kind of reducible fact about what is ultimately, at an underlying level, a computationally irreducible process.

    25. LF

      And life would not be possible if we didn't have a large number of such reducible pockets...

    26. SW

      Yes.

    27. LF

      ... pockets amenable to reduction into something symbolic.

    28. SW

      Yes, I think so.

    29. LF

      Okay.

    30. SW

      I mean, life in the way that we experience it... depending on what we mean by life, so to speak, the experience that we have of consistent things happening in the world. The idea of space, for example, where we can just say, "You're here, you move there," and it's kind of the same thing: it's still you in that different place, even though you're made of different atoms of space and so on. This idea that there's this level of predictability to what's going on, that's us finding a slice of reducibility in what is underneath a computationally irreducible system. And I think that's the thing which is actually my favorite discovery of the last few years: the realization that it is the interaction between the underlying computational irreducibility and our nature as observers, who have to key into computational reducibility.

  3. 21:14 – 48:06

    Computation and nature of reality

    2. LF

      ...so, I mean, this word "observer," it means something in quantum mechanics, it means something in a lot of places. It means something to us humans-

    3. SW

      Right.

    4. LF

      ...as conscious beings. So what's the importance of the observer? What is the observer, and what's the importance of the observer in the computational universe?

    5. SW

      So this question of what an observer is, what the general idea of an observer is, is actually one of my next projects, which got somewhat derailed by the current AI mania. But, um-

    6. LF

      Is there a connection there, or do you think the observer is primarily a physics phenomenon?

    7. SW

      Is it related to the whole AI thing?

    8. LF

      Yes.

    9. SW

      Yes, it is related. So one question is, what is a general observer?

    10. LF

      Mm-hmm.

    11. SW

      So, you know, we have an idea of what a general computational system is. We think about Turing machines, we think about other models of computation. There's a question: what is a general model of an observer? And there's kind of observers like us, which is the kind of observers we're interested in.

    12. LF

      Mm.

    13. SW

      You know, we could imagine an alien observer that deals with computational irreducibility and has a mind that's utterly different from ours, completely incoherent with what we're like. But the fact is that, if we are talking about observers like us, one of the key things is this idea of taking all the detail of the world and being able to stuff it into a mind.

    14. LF

      Mm.

    15. SW

      Being able to take all the detail and extract out of it a smaller set of degrees of freedom, a smaller number of elements, that will fit in our minds. And so I've been interested in trying to characterize: what is the general observer? Let me give an example of a-

    16. LF

      Mm-hmm.

    17. SW

      ...you know, you have a gas, it's got a bunch of molecules bouncing around, and the thing you're measuring about the gas is its pressure.

    18. LF

      Mm-hmm.

    19. SW

      And the only thing you as an observer care about is pressure. And that means you have a piston on the side of this box and the piston is being pushed by the gas, and there are many, many different ways that molecules can hit that piston.

    20. LF

      Mm.

    21. SW

      But all that matters is the kind of aggregate of all those molecular impacts, 'cause that's what determines pressure.

    22. LF

      Mm-hmm.

    23. SW

      So there's a huge number of different configurations of the gas which are all equivalent. So I think one key aspect of observers is this equivalencing of many different configurations of a system, saying, "All I care about is this aggregate feature, all I care about is this overall thing." And that's one aspect. And we see that in lots of different places... Again, it's the same story over and over again: there's a lot of detail in the world, but what we are extracting from it is a thin summary of that detail.
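The equivalencing Wolfram describes can be sketched with a toy Monte Carlo (numbers invented for illustration): each seed gives a completely different detailed configuration of molecular impacts, but the aggregate the observer measures comes out essentially the same.

```python
import random

def pressure_reading(seed):
    """One 'microstate': many random molecular impact strengths.
    The observer only sees the aggregate (the mean impact)."""
    rng = random.Random(seed)
    impacts = [rng.uniform(0.0, 2.0) for _ in range(100_000)]
    return sum(impacts) / len(impacts)

# Five entirely different detailed configurations, one per seed,
# yet all equivalent as far as the pressure observer is concerned:
readings = [pressure_reading(seed) for seed in range(5)]
print([round(r, 1) for r in readings])  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

The detailed lists of impacts differ in every entry, but the observer's thin summary collapses that huge space of microstates into one number.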

    24. LF

      Is that thin summary nevertheless true? Or can it be a crappy approximation-

    25. SW

      Sure.

    26. LF

      ...that on average is correct? I mean, if we look at the observer that's the human mind, as represented by natural language for example, it seems like there's a lot of really crappy approximation.

    27. SW

      Sure.

    28. LF

      And that could be maybe a feature of it.

    29. SW

      Well, yes, but-

    30. LF

      That there's ambiguity.

  4. 48:06 – 1:47:48

    How ChatGPT works

    1. SW

      'Cause I think the real question is: why does ChatGPT work? How is it possible to successfully reproduce all these kinds of things in natural language with a comparatively small, he says, couple hundred billion weights of neural net and so on? And I think that relates to a fundamental fact about language: there's a structure to language that we haven't really explored very well, this semantic grammar I'm talking about. I mean, we know that human language has certain regularities. We know that it has a certain grammatical structure: noun followed by verb followed by noun, adjectives, et cetera, et cetera. That's its grammatical structure. But I think the thing that ChatGPT is showing us is that there's an additional kind of regularity to language, which has to do with the meaning of the language, beyond just this pure part-of-speech combination type of thing. And the one example of that we've had in the past is logic. My picture of how logic was invented, how logic was discovered... it really was a thing that was discovered, in its original conception. It was discovered, presumably by Aristotle, who listened to a bunch of people, orators, giving speeches. And this one made sense, that one doesn't make sense... And you see these patterns of, you know, if the Persians do this, then this does that-

    2. LF

      Mm-hmm.

    3. SW

      ... et cetera, et cetera, et cetera. And what, what Aristotle realized is there's a structure to those sentences, there's a structure to that rhetoric that doesn't matter whether it's the Persians and the Greeks, or whether it's the cats and the dogs.

    4. LF

      Mm-hmm.

    5. SW

      It's just, you know, P and Q. You can abstract away the details of these particular sentences. You can lift out this kind of formal structure, and that's what logic is.

    6. LF

      Th- that's a heck of a discovery, by the way, logic. You're making me realize now.

    7. SW

      Yeah.

    8. LF

      It's not obvious.

    9. SW

      The fact that there is an abstraction from natural language where you can fill in any word you want-

    10. LF

      Yeah.

    11. SW

      ... is a very interesting discovery. Now, it took a long time to mature. I mean, Aristotle had this idea of syllogistic logic, where there were these particular patterns of how you could argue things, so to speak.

    12. LF

      Mm-hmm.

    13. SW

      And, you know, in the Middle Ages, part of education was you memorized the syllogisms. I forget how many there were, 15 of them or something. And they all had names, they all had mnemonics. Like, I think Barbara and Celarent were two of the mnemonics for the syllogisms. And people would kind of say, "This is a valid argument 'cause it follows the Barbara syllogism," so to speak. And it took until 1830, with George Boole, to get beyond that and see that there was a level of abstraction beyond this particular template of a sentence, so to speak. And what's interesting there is, in a sense, ChatGPT is operating at the Aristotelian level. It's essentially dealing with templates of sentences. By the time you get to Boole and Boolean algebra and this idea that you can have arbitrarily deep nested collections of ands and ors and nots, and you can resolve what they mean, that's the kind of thing that's a computation story. You've gone beyond the pure templates of natural language to something which is an arbitrarily deep computation. But the thing that I think we realize from ChatGPT is, you know, Aristotle stopped too quickly.
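The step from sentence templates to "arbitrarily deep nested collections of ands and ors and nots" is exactly what a short recursive evaluator captures. A minimal sketch, with the tuple encoding of expressions chosen purely for illustration:

```python
def evaluate(expr, env):
    """Resolve an arbitrarily deep nested Boolean expression, given
    truth values for the variables. Expressions are tuples:
    ("and", a, b), ("or", a, b), ("not", a), or a variable name."""
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    if op == "and":
        return all(vals)
    if op == "or":
        return any(vals)
    if op == "not":
        return not vals[0]
    raise ValueError(f"unknown operator: {op}")

# (P and not Q) or Q, resolved step by step, to any nesting depth:
e = ("or", ("and", "P", ("not", "Q")), "Q")
print(evaluate(e, {"P": True, "Q": False}))  # True
```

Unlike a syllogism template, the same few rules resolve formulas of any depth; that is the "computation story" Boole's abstraction opened up.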

    14. LF

      Mm-hmm.

    15. SW

      And there was more that you could have lifted out of language as formal structures. And I think, in a sense, we've captured some of that. There are a lot of kind of little calculi, little algebras, of what you can say, of what language talks about. I mean, if you say, "I go from place A to place B, and from place B to place C," then I know I've gone from place A to place C. But if A is a friend of B and B is a friend of C, it doesn't necessarily follow that A is a friend of C. And if you go from place A to place B, and place B to place C, it doesn't matter how you went. Like logic, it doesn't matter whether you flew there, walked there, swam there, whatever; this transitivity of where you go is still valid. And there are many kinds of features of the way the world works that are captured in these aspects of language, so to speak. And I think what ChatGPT has effectively found, just like it discovered logic... you know, people are really surprised it can do these logical inferences... it discovered logic the same way Aristotle discovered logic: by looking at a lot of sentences, effectively, and noticing the patterns in those sentences.
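The transitivity Wolfram mentions is itself a little calculus you can compute with. A sketch, with hypothetical place names, that chains "A to B, B to C" facts until nothing new follows:

```python
def transitive_closure(pairs):
    """All (a, c) reachable by chaining pairs: if a->b and b->c,
    then a->c, repeated until nothing new is added."""
    closure = set(pairs)
    while True:
        new = {(a, d)
               for (a, b) in closure
               for (c, d) in closure
               if b == c}
        if new <= closure:
            return closure
        closure |= new

# "I went from A to B, and from B to C" -- so I went from A to C,
# no matter how I traveled:
trips = {("A", "B"), ("B", "C")}
print(("A", "C") in transitive_closure(trips))  # True

# Friendship doesn't obey this calculus: a closure like the one above
# would happily infer ('A', 'C'), but for friends that inference
# isn't valid, which is exactly the distinction drawn above.
```

Each little algebra of meaning, which relations chain and which don't, is the kind of regularity a "semantic grammar" would make explicit.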

    16. LF

      But it feels like it's discovering something much more complicated than logic, this kind of semantic grammar. I think you wrote about this. Maybe we can call it the laws of language, or, as I believe you call it, which I like, the laws of thought.

    17. SW

      Yes. That was the title that George Boole had for his Boolean algebra back in 1830. But yes, uh-

    18. LF

      Laws of thought?

    19. SW

      Yes. That was what he said.

    20. LF

      Whew.

    21. SW

      (laughs)

    22. LF

      All right.

    23. SW

      So he thought, he thought he nailed it with Boolean algebra.

    24. LF

      Yeah.

    25. SW

      There's more to it.

    26. LF

      I think it's a good question, how much more-

    27. SW

      Right.

    28. LF

      ... is there to it? And it seems like one of the reasons, as you imply, that ChatGPT works is that there's a finite number of things to it.

    29. SW

      Yeah. I mean, it's-

    30. LF

      Like, it's discovering the laws... In some sense, GPT is discovering these laws of semantic grammar that underlie language.

Episode duration: 4:14:33

Transcript of episode PdE-waSx-d8
