Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11
- 0:00 – 15:00
- LFLex Fridman
The following is a conversation with Jürgen Schmidhuber. He's the co-director of the Swiss AI lab IDSIA and the co-creator of long short-term memory networks. LSTMs are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed a lot of interesting, out-of-the-box ideas on meta-learning, adversarial networks, computer vision, and even a formal theory of, quote, "creativity, curiosity, and fun." This conversation is part of the MIT course on Artificial General Intelligence and the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfrid, spelled F-R-I-D. And now, here's my conversation with Jürgen Schmidhuber. Early on, you dreamed of AI systems that self-improve recursively. When was that dream born?
- JSJürgen Schmidhuber
When I was a baby. No, that's not true.
- LFLex Fridman
(laughs)
- JSJürgen Schmidhuber
When I was a teenager.
- LFLex Fridman
And what was the catalyst for that birth? What was the thing that first inspired you?
- JSJürgen Schmidhuber
When I was a boy, I was thinking about what to do in my life, and then I thought the most exciting thing is to solve the riddles of the universe, and that means you have to become a physicist. However, then I realized that there's something even grander: you can try to build a machine that isn't really a machine any longer, that learns to become a much better physicist than I could ever hope to be. And that's how I thought maybe I can multiply my tiny little bit of creativity into infinity.
- LFLex Fridman
But ultimately, that creativity will be multiplied to understand the universe around us? That's, that's the, the curiosity for that mystery that, that drove you?
- JSJürgen Schmidhuber
Yes. Uh, so if you can build a machine that learns to solve more and more complex problems and more and more general problem solver, then you basically ha-have, um, solved all the problems. At least all the solvable problems.
- LFLex Fridman
So how do you think, what is the mechanism for that kind of general solver look like? Because obviously we don't quite yet have one or know how to build one, but we have ideas and you have had throughout your career several ideas about it. So how do you think about that mechanism?
- JSJürgen Schmidhuber
So in the '80s, I thought about how to build this machine that learns to solve all these problems that I cannot solve myself and I thought it is clear it has to be a machine that not only learns to solve this problem here and this problem here, but it also has to learn to improve the learning algorithm itself.
- LFLex Fridman
Right.
- JSJürgen Schmidhuber
So it has to have the learning algorithm in, um, a representation that allows it to inspect it and modify it, such that it can come up with a better learning algorithm. So I call that meta-learning, learning to learn, and recursive self-improvement, um, that is really the pinnacle of that, where you not only learn, um, how to improve on that problem and on that, but you also improve the way the machine improves, and you also improve the way it improves the way it improves itself. And that was my 1987 diploma thesis, which was all about that hierarchy of meta-learners that have no computational limits except for the well-known limits, uh, that Gödel identified in 1931, and, uh, the limits of physics.
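As a toy illustration of that recursive idea (my own sketch, not the construction from the 1987 thesis): an outer loop treats the inner learner's own parameter, here just its learning rate, as something open to modification, and keeps a self-modification only if it demonstrably improves learning.

```python
import random

def inner_learn(lr, steps=20):
    """Plain gradient descent on f(x) = (x - 3)**2; returns the final loss."""
    x = 0.0
    for _ in range(steps):
        x -= lr * 2 * (x - 3)      # f'(x) = 2 * (x - 3)
    return (x - 3) ** 2

random.seed(0)
lr = 0.01                           # the learner's own modifiable parameter
best = inner_learn(lr)
for _ in range(100):                # meta-loop: modify the learner itself
    candidate = lr * (1 + random.uniform(-0.5, 0.5))
    loss = inner_learn(candidate)
    if loss < best:                 # keep a self-modification only if it helps
        lr, best = candidate, loss

print(f"learned learning rate {lr:.3f}, final loss {best:.2e}")
```

The point of the toy is only the structure: the thing being searched is not the solution to one problem, but the learning procedure itself, evaluated by the consequences of its own modifications.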
- LFLex Fridman
Mm-hmm. In the recent years, meta-learning has gained popularity in a v- in a specific kind of form. You've talked about how that's not really meta-learning w-with neural networks, that's more m- basic transfer learning. Can you talk about the difference between the big general meta-learning-
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
... and a more narrow sense of meta-learning the way it's used today, the way it's talked about today?
- JSJürgen Schmidhuber
Let's take the example of a deep neural network that has, uh, learned to classify images and maybe you have trained that, um, network on 100 different databases of images.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
And now a new database comes along and you want to quickly learn the new thing as well. So one simple way of doing that is you take the network which already knows 100 types of databases and then you just take the top layer of that and you retrain that, uh, using the new label data that you have in the new image database. And then it turns out that it really, really quickly can learn that too.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
One shot basically.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
Because from the first 100 datasets, it already has learned so much about, about computer vision that it can reuse that, and that is then almost good enough to solve the new task, except you need a little bit of, um, adjustment on the top. So that is transfer learning, and it has been done in principle for many decades. People have done similar things for decades. Meta-learning, true meta-learning, is about having the learning algorithm itself open to introspection by the system that is using it, and also open to modification, such that the learning system has an opportunity to modify any part of the learning algorithm and then evaluate the consequences of that modification and then learn from that, uh, to create a better learning algorithm, and so on recursively. So that's a very different animal, where you are opening the space of possible learning algorithms to the learning system itself.
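The retrain-the-top-layer recipe he describes can be sketched in a few lines of NumPy. Everything here is a stand-in: a fixed random projection plays the role of the network pretrained on the 100 databases, and a synthetic labeled set plays the new database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a network already trained on 100 image databases:
# a frozen feature extractor that is never updated below.
W_frozen = rng.normal(size=(20, 50)) / np.sqrt(20)
def features(x):
    return np.tanh(x @ W_frozen)

# A synthetic "new database" with fresh labels.
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float)

# Transfer learning: retrain only the top layer (logistic regression)
# on the new labeled data, reusing the frozen features.
F = features(X)
w, b = np.zeros(50), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(F @ w + b)))
    grad = p - y                      # d(cross-entropy)/d(logit)
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((F @ w + b > 0) == (y == 1)).mean()
print(f"accuracy after retraining only the top layer: {acc:.2f}")
```

Only `w` and `b` are trained; the "pretrained" features are reused untouched, which is exactly why the adaptation is cheap and fast.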
- LFLex Fridman
Right. So you've, uh, like in the 2004 paper, you described, uh, Gödel machines and programs that rewrite themselves.
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
Right? Philosophically and even in your paper mathematically, these are really compelling ideas. But practically, do you see these self-referential programs being successful in the near term to having an impact where sort of it demonstrates to the world that th- this direction is a, is a good one to pursue in the near term?
- JSJürgen Schmidhuber
Yes. We have these two different types of, um, fundamental research on, um, how to build a universal problem solver: one basically exploiting proof search and things like that, which you need to come up with asymptotically optimal, theoretically optimal self-improvers and problem solvers. However, one has to admit that through this proof search comes in an additive constant, an overhead, an additive overhead that vanishes in comparison to, uh, what you have to do to solve large problems.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
However, for many of the small problems that we want to solve in our everyday life, we cannot ignore this constant overhead and that's why we also have been, um, doing other things, non-universal things such as recurrent neural networks which are trained by gradient descent-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... and local search techniques which aren't universal at all, which aren't provably optimal at all like the other stuff that we did, but which are much more practical as long as we only want to solve the small problems that we are typically trying to solve in this environment here. Yeah, so the universal problem solvers-
- 15:00 – 30:00
- LFLex Fridman
You said that an AGI system will ultimately be a simple one. Uh, a general intelligence system will ultimately be a simple one, maybe a pseudocode of a few lines will be able to describe it. Can you talk through your intuition behind this idea, why you feel that a s- at its core, intelligence is a simple algorithm?
- JSJürgen Schmidhuber
Experience tells us that the stuff that works best is really simple. So the asymptotically optimal ways of solving problems, if you look at them, they're just a few lines of code. It's really true. Although they have these amazing properties, just a few lines of code. Then the most, mm, promising and most useful practical things maybe don't have this proof of optimality associated with them. However, they are also just a few lines of code. The most successful, mm, recurrent neural networks, you can write them down in five lines of pseudocode.
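For what it's worth, the claim is easy to check for a vanilla recurrent network: the entire forward recurrence fits in a few lines. This is a generic textbook formulation, not any particular published system.

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    """Vanilla RNN: the whole forward recurrence is the loop body below."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in xs:                        # one step per input in the sequence
        h = np.tanh(Wx @ x + Wh @ h + b)
        hs.append(h)
    return hs

rng = np.random.default_rng(1)
hs = rnn_forward(rng.normal(size=(4, 3)),   # a sequence of 4 inputs
                 rng.normal(size=(5, 3)),   # input weights
                 rng.normal(size=(5, 5)),   # recurrent weights
                 np.zeros(5))
print(len(hs), hs[-1].shape)
```

Training such a network (and the LSTM's gating machinery) adds more code, but the core computation really is this compact.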
- LFLex Fridman
Th- that's a beautiful, almost poetic idea, but w- what you're describing there is the s- the lines of pseudocode are sitting on top of layers and layers of abstractions in a sense.
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
So y- you're saying at the very top-
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
... it will be a beautifully written sort of, uh, algorithm.
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
But do you think that there's many layers of abstractions we have to first learn to construct?
- JSJürgen Schmidhuber
Yeah. Of course, we are building on all these, um, great abstractions that people have invented over the millennia, such as matrix multiplications and real numbers and basic arithmetics and calculus and derivatives of, um, error functions and stuff like that. So without that language that greatly simplifies our way of thinking about these problems, we couldn't do anything. So in that sense, as always, we are standing on the shoulders of the giants who, in the past, um, simplified the problem of problem-solving so much that now we have a chance to do the final step.
- LFLex Fridman
(laughs) So the final step will be a simple one. Uh, w- if we, if we take a step back through all of human civilization and just the universe in general (laughs) , uh, w- how do you think about evolution and what if creating a universe is required to achieve this final step?
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
What if going through the very painful and inefficient process of evolution is needed-
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
... to come up with this set of abstractions that ultimately lead to intelligence? Do you think there's a shortcut or do you think we have to create something like our universe in order to create something like human level intelligence?
- JSJürgen Schmidhuber
Hmm. So far the only example we have, uh-
- LFLex Fridman
(laughs)
- JSJürgen Schmidhuber
... is this one.
- LFLex Fridman
Yeah.
- JSJürgen Schmidhuber
This universe, um, in which we are living.
- LFLex Fridman
Do you think it can do better?
- JSJürgen Schmidhuber
Maybe not.
- LFLex Fridman
(laughs)
- JSJürgen Schmidhuber
But, um, we are part of this whole process.
- LFLex Fridman
Right.
- JSJürgen Schmidhuber
So... apparently, it might be the case that the code that runs the universe is really, really simple. Everything points to that possibility, because gravity and other basic forces are really simple laws that can be easily described in just a few lines of code, basically. And, uh, and then there are these other, um, events, the apparently random events in the history of the universe which, as far as we know at the moment, don't have a compact code. But who knows? Maybe somebody in the near future is going to figure out the pseudo-random generator which is, um, computing whether the measurement of that, um, spin-up-or-down thing here is going to be positive or negative.
- LFLex Fridman
Underlying quantum mechanics?
- JSJürgen Schmidhuber
Yes. So-
- LFLex Fridman
Do you ultimately think quantum mechanics is a, a pseudo-random number gen- so it's all deterministic? There's no randomness in our universe? Does God play dice?
- 30:00 – 45:00
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
W- and since they are trying to maximize the, um, rewards they get, they are suddenly motivated to come up with new action sequences, with new experiments that have the property that the data that is coming in as a consequence of these experiments has the property that they can learn something about, see a pattern in there which they hadn't seen yet before.
- LFLex Fridman
S- there's an idea of PowerPlay that you've described-... uh, uh, training a general problem solver in this kind of way of looking for the unsolved problems.
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
Can you describe that idea a little further?
- JSJürgen Schmidhuber
Yeah. It's another very simple idea. So normally, what you do in computer science, you have, um, you have some guy who gives you a problem, and then there is a, a huge, uh, search space of potential solution candidates. And you somehow try them out, and, um, you have more or less sophisticated ways of, mm, moving around in that search space until you finally find a solution, uh, which you consider satisfactory.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
That's what most of computer science is about. PowerPlay just goes one little step further and says, "Let's not only search for solutions to a given problem, but let's search among pairs of problems and their solutions, where the system itself has the opportunity to phrase its own problem." So we are looking suddenly at pairs of problems and their solutions, or, uh, modifications of the problem solver that is supposed to generate a solution to that, um, new problem. And this additional, um, degree of freedom allows us to build curious systems that are like scientists, in the sense that they not only try to find answers to existing questions, no, they are also free to, um, pose their own questions. So if you want to build an artificial scientist, we have to give it that freedom, and PowerPlay is exactly doing that.
- LFLex Fridman
So that's- that's a dimension of freedom that's important to have, but how do you th- how hard do you think that, how multi-dimensional and difficult the space of then coming up with your own questions is?
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
So it's w- as, it's one of the things that as human beings we, uh, consider to be the thing that makes us special-
- JSJürgen Schmidhuber
Mm.
- LFLex Fridman
... the intelligence that makes us special is that brilliant insight-
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
... that can create something totally new.
- JSJürgen Schmidhuber
Yes. So now let's look at the extreme case. Let's look at the set of all possible problems that you can formally describe-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... which is infinite. Which should be the next problem that a scientist or PowerPlay is going to solve? Well, it should be the easiest problem that goes beyond what you already know. So it should be the simplest problem that the current problem solver, which can already solve 100 problems, cannot solve yet by just generalizing. So it has to be new, and it has to require a modification of the problem solver such that the new problem solver can solve this new thing, but the old problem solver cannot do it. And in addition to that, we have to make sure that the problem solver doesn't forget any of the previous solutions.
- LFLex Fridman
Right.
- JSJürgen Schmidhuber
And so by definition, PowerPlay is now always trying to search in this pair of-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... in- in- in the set of pairs of problems and problem-solver modifications for a combination that, uh, minimizes the time-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... to achieve these criteria. So it's always trying to find the problem which is easiest to add to the repertoire.
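The acceptance test he describes, a pair of (new problem, solver modification) such that the modified solver handles the new problem, the old one couldn't, and nothing previously learned is forgotten, can be mocked up as a toy loop. Everything here is illustrative (problems are integers, the "solver" is a set of divisor rules), and PowerPlay's time-minimization criterion is ignored.

```python
import random

random.seed(0)

repertoire = []   # problems already solved; must never be forgotten
rules = set()     # the toy "problem solver": divisors it knows about

def solves(rule_set, n):
    return any(n % d == 0 for d in rule_set)

for _ in range(50):
    # Propose a pair: a candidate problem and a solver modification.
    n = random.randint(2, 30)
    candidate = rules | {random.randint(2, 9)}
    # Accept only if the problem is genuinely new, the modified solver
    # handles it, and every previously solved problem is still solved.
    if (not solves(rules, n) and solves(candidate, n)
            and all(solves(candidate, m) for m in repertoire)):
        rules = candidate
        repertoire.append(n)

print("added problems:", repertoire)
```

Each accepted pair grows the repertoire by exactly one problem that was just beyond the old solver's reach, which is the structural point of the scheme.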
- LFLex Fridman
So just like grad students and, uh, academics and researchers can spend their whole career in a local minima-
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
... stuck trying to, uh, come up with interesting questions-
- JSJürgen Schmidhuber
Mm-hmm.
- LFLex Fridman
... but ultimately doing very little, do you think it's easy, w- in this approach of looking for the simplest unsolved problem, to get stuck in a local minima and never really discovering new, uh, you know, really jumping outside of the 100 problems that you've already solved-
- 45:00 – 1:00:00
- JSJürgen Schmidhuber
use this model network of the world, this predictive model of the world, to plan ahead and say-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... "Let's not do this action sequence. Let's do this action sequence instead because it leads to more predicted rewards."
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
And whenever it's waking up these little subnetworks that stand for itself, then it's thinking about itself. Then it's thinking about itself and it's exploring mentally the consequences of its own actions. And, and now you tell me what is still missing, um, uh-
- LFLex Fridman
Missing the next, the, the gap to consciousness.
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
Uh, there, there isn't. That's a really beautiful idea that, um, you know, if, if life is a collection of data and, and life is a process of compressing that data to act, uh, efficiently, uh, in that data, you yourself appear very often. (laughs) So it's useful to, uh, form compressions of yourself. I mean, it's a really beautiful formulation of what consciousness is, is a necessary side effect. It's, uh, actually quite c- compelling to me. You've described RNNs, developed, uh, LSTMs, long short-term memory networks that are, they're a type of recurrent neural networks that have gotten a lot of success recently. So these are networks that model the temporal aspects in the data, t- temporal patterns in the data, and you've called them the deepest of the neural networks, right? So what do you think is the value of depth in the models that we use to learn?
- JSJürgen Schmidhuber
Yeah. Since you mentioned the long short-term memory and the LSTM, um, I have to mention the names of the brilliant students who made that possible.
- LFLex Fridman
Yes, of course, of course.
- JSJürgen Schmidhuber
Um, first of all, my first student ever, Sepp Hochreiter, who had fundamental insights already in his diploma thesis. Then Felix Gers, who had additional important contributions. Alex Graves is a guy from Scotland who, um, uh, is mostly responsible for this, uh, CTC algorithm which is now often used to, to train, uh, the LSTM to do the speech recognition on all the Google Android phones and whatever, um, and CRV and so on. So, um, uh, these guys, without these guys, um-
- LFLex Fridman
Yeah.
- JSJürgen Schmidhuber
... I would be nothing.
- LFLex Fridman
It's a lot of incredible work.
- JSJürgen Schmidhuber
What is now the depth? Uh, what is the importance of depth? Well, um, most problems in the real world are deep in the sense that, um, the current input doesn't tell you all you need to know about the environment.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
So instead, um, you have to have a memory of what happened in the past. And often important parts of that memory are dated. They are pretty old.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
So, um, when you're doing speech recognition, for example, and somebody says, "11,"-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... then that's about half a second or something like that-
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
... which means it's already 50 timesteps.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
And another guy or the same guy sa- says, "Seven," so the ending is the same, -even.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
But now the system has to see the distinction between "seven" and "eleven," and the only way it can see the difference is that it has to store that, uh, 50 steps ago, there was an "s" or an "el," a seven or an eleven. So there, you have already a problem of depth 50, because for each time step, you have something like a virtual, uh, layer in the expanded, unrolled version of this recurrent network which is doing the speech recognition. So these long time lags, they translate into problem depth, and most problems in this world are such that you really, um, have to look far back in time to understand what is the problem and to solve it.
- LFLex Fridman
But just like with LSTMs, you don't necessarily need to, when you look back in time, remember every aspect. You just need to r- remember the important aspects.
- JSJürgen Schmidhuber
That's right. The network has to learn to put the important stuff into memory and to ignore the unimportant noise.
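That division of labor, write the important stuff, forget the noise, is what the gates in an LSTM cell implement. A minimal single-step sketch in NumPy (the standard textbook formulation with input, forget, and output gates; dimensions here are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b hold the stacked parameters for the
    input gate (i), forget gate (f), output gate (o), and candidate (g)."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)   # forget part of the old memory, write new
    h = o * np.tanh(c)           # expose a gated view of the memory
    return h, c

rng = np.random.default_rng(0)
n, d = 8, 3                      # hidden size, input size
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(50, d)):   # 50 timesteps, as in the 7-vs-11 example
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)
```

The forget gate `f` is what lets the cell carry an early observation (the "s" versus "el") across many steps instead of overwriting it with later noise.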
- LFLex Fridman
So, but in that sense, deeper and deeper is better or is there a limitation? Is- is there... I mean, LSTM is- is one of the great examples of architectures that, uh, do something beyond just deeper and deeper networks. Uh, there's clever mechanisms for filtering data, for remembering and forgetting. Uh, so do you think th- that kind of thinking is necessary? If you think about LSTMs as a leap, a big leap forward over traditional vanilla RNNs, what do you think is the next leap-
- 1:00:00 – 1:15:00
- JSJürgen Schmidhuber
So I, um, b- became aware of all of that in the '80s, and back then logic programming was a huge thing.
- LFLex Fridman
Was it inspiring to you yourself? Did you find it compelling? Because most, a lot of your work was, uh, not so much in that realm, right? It was more in the learning systems.
- JSJürgen Schmidhuber
Yes and no, but we did all of that. Uh-
- LFLex Fridman
You did.
- JSJürgen Schmidhuber
... so we, we... My first, um, publication ever, actually, was, um... 1987 was, uh, the implementation of a genetic programming system in Prolog. So, Prolog, that's what you learned back then, which is a logic programming language. And the Japanese, they had this huge Fifth Generation AI project, which was mostly about, uh, logic programming back then. Although neural networks existed and were well-known back then, and deep learning has existed since, um, 1965, um, since this guy in the Ukraine, um, Ivakhnenko, started it. But, um, uh, the Japanese and many other people, uh, they focused really on this logic programming, and I was influenced to the extent that I said, "Okay, let's take these biologically inspired algorithms like evolution-"
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
"... uh, programs, uh, and, um, and, and, mm, and implement that in the language which I know, which was Prolog, for example, back then."
- LFLex Fridman
Yeah.
- JSJürgen Schmidhuber
And then, um, in, in many ways this came back later, because, uh, the Gödel machine, for example, has a proof searcher on board, and without that, it would not be optimal. Also, Marcus Hutter's, uh, universal algorithm for solving all well-defined problems has a proof searcher on board. So, that's very much logic programming.
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
Without that, it would not be asymptotically optimal. But then, on the other hand, because we are very pragmatic guys also, um, we, we focused on recurrent neural networks and, and, and suboptimal, uh, stuff such as gradient-based search in program space, rather than provably optimal things.
- LFLex Fridman
So logic programming, it ha- certainly has a usefulness, uh, when you're trying to construct something provably optimal or provably good or s- something like that, but is it useful for, for practical problems?
- JSJürgen Schmidhuber
It's really useful for theorem proving.
- LFLex Fridman
Theorem. (laughs)
- JSJürgen Schmidhuber
The best theorem provers today are not, uh, neural networks.
- LFLex Fridman
Right.
- JSJürgen Schmidhuber
No. They are, uh, logic programming systems, and they are much better theorem provers than most, uh, math students in their first or second semester.
- LFLex Fridman
Mm-hmm. But for reasoning, to, for playing games of Go or chess, or for robots, autonomous vehicles that operate in the real world-
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
... or, uh, object manipulation-
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
... you think learning...
- JSJürgen Schmidhuber
Yeah, as long as the problems have little to do, um, with, um, with, um, theorem proving-
- LFLex Fridman
Yeah.
- JSJürgen Schmidhuber
... uh, themselves, then, um, as long as that is not the case, um, you, you just want to have better pattern recognition. So, to build a self-driving car, you want to have better pattern recognition and, um, and, uh, pedestrian recognition and all these things, and you want to, uh, you mini- you want to minimize the number of false positives, which is currently slowing down self-driving cars in many ways. And, um, and all of that has very little to do with logic programming. Yeah.
- LFLex Fridman
What are you most excited about in terms of directions of artificial intelligence at this moment in the next few years in your own research and in the broader community?
- JSJürgen Schmidhuber
So, I think in the not-so-distant future, we will have for the first time little robots that learn like kids. And I will be able to say to the robot, um, "Look here, robot, we are going to assemble a smartphone."
- LFLex Fridman
Mm-hmm.
- JSJürgen Schmidhuber
"Let's take this slab of plastic, um, and the screwdriver, and let's screw in the screw like that," you know? No, not like that, like that.
- 1:15:00 – 1:20:04
- JSJürgen Schmidhuber
Now we realize that the universe is still young. It's only 13.8 billion years old, and it's going to be a thousand times older than that. So there's plenty of time to conquer the entire universe and to fill it with intelligence and senders and receivers, such that AIs can travel the way they are traveling in our labs today, which is by radio, from sender to receiver. And let's call the current age of the universe one eon. Now, it will take just a few eons from now and the entire visible universe is going to be full of that stuff. And let's look ahead to a time when the universe is going to be 1,000 times older than it is now. They will look back and they will say, "Look, almost immediately after the Big Bang, only a few eons later, the entire universe started to become intelligent." Now, to your question: how do we see whether anything like that has already happened, or is already in a more advanced stage, in some other part of the visible universe? We are trying to look out there, and nothing like that has happened so far. Or is that true?
- LFLex Fridman
Do you think we would recognize it? Well, how do we know it's not among us?
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
How do we know planets aren't, in themselves, intelligent beings?
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
How do we know, uh, ants, um, seen as a collective, are not a much greater intelligence-
- JSJürgen Schmidhuber
Yeah.
- LFLex Fridman
... than our own? These kinds of ideas.
- JSJürgen Schmidhuber
Yeah. When I was a boy, I was thinking about these things, and I thought, hmm, maybe it has already happened. Because back then, I knew, I had learned from popular physics books that, um, the large-scale structure of the universe is not homogeneous. You have these clusters of galaxies, and then in between there are these huge empty spaces. And I thought, hmm, maybe they aren't really empty. It's just that in the middle of that, some AI civilization already has expanded and has covered a bubble of a billion light-years' diameter, and is using all the energy, uh, of all the stars within that bubble for its own unfathomable purposes. So it already has happened, and we just, um, fail to interpret the signs. But then I learned, uh, that gravity by itself explains the large-scale structure of the universe, so that is not a convincing explanation. And then I thought, maybe it's the dark matter, because as far as we know today, 80% of the measurable matter is invisible, and we know that because otherwise our galaxy and other galaxies would fall apart; they are rotating too quickly. And, um, then the idea was, maybe all of these AI civilizations that are already out there are just invisible, because they're really efficient in using the energies of their own, uh, local systems, and that's why they appear dark to us. But this is also not a convincing explanation, because then the question becomes, why are there still any, uh, visible stars left in our own galaxy, which also must have a lot of dark matter? So that is also not convincing. And today, (laughs) I like to, um, think it's quite plausible that maybe we are the first, at least in our local light cone, within, um, the few hundreds of millions of light-years that we can reliably observe.
- LFLex Fridman
Is that exciting to you, that we might be the first?
- JSJürgen Schmidhuber
And, um, it would make us much more important, because if we mess it up through a nuclear war, then maybe this will have an effect on the development of the entire universe.
- LFLex Fridman
So let's not mess it up.
- JSJürgen Schmidhuber
Let's not mess it up.
- LFLex Fridman
Jürgen, thank you so much for talking today. I really appreciate it.
- JSJürgen Schmidhuber
It's my pleasure.
Episode duration: 1:19:58
Transcript of episode 3FIo6evmweo