Dwarkesh Podcast

Ilya Sutskever (OpenAI Chief Scientist) — Why next-token prediction could surpass human intelligence

Asked Ilya Sutskever (Chief Scientist of OpenAI) about:

* Time to AGI
* Leaks and spies
* What's after generative models
* Post-AGI futures
* Working with Microsoft and competing with Google
* The difficulty of aligning superhuman AI

Hope you enjoy as much as I did!

EPISODE LINKS

* Transcript: https://www.dwarkeshpatel.com/p/ilya-sutskever
* Apple Podcasts: https://apple.co/42H6c4D
* Spotify: https://spoti.fi/3LRqOBd
* Follow me on Twitter: https://twitter.com/dwarkesh_sp

TIMESTAMPS

00:00:00 - Time to AGI
00:05:57 - What's after generative models?
00:10:57 - Data, models, and research
00:15:27 - Alignment
00:20:53 - Post-AGI future
00:26:56 - New ideas are overrated
00:36:22 - Is progress inevitable?
00:41:27 - Future breakthroughs

Ilya Sutskever (guest) · Dwarkesh Patel (host)
Mar 27, 2023 · 47m

TRANSCRIPT

  1. 0:00–5:57

    Time to AGI

    1. IS

      ... but I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions.

    2. DP

      Are you worried about spies?

    3. IS

      I'm really not worried about the way it's being

    4. NA

      (laughs)

    5. IS

      ... leaked. We will all be able to become more enlightened because we interact with an AGI that will help us see the world more correctly. Like, imagine talking to the best meditation teacher in history. Microsoft has been a very, very good partner for us. So I challenge the claim that next token prediction cannot surpass human performance. If your base neural net is smart enough, you just ask it, like, "What could a person with, like, great insight, and wisdom, and capability do?"

    6. DP

      Okay. Today, I have the pleasure of interviewing Ilya Sutskever, who is the co-founder and chief scientist of OpenAI. Ilya, welcome to The Lunar Society.

    7. IS

      Thank you. Happy to be here.

    8. DP

      Uh, first question, and no humility allowed. There are many scientists, or maybe not that many, who will make a big breakthrough in their field. There are far fewer scientists who make multiple independent breakthroughs that define their field throughout their career. What is the difference? What distinguishes you from other researchers? Why have you been able to make multiple breakthroughs in your field?

    9. IS

      Well, thank you for the kind words. It's hard to answer that question. I mean, I try really hard. I give it everything I've got. And that has worked so far. I think that's all there is to it.

    10. DP

      Got it. Um, what's the explanation for why there aren't more illicit uses of GPT? Why aren't more foreign governments using it to spread propaganda or scam grandmothers or something?

    11. IS

      I mean, maybe they haven't really gotten to do it a lot. But it also wouldn't surprise me if some of it was going on right now. Certainly, I'd imagine they'd be taking some of the open-source models and trying to use them for that purpose. I'd expect this to be something they'd be interested in in the future.

    12. DP

      It's, like, technically possible. They just haven't thought about it enough?

    13. IS

      Or haven't, like, done it at scale using their technology. Or maybe it is happening and we just don't know it.

    14. DP

      Would you be able to track it if it was happening?

    15. IS

      I think large-scale tracking is possible, yes. I mean, this requires a small special operation, but it's possible.

    16. DP

      Mm-hmm. Um, now, there's some window in which, uh, AI is very economically valuable, on the scale of airplanes, let's say. But we haven't reached AGI yet. How big is that window?

    17. IS

      I mean, it's hard to give you a precise answer, but it's definitely going to be a good multi-year window. It's also a question of definition, because AI, before it becomes AGI, is going to be increasingly more valuable year after year, I'd say in an exponential way. So in some sense it may feel, especially in hindsight, like there was only one year or two years, because those two years were larger than the previous years. But I would say that already, last year, there had been a fair amount of economic value produced by AI, and next year is going to be larger, and larger after that. So I think there's going to be a good multi-year chunk of time where that's going to be true, from now until AGI, pretty much.

    18. DP

      Okay. Well, I'm curious because, say there's a startup that's using your models. At some point, if you have AGI, there's only one business in the world, right? It's OpenAI. How much window does any business have where they're actually producing something that AGI can't produce?

    19. IS

      Yeah. Well, I mean, it's the same question as asking, "How long until AGI?"

    20. DP

      Yeah.

    21. IS

      I think it's a hard question to answer. I hesitate to give you a number, also because there is this effect where optimistic people who are working on the technology tend to underestimate the time it takes to get there. But the way I ground myself is by thinking about a self-driving car. In particular, there is an analogy: I have a Tesla, and if you look at its self-driving behavior, it looks like it does everything. Like, it does everything.

    22. DP

      Right.

    23. IS

      But it's also clear that there is still a long way to go in terms of reliability. And we might be in a similar place with respect to our models, where it also looks like we can do everything. At the same time, we'll need to do some more work until we really iron out all the issues and make it really good, really reliable, robust, and well-behaved.

    24. DP

      By 2030, what percent of GDP is AI?

    25. IS

      Oh, gosh, hard to answer that question. Very hard to answer that question.

    26. DP

      And, and give me an over/under.

    27. IS

      Like, the problem is that my error bars are in log scale. So I could imagine...

    28. DP

      (laughs)

    29. IS

      Like, I could imagine a huge percentage, and I could imagine a really disappointingly small percentage at the same time.

    30. DP

      Okay, so let's take the counterfactual where it is a small percentage. Let's say it's 2030, and not that much economic value has been created by these LLMs. As unlikely as you think this might be, what would be your best explanation right now of why something like this might happen?

  2. 5:57–10:57

    What’s after generative models?

    1. DP

      What's after generative models? So before this, you were working on reinforcement learning. Is this basically it? Is this the paradigm that gets us to AGI, or is there something after it?

    2. IS

      I mean, I think this paradigm is going to go really, really far, and I would not underestimate it. I think it's quite likely that this exact paradigm is not going to be quite the AGI form factor. I hesitate to say precisely what the next paradigm will be, but I think it will probably involve integration of all the different ideas that came in the past.

    3. DP

      Is there some specific one you're referring to, or...

    4. IS

      I mean, it's hard to be specific.

    5. DP

      So you could argue that next-token prediction can only help us match human performance, and maybe not surpass it. What would it take to surpass human performance?

    6. IS

      So I challenge the claim that next token prediction cannot surpass human performance. On the surface, it looks like it cannot.

    7. DP

      Mm-hmm.

    8. IS

      It looks on the surface as if, when you just learn to imitate, to predict what people do, you can only copy people. But here is a counterargument for why it might not be quite so: if your base neural net is smart enough, you just ask it, "What would a person with great insight, and wisdom, and capability do?" Maybe such a person doesn't exist, but there's a pretty good chance that the neural net will be able to extrapolate how such a person would behave. Do you see what I mean?

    9. DP

      Yes. Although, where would it get that sort of insight about what this person would do, uh, if not from...

    10. IS

      From the data of regular people. Because if you think about it, what does it mean to predict the next token well enough? What does it mean, actually? It's a deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token. It's not statistics. Like, it is statistics, but what is statistics? In order to understand those statistics, to compress them, you need to understand what it is about the world that creates those statistics. And so then you say, "Okay, well, I have all those people. What is it about people that creates their behaviors?" Well, they have thoughts, and they have feelings, and they have ideas, and they do things in certain ways. All of those would be deduced from next-token prediction. And I'd argue that this should make it possible, not indefinitely, but to a pretty decent degree, to say, "Well, can you guess what you'd do if you took a person with this characteristic and that characteristic?" Such a person doesn't exist, but because you're so good at predicting the next token, you should still be able to guess what that person would do, this hypothetical, imaginary person-

    11. DP

      Mm-hmm.

    12. IS

      ... with far greater mental ability than the rest of us.
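
      For concreteness, here is a minimal sketch of the objective under discussion, with a tiny recurrent network standing in for a real transformer and all sizes purely illustrative: the model sees tokens 0 through t-1 and is scored on its prediction of token t. Driving this loss down on enough text is what forces a model to capture whatever process produced the text.

      ```python
      import torch
      import torch.nn.functional as F

      vocab, d_model = 50_000, 512
      embed = torch.nn.Embedding(vocab, d_model)
      backbone = torch.nn.GRU(d_model, d_model, batch_first=True)  # toy stand-in for a transformer
      head = torch.nn.Linear(d_model, vocab)

      tokens = torch.randint(0, vocab, (8, 128))   # a batch of token sequences
      hidden, _ = backbone(embed(tokens[:, :-1]))  # model sees positions 0..126
      logits = head(hidden)                        # predictions for positions 1..127
      loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
      loss.backward()  # "predicting the next token well" means minimizing this loss
      ```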

    13. DP

      Um, when we're doing reinforcement learning on these models, how long before most of the data for the reinforcement learning is coming from AIs and not humans?

    14. IS

      I mean, already most of the data for reinforcement learning is coming from AIs.

    15. DP

      Oh.

    16. IS

      Yeah. Well, the humans are being used to train the reward function, but then the reward function and its interaction with the model is automatic, and all the data that's generated during the process of reinforcement learning is created by AI. So if you look at the current technique paradigm, which has been getting significant attention because of ChatGPT, reinforcement learning from human feedback, there is human feedback. The human feedback is being used to train the reward function, and then the reward function is being used to create the data which trains the model.
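
      A heavily simplified sketch of that two-stage split, with hypothetical `reward_model` and `policy` objects standing in for real networks (a generic illustration, not OpenAI's training code):

      ```python
      import torch
      import torch.nn.functional as F

      # Stage 1: the human feedback. Pairwise preferences from human labelers
      # are used to fit the reward function.
      def reward_model_step(reward_model, opt, chosen, rejected):
          # Bradley-Terry-style loss: score the human-chosen response higher.
          loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
          opt.zero_grad()
          loss.backward()
          opt.step()

      # Stage 2: no human in this loop. The policy generates the data, the
      # frozen reward model scores it, and those scores train the policy.
      # (A REINFORCE-style update; production systems typically use PPO
      # with a KL penalty against the base model.)
      def rl_step(policy, reward_model, opt, prompts):
          responses, log_probs = policy.sample(prompts)  # AI-generated training data
          rewards = reward_model(responses).detach()     # automatic feedback
          loss = -(rewards * log_probs).mean()
          opt.zero_grad()
          loss.backward()
          opt.step()
      ```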

    17. DP

      Got it. And is there any hope of just removing the human from the loop and having it improve itself, AlphaGo-style?

    18. IS

      Yeah, definitely. Very much so. The thing you really want is for the human teachers that teach the AI to collaborate with an AI. You might want to think of it as being in a world where the human teachers do 1% of the work and the AI does 99% of the work. You don't want it to be 100% AI, but you do want it to be a human-machine collaboration, which teaches the next machine.

    19. DP

      So currently, I mean, I've had a chance to play around with these models, and they seem bad at multi-step reasoning, though they have been getting better. What does it take to really surpass that barrier?

    20. IS

      I mean, I think dedicated training will get us there, and more improvements to the base models will get us there.

    21. DP

      Okay.

    22. IS

      But fundamentally, I also don't feel like they're that bad at multi-step reasoning. I actually think that they are bad at mental multi-step reasoning when they're not allowed to think out loud. But when they are allowed to think out loud, they're quite good. And I expect this to improve significantly, both with better models and with special training.
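
      A toy illustration of the "think out loud" point: the same question posed two ways to a hypothetical completion model. Only the prompts are shown; the second pattern gives the model room to spend intermediate tokens on the multi-step reasoning.

      ```python
      question = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
                  "more than the ball. How much does the ball cost?")

      # "Mental" reasoning: the model must emit the answer in its very next tokens.
      silent_prompt = question + "\nAnswer with only the number."

      # "Out loud": the intermediate tokens carry the multi-step reasoning,
      # and models of this era tend to do markedly better with them.
      out_loud_prompt = question + "\nLet's think step by step, then give the final answer."
      ```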

    23. DP

      Hmm.

  3. 10:57–15:27

    Data, models, and research

    1. DP

      Are you running out of reasoning tokens on the internet? Are there enough of them?

    2. IS

      So, for context on this question, there are claims that indeed, at some point, we'll run out of tokens in general to train those models. And yeah, I think this will happen one day, and by the time it does, we'll need to have other ways of training models, other ways of productively improving their capabilities and sharpening their behavior, making sure they are doing exactly, precisely what we want, without more data.

    3. DP

      Well, you haven't run out of data yet? There's more...

    4. IS

      Yeah, I would say the data situation is still quite good. There is still lots to go. But at some point, yeah, data will run out.

    5. DP

      Okay. What is the most valuable source of data? Is it Reddit, Twitter, books? What would you trade many other tokens of other varieties for?

    6. IS

      Generally speaking, you'd like tokens which are speaking about smarter things-

    7. DP

      Mm-hmm.

    8. IS

      ... tokens which are, like, more interesting.

    9. DP

      Yeah.

    10. IS

      So I mean, all the sources which you mentioned are valuable.

    11. DP

      Okay. So maybe not Twitter. (laughs) But do we need to go multimodal to get more tokens? Or do we still have enough text tokens left?

    12. IS

      I mean, I think that you can still go very far in text only, but going multimodal seems like a very fruitful direction.

    13. DP

      Mm-hmm. If you're comfortable talking about this, like, where is the place where we haven't scraped the tokens yet?

    14. IS

      Oh, I mean... yeah, obviously. I can't answer that question for us, but I'm sure that for everyone there's a different answer to that question.

    15. DP

      How many orders of magnitude of improvement can we get, not from scale or from data, but just from algorithmic improvements?

    16. IS

      Hard to answer, but I'm sure there is some.

    17. DP

      Is "some" a lot, or is "some" a little?

    18. IS

      I mean, there's only one way to find out.

    19. DP

      Okay. Let me get your quick-fire opinions about these different research directions. Retrieval transformers: somehow storing the data outside of the model itself and retrieving it.

    20. IS

      Seems promising.

    21. DP

      Well, but do you see that as a path forward, or...

    22. IS

      Uh, I think it seems promising.
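
      For concreteness, a bare-bones sketch of the retrieval idea from the question, with a toy hash-based encoder standing in for a real embedding model; a generic illustration, not how RETRO or any particular retrieval transformer is implemented:

      ```python
      import numpy as np

      def embed(text: str, d: int = 64) -> np.ndarray:
          # Toy stand-in for a real text encoder: a normalized bag-of-words hash.
          v = np.zeros(d)
          for word in text.lower().split():
              v[hash(word) % d] += 1.0
          return v / (np.linalg.norm(v) + 1e-8)

      docs = ["GPUs hide memory latency by batching work.",
              "RLHF fits a reward model to human preferences."]
      doc_vecs = np.stack([embed(doc) for doc in docs])  # stored outside the model

      def retrieve(query: str, k: int = 1) -> list[str]:
          scores = doc_vecs @ embed(query)               # cosine similarity (vectors normalized)
          return [docs[i] for i in np.argsort(-scores)[:k]]

      # The retrieved text is simply prepended to the model's context window.
      context = "\n".join(retrieve("How do GPUs stay busy?"))
      ```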

    23. DP

      Uh, robotics. Was it the right step for OpenAI to leave that behind?

    24. IS

      Yeah, it was. Back then, it really wasn't possible to continue working in robotics because there was so little data. Back then, if you wanted to work on robotics, you needed to become a robotics company. You needed to have a giant group of people working on building robots and maintaining them. And even then, if you've got 100 robots, it's a giant operation already, but you're not going to get that much data. In a world where most of the progress comes from the combination of compute and data, which is where we've been, there was no path to data from robotics. So back in the day, when we made the decision to stop working in robotics, there was no path forward.

    25. DP

      Is there one now?

    26. IS

      So I'd say that now it is possible to create a path forward, but one needs to really commit to the task of robotics. You really need to say, "I'm going to build many thousands, tens of thousands, hundreds of thousands of robots and somehow collect data from them and find a gradual path where the robots are doing something slightly more useful, and then the data that is obtained is used to train the models, and they do something slightly more useful." So you could imagine this kind of gradual path of improvement, where you build more robots, they do more things, you collect more data, and so on. But you really need to be committed to this path. If you say, "I want to make robotics happen," that's what you need to do. I believe that there are companies who are thinking about doing exactly that, but you need to really love robots and be really willing to solve all the physical and logistical problems of dealing with them. It's not the same as software at all. So I think one could make progress in robotics today, with enough motivation.

    27. DP

      Uh, what ideas are you excited to try but you can't because they don't work well on current hardware?

    28. IS

      I don't think current hardware is a limitation.

    29. DP

      Okay.

    30. IS

      I think it's just not the case.

  4. 15:27–20:53

    Alignment

    2. DP

      Let's talk about alignment. Do you think we'll ever have a mathematical definition of alignment?

    3. IS

      A mathematical definition, I think, is unlikely.

    4. DP

      Uh-huh.

    5. IS

      Rather than achieving one mathematical definition, I think we'll achieve multiple definitions that look at alignment from different aspects. And I think that this is how we will get the assurance that we want. By which I mean, you can look at the behavior in various tests, in various adversarial stress situations; you can look at how the neural net operates from the inside. I think we will have to look at several of these factors at the same time.

    6. DP

      And how sure do you have to be before you release a model into the wild? Is it 100%? 95%?

    7. IS

      Well, it depends how capable the model is. The more capable the model is, the more confident you need to be.

    8. DP

      Okay. So just say it's something that's almost AGI, wherever AGI is.

    9. IS

      Well, it depends what your AGI can do. Keep in mind that AGI is an ambiguous term also.

    10. DP

      Yeah.

    11. IS

      Like, your average college undergrad is an AGI, right?

    12. DP

      Today it will, yeah. (laughs)

    13. IS

      But you see what I mean? There is significant ambiguity in terms of what is meant by AGI.

    14. DP

      Mm-hmm.

    15. IS

      So depending on where you put this mark, you need to be more or less confident.

    16. DP

      Well, you mentioned a few of the paths towards alignment earlier. What is the one you think is most promising at this point?

    17. IS

      Like, I think that it will be a combination. I really think that you will not want to have just one approach.

    18. DP

      Mm-hmm.

    19. IS

      I think we will want to have a combination of approaches, where we could spend a lot of compute to adversarially probe the model to find any mismatch between the behavior we want to teach it and the behavior that it exhibits; where we look inside the neural net, using another neural net, to understand how it operates on the inside. I think all of them will be necessary. Every approach like this reduces the probability of misalignment, and you also want to be in a world where your degree of alignment keeps increasing faster than the capability of the models. I would say that right now our understanding of our models is still quite rudimentary. We've made some progress, but much more progress is possible, and so I would expect that ultimately the thing that will really succeed is when we have a small neural net that is well understood that's given the task of studying the behavior of a large neural net that is not understood, to verify it.

    20. DP

      By what point is most of the AI research being done by AI?

    21. IS

      I mean, today, when you use Copilot, what fraction is it already? How do you divide it up? So I expect at some point you'll ask your, you know, descendant of ChatGPT, "Hey, I'm thinking about this and this. Can you suggest fruitful ideas I should try?" And you would actually get fruitful ideas.

    22. DP

      Mm-hmm.

    23. IS

      ... right? And that's the thing that's going to make it possible for you to solve problems you couldn't solve before.

    24. DP

      Got it. But it's somehow just telling the humans, giving them ideas to go faster or something. It's not-

    25. IS

      That's one-

    26. DP

      ... itself interacting with the-

    27. IS

      ... one example. I mean, you could slice it in a variety of ways. But I think the bottleneck there is good ideas, good insights, and that's something the neural nets could help with.

    28. DP

      Mm-hmm. And if you could design, like, a billion-dollar prize for some sort of alignment research result or product, what is the concrete criterion you would set for that prize? Is there something that makes sense for such a prize?

    29. IS

      It's funny that you ask this; I was actually thinking about this exact question. I haven't come up with an exact criterion yet. Maybe a prize where we could say, two years later, or three years later, or five years later, we'll look back and say, "That was the main result." So rather than have a prize committee that decides right away, we wait for five years and then award it retroactively.

    30. DP

      But there's no concrete thing we can identify yet and say, "You solved this particular problem and you've made a lot of progress."

  5. 20:53–26:56

    Post AGI Future

    1. DP

      Um, let's talk about what a post-AGI future looks like. I'm guessing you're working 80-hour weeks towards this grand goal that you're really obsessed with. Are you gonna be satisfied in a world where you're basically living in an AI retirement home? What are you concretely doing after AGI comes?

    2. IS

      I think the question of what I'll be doing, or what people will be doing, after AGI comes is a very tricky question. Where will people find meaning? But I think that's something that AI could help us with. One thing I imagine is that we'll all be able to become more enlightened, because we'd interact with an AGI that will help us see the world more correctly, or become better on the inside as a result of interacting with it. Imagine talking to the best meditation teacher in history. I think that will be a helpful thing. But I also think that because the world will change a lot, it will be very hard for people to understand what is happening precisely and how to really contribute. One thing that I think some people will choose to do is to become part AI, in order to really expand their minds and understanding and to really be able to solve the hardest problems that society will face then.

    3. DP

      Are you gonna become part AI?

    4. IS

      Very tempting. It is tempting.

    5. DP

      Well, do you think there'll be physically embodied humans in, uh, 3000?

    6. IS

      3000, oh. How do I know what's gonna happen in 3000?

    7. DP

      But what does it look like? Are there still humans walking around on Earth? Have you guys thought concretely about what you actually want this world to look like?

    8. IS

      3000... Well, here's the thing. Let me describe to you what I think is not quite right about the question. It implies that we get to decide how we want the world to look. I don't think that picture is correct. I think change is the only constant. And so of course, even after AGI is built, it doesn't mean that the world will be static. The world will continue to change. The world will continue to evolve, and it will go through all kinds of transformations. I don't think anyone has any idea of how the world will look in 3000. But I do hope that there will be a lot of descendants of human beings who will live happy, fulfilled lives, where they're free to do as they wish, as they see fit, where they are the ones who are solving their own problems. One world which I would find very unexciting is one where we build this powerful tool, and then the government says, "Okay, so the AGI said that society should be run in such a way, and now we shall run society in such a way." I'd much rather have a world where people are still free to make their own mistakes and suffer their consequences, and gradually evolve morally and progress forward on their own, through their own strength. See what I mean? With the AGI providing more of a base safety net.

    9. DP

      How much time do you spend thinking about these kinds of things versus just doing the research that-

    10. IS

      I do think about those things a fair bit, yeah. I think those are very interesting questions.

    11. DP

      So in what ways have the capabilities we have today surpassed where you expected them to be in 2015? And in what ways are they still not where you would've expected them to be by this point?

    12. IS

      I mean, in fairness, they did surpass what I expected them to be in 2015. In 2015, my thinking was a lot more, "I just don't wanna bet against deep learning. I wanna make the biggest-

    13. DP

      Uh-huh.

    14. IS

      ... possible bet on deep learning. Don't know how, but it will figure it out."

    15. DP

      But is there any specific way in which it's been more than you expected or less than you expected? Like some concrete prediction you had in 2015 that's been trounced?

    16. IS

      You know, unfortunately, I don't remember concrete predictions I made in 2015. But I definitely think that overall, in 2015, I just wanted to make the biggest bet possible on deep learning, but I didn't know exactly. I didn't have a specific idea of how far things would go in seven years. I did have all these bets with people in 2016, maybe 2017, that things would go really far. But specifics... So it's both the case that it surprised me, and that I was making these aggressive predictions. But maybe I believed them only 50% on the inside.

    17. DP

      Uh-huh. Well, what do you believe now that even most people at OpenAI would find far-fetched?

    18. IS

      I mean, because we communicate a lot at OpenAI, people have a pretty good sense of what I think. And so, yeah, we've reached the point at OpenAI where I think we see eye to eye on all these questions.

    19. DP

      So Google has its custom TPU hardware, and it has all this data from all its users: Gmail and so on. Does that give it an advantage in terms of training bigger and better models than you?

    20. IS

      So at first, when the TPU came out, I was really impressed and I thought, "Wow, this is amazing." But that's because I didn't quite understand hardware back then. What really turned out to be the case is that TPUs and GPUs are almost the same thing. They are very, very similar. I think a GPU chip is a little bit bigger, and a TPU chip is a little bit smaller and maybe a little bit cheaper. But then they make more GPUs than TPUs, so the GPUs might be cheaper after all. But fundamentally, you have a big processor, and you have a lot of memory, and there is a bottleneck between those two. And the problem that both the TPU and the GPU are trying to solve is that in the amount of time it takes you to move one floating point number from the memory to the processor, you can do several hundred floating point operations on the processor, which means that you have to do some kind of batch processing. And in this sense, both of these architectures are the same. So I really feel like, in some sense, the only thing that matters about hardware is cost: cost per flop, overall system cost.
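
      The back-of-envelope arithmetic behind that bottleneck, with assumed, purely illustrative accelerator numbers rather than any particular chip's spec:

      ```python
      peak_flops = 300e12     # assumed: 300 TFLOP/s of fp16 compute
      bandwidth = 1.5e12      # assumed: 1.5 TB/s from memory to the processor
      bytes_per_value = 2     # fp16

      seconds_per_value = bytes_per_value / bandwidth
      flops_while_waiting = peak_flops * seconds_per_value
      print(f"{flops_while_waiting:.0f} FLOPs per value moved")  # -> 400
      # Each value fetched from memory must feed hundreds of operations to keep
      # the chip busy, which is why both TPUs and GPUs force batch processing.
      ```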

    21. DP

      Okay. So there isn't that much difference?

    22. IS

      Well, I actually don't know. I don't know what the TPU costs are, but I would suspect that probably not. If anything, the TPUs are probably more expensive, because there are

  6. 26:56–36:22

    New ideas are overrated

    1. IS

      fewer of them.

    2. DP

      When you're doing your work, how much of the time is spent, you know, configuring the right initializations, making sure the training run goes well, and getting the right hyperparameters, and how much is it just coming up with whole new ideas?

    3. IS

      I would say it's a combination, but coming up with whole new ideas is actually a modest part of the work. Certainly, coming up with new ideas is important, but I think even more important is to understand the results, to understand the existing ideas, to understand what's going on. A neural net is a very complicated system, right? And you ran it, and you get some behavior which is hard to understand. What's going on? Understanding the results, figuring out what next experiment to run: a lot of the time is spent on that. Understanding what could be wrong, what could have caused the neural net to produce a result which was not expected: I'd say a lot of time is spent on that as well. Of course there's coming up with new ideas, but I don't like this framing as much. It's not that it's false, but I think the main activity is actually understanding.

    4. DP

      Well, what do you see as the difference between the two?

    5. IS

      So at least in my mind, when you say, "come up with new ideas," I'm like, "Oh, what happens if you did such and such?" Whereas understanding is more like, "What is this whole thing?"

    6. DP

      Ah.

    7. IS

      Like, what are the real underlying phenomena that are going on? What are the underlying effects? Why are we doing things this way and not another way? And of course, this is very adjacent to what can be described as coming up with ideas, but I think the understanding part is where the real action takes place.

    8. DP

      Does that describe your entire career? Like, if you think back on ImageNet or something, was that more a new idea or was that more understanding?

    9. IS

      Oh, it was definitely understanding.

    10. DP

      Uh-huh.

    11. IS

      Definitely understanding. It was a new understanding of very old things.

    12. DP

      What has the experience of training on Azure been like?

    13. IS

      Fantastic. I mean, Microsoft has been a very, very good partner for us. They've really helped bring Azure to a point where it's really good for ML, and we are super happy with it.

    14. DP

      How vulnerable is the whole AI ecosystem to something that might happen in Taiwan? Let's say there's a tsunami in Taiwan or something. What happens to AI in general?

    15. IS

      It's definitely going to be a significant setback.

    16. DP

      Uh-huh.

    17. IS

      It might be something equivalent to no one being able to get more computers for a few years. But I expect computers will spring up. For example, I believe that Intel has fabs from a few generations ago. So that means that if Intel wanted to, they could produce something GPU-like-

    18. DP

      Hmm.

    19. IS

      ... from, like, four years ago. So yeah, it's not the best, let's say. I'm actually not sure if my statement about Intel is correct, but I do know that there are fabs outside of Taiwan. They're just not as good. But you can still use them and still go very far with them. It's just a setback.

    20. DP

      Will inference get cost prohibitive as these models get bigger and bigger?

    21. IS

      So I have a different way of looking at this question.

    22. DP

      Yeah.

    23. IS

      It's not that inference will become cost prohibitive.

    24. DP

      Mm-hmm.

    25. IS

      Inference of better models will indeed become more expensive. But is it prohibitive? Well, it depends on how useful it is. If it is more useful than it is expensive, then it is not prohibitive. To give you an analogy: suppose you want to talk to a lawyer. You have some case, or you need some advice or something. You are perfectly happy to spend $500 an hour, right? So if your neural net could give you really reliable legal advice, you'd say, "I'm happy to spend $400 for that advice," and suddenly inference becomes very much non-prohibitive.

    26. DP

      Mm-hmm.

    27. IS

      The question is, can the neural net produce an answer good enough at this cost?
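
      The arithmetic behind the lawyer analogy, with an assumed, illustrative token price rather than any provider's actual rate:

      ```python
      price_per_1k_tokens = 0.06   # assumed dollars per 1,000 tokens for a large model
      tokens_per_answer = 4_000    # a long, carefully reasoned reply
      lawyer_rate = 500.0          # dollars per hour, from the analogy

      cost_per_answer = price_per_1k_tokens * tokens_per_answer / 1_000
      print(f"model: ${cost_per_answer:.2f} per answer vs lawyer: ${lawyer_rate:.0f}/hr")
      # ~ $0.24 per answer: if the advice is reliable, "expensive" inference is
      # still far from prohibitive relative to the value it replaces.
      ```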

    28. DP

      Yes. And you will just have diff- like, price discrimination, different-

    29. IS

      Yeah.

    30. DP

      ... different models at different levels?

  7. 36:22–41:27

    Is progress inevitable?

    1. IS

      sensible thing to do and...

    2. DP

      Isn't it odd that we have the data we need at exactly the same time as we have the transformer, at the exact same time that we have these GPUs? Is it odd to you that all of these things happened at the same time, or do you not see it that way?

    3. IS

      I mean, it is definitely an interesting situation. I will say that it is odd, and it is less odd, on some level. Here's why it's less odd. What is the driving force behind the fact that the data exists, that the GPUs exist, that the transformer exists? The data exists because computers became better and cheaper; we've got smaller and smaller transistors, and suddenly, at some point, it became economical for every person to have a personal computer. Once everyone has a personal computer, you really want to connect them into a network, and you get the internet. Once you have the internet, you suddenly have data appearing in great quantities. The GPUs were improving concurrently, because you have smaller and smaller transistors and you're looking for things to do with them. Gaming turned out to be a thing that you could do. And then at some point NVIDIA said, "Wait a second, let's turn the gaming GPU into a general-purpose GPU computer; maybe someone will find it useful." Turns out it's good for neural nets. So it could've been the case that the GPU would've arrived five years later, or 10 years later. Suppose gaming wasn't a thing. It's kind of hard to imagine, what does it mean if gaming isn't a thing? But maybe there was a counterfactual world where GPUs arrived five years after the data, or five years before the data, in which case maybe things wouldn't have been as ready to go as they are now. But that's the picture which I imagine: all this progress in all these dimensions is very intertwined. It's not a coincidence. You don't get to pick and choose in which dimensions things improve, if you see what I mean.

    4. DP

      How inevitable is this kind of progress? Let's say you and Geoffrey Hinton and a few other pioneers were never born. Does the deep learning revolution happen around the same time? How much does it delay it?

    5. IS

      I think maybe there would've been some delay, maybe like a year. It's really hard to tell.

    6. DP

      Really, that's it?

    7. IS

      It's really hard to tell. I hesitate to give a longer answer because, okay, GPUs would keep on improving, right? At some point I cannot see how someone would not have discovered it. Because here's the other thing: let's suppose no one had done it. Computers keep getting faster and better. It becomes easier and easier to train these neural nets, because you have bigger GPUs, so it takes less engineering effort to train one. You don't need to optimize your code as much. When the ImageNet dataset came out, it was huge and it was very, very difficult to use. Now imagine you wait a few years, and it becomes very easy to download and people can just tinker. So I would imagine a modest number of years maximum; that would be my guess. I hesitate to give a longer answer, though. You know, you can't rerun the world. You don't know what would happen.

    8. DP

      Let's go back to alignment for a second. As somebody who deeply understands these models, what is your intuition of how hard alignment will be?

    9. IS

      So here's what I would say. I think with the current level of capabilities, we have a pretty good set of ideas for how to align them. But I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. It's something to think about a lot and to research. By the way, oftentimes academic researchers ask me what's the best place where they can contribute, and I think alignment research is one place where academic researchers can make very meaningful contributions.

    10. DP

      Other than that, do you think academia will come up with more insights about actual capabilities, or is that going to be just the companies at this point?

    11. IS

      The companies will realize the capabilities. It's very possible for academic research to come up with those insights. It just doesn't seem to happen that much for some reason, but I don't think there's anything fundamental about academia. It's not that academia can't. Maybe they're just not thinking about the right problems or something, because maybe it's just easier to see what needs to be done inside these companies.

    12. DP

      Mm-hmm. I see. But there's a possibility that somebody could just realize...

    13. IS

      Yeah, I totally think so. Why would I possibly rule this out?

    14. DP

      Yeah, yeah.

    15. IS

      You see what I mean?

    16. DP

      Yeah. What are the concrete steps by which, um, these language models start actually impacting the world of atoms and not just the world of bits?

    17. IS

      Well, you see, I don't think that there is a clean distinction between the world of bits and the world of atoms. Suppose the neural net tells you, "Hey, here's something you should do, and it's going to improve your life, but you need to rearrange your apartment in a certain way." And then you go and rearrange your apartment as a result. The neural net just impacted the world of atoms.

    18. DP

      Yeah. Fair enough.

  8. 41:27–47:40

    Future Breakthroughs

    1. DP

      Fair enough. Do you think it'll take a couple of additional breakthroughs as important as the transformer to get to superhuman AI, or do you think we basically have the insights in the books somewhere, and we just need to implement them and connect them?

    2. IS

      So I don't really see such a big distinction between those two cases, and let me explain why. One of the ways in which progress has taken place in the past is that we've understood that something had a desirable property all along, but we didn't realize it. So is that a breakthrough? You can say yes, it is. Is it an implementation of something on the books? Also yes. My feeling is that a few of those are quite likely to happen, but that in hindsight it will not feel like a breakthrough. Everybody's gonna say, "Oh, well, of course. It's totally obvious that such and such a thing can work." With the transformer, the reason it's been brought up as a specific advance is because it's the kind of thing that was not obvious to almost anyone, so people can say it's not something which they knew about. But consider the most fundamental advance of deep learning: that a big neural network trained with backpropagation can do a lot of things. Where's the novelty? It's not in the neural network. It's not in the backpropagation. Yet it is most definitely a giant conceptual breakthrough, because for the longest time, people just didn't see it. But now that everyone sees it, everyone's gonna say, "Well, of course. It's totally obvious, big neural network. Everyone knows that they can do it."

    3. DP

      What is your opinion of your, uh, former adviser's new forward-forward algorithm?

    4. IS

      I think that it's an attempt to train a neural network without backpropagation, and I think that this is especially interesting if you are motivated to understand how the brain might be learning its connections. The reason is that, as far as I know, neuroscientists are really convinced that the brain cannot implement backpropagation, because the signals in the synapses only move in one direction. So if you have a neuroscience motivation and you want to say, "Okay, how can I come up with something that tries to approximate the good properties of backpropagation without doing backpropagation?" That's what the forward-forward algorithm is trying to do. But if you are trying to just engineer a good system, there is no reason not to use backpropagation. It's the only algorithm, right?
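
      A minimal sketch of the forward-forward idea, simplified from Hinton's 2022 paper with illustrative hyperparameters and toy data: each layer is trained purely locally to produce high "goodness" (the sum of squared activations) on real, "positive" inputs and low goodness on contrived "negative" inputs, so no gradient ever travels backwards across layers.

      ```python
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class FFLayer(nn.Module):
          """One layer trained with a local objective only; no cross-layer backprop."""
          def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
              super().__init__()
              self.linear = nn.Linear(d_in, d_out)
              self.threshold = threshold
              self.opt = torch.optim.Adam(self.parameters(), lr=lr)

          def forward(self, x):
              # Normalize so the previous layer's goodness can't leak upward.
              x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
              return torch.relu(self.linear(x))

          def train_step(self, x_pos, x_neg):
              g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness on real data
              g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness on fake data
              # Push positive goodness above the threshold, negative below it.
              loss = F.softplus(torch.cat([self.threshold - g_pos,
                                           g_neg - self.threshold])).mean()
              self.opt.zero_grad()
              loss.backward()  # gradients stay inside this single layer
              self.opt.step()
              # Detach before handing activations upward: the "forward-forward" part.
              return self.forward(x_pos).detach(), self.forward(x_neg).detach()

      layers = [FFLayer(784, 256), FFLayer(256, 256)]
      x_pos, x_neg = torch.randn(32, 784), torch.randn(32, 784)  # toy stand-in data
      for layer in layers:
          x_pos, x_neg = layer.train_step(x_pos, x_neg)
      ```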

    5. DP

      Hmm. I guess I've heard you, in different contexts, talk about using humans as the existing example case, the proof that AGI is possible, right? So at what point do you take the metaphor less seriously and not feel the need to pursue it in terms of research? Because it is important to you as a sort of existence case.

    6. IS

      Like, at what point do I stop caring about humans as an existence case of intelligence?

    7. DP

      Or as the sort of, as the example and the model you wanna follow in terms of pursuing intelligence in models.

    8. IS

      I see. I think it's good to be inspired by humans. I think it's good to be inspired by the brain. But there is an art to being inspired by humans and the brain correctly, because it's very easy to latch onto a non-essential quality of humans or of the brain. Many people whose research tries to be inspired by humans and by the brain get a little bit too specific. People get a little too, "Okay, what cognitive science model should we follow?" At the same time, consider the idea of the neural network itself, the idea of the artificial neuron. This too is inspired by the brain, but it turned out to be extremely fruitful. So how do we do this? You ask which behaviors of human beings are essential, so you can say, "This is something that proves to us that it's possible," and which are non-essential, actually just emergent phenomena of something more basic, where we just need to focus on getting our own basics right. I would say that one can and should be inspired by human intelligence, with care.

    9. DP

      Final question. Why is there, in your case, such a strong correlation between being first to the deep learning revolution and still being one of the top researchers? You would think that these two things wouldn't be that correlated, but why is there that correlation?

    10. IS

      I don't think those things are super correlated, indeed. I feel like in my case... I mean, honestly, it's hard to answer the question. I just kept trying really hard, and it turned out to have sufficed thus far.

    11. DP

      Got it. So it's the perseverance.

    12. IS

      I think it's a necessary but not a sufficient condition. Like, you know, many things need to come together in order to really figure something out.

    13. DP

      Mm-hmm.

    14. IS

      Like, you need to really go for it, and you also need to have the right way of looking at things. So it's hard to give a really meaningful answer to this question.

    15. DP

      All right. Um, Ilya, it has been a true pleasure. Thank you so much for coming to The Lunar Society. I appreciate you bringing this to the offices. So thank you.

    16. IS

      Yeah, I really enjoyed it. Thank you very much.

    17. DP

      Hey, everybody. I hope you enjoyed that episode. Just wanted to let you know that, in order to help pay the bills associated with this podcast, I'm turning on paid subscriptions on my Substack at dwarkeshpatel.com. No important content on this podcast will ever be paywalled, so please don't donate if you have to think twice before buying a cup of coffee. But if you have the means and you've enjoyed this podcast or gotten some kind of value out of it, I would really appreciate your support. As always, the most helpful thing you can do is to share the podcast: send it to people you think might enjoy it, post it on Twitter, in your group chats, et cetera. Just blitz the world. Appreciate you listening. I'll see you next time. Cheers. (instrumental music)

Episode duration: 47:41
