Dwarkesh Podcast

Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment

In terms of the depth and range of topics, this episode is the best I've done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl's model of an intelligence explosion, which integrates everything from:

* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he's more optimistic than Eliezer.

The next part, which I'll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy-brain stuff. Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch Part 2 here: https://youtu.be/KUieFuV1fuo

EPISODE LINKS

* Transcript: https://www.dwarkeshpatel.com/carl-shulman
* Apple Podcasts: https://bit.ly/3P9rPpJ
* Spotify: https://bit.ly/42Vnbzb
* Follow me on Twitter: https://twitter.com/dwarkesh_sp
* Carl's blog: http://reflectivedisequilibrium.blogspot.com/

TIMESTAMPS

00:00:00 - Intro
00:00:47 - Intelligence Explosion
00:17:18 - Can AIs do AI research?
00:38:15 - Primate evolution
01:02:45 - Forecasting AI progress
01:33:35 - After human-level AGI
02:08:54 - AI takeover scenarios

Carl Shulman (guest) · Dwarkesh Patel (host)
Jun 14, 2023 · 2h 43m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:47

    Intro

    1. CS

      Human-level AI is deep, deep into an intelligence explosion. Things like inventing the transformer, or discovering chinchilla scaling and doing your training runs more optimally, or creating flash attention. That set of inputs probably would yield the kind of AI capabilities needed for intelligence explosion. (air whooshing) We have a race between, on the one hand, the project of getting strong interpretability and shaping motivations, and on the other hand, these AIs, in ways that you don't perceive, make the AI takeover happen. (air whooshing) We spend more compute by having a larger brain than other animals, and then we have a longer childhood. It's analogous to, like, having a bigger model and having more training time with it. (air whooshing) It seemed very implausible that we couldn't do better than completely brute force evolution.

  2. 0:47–17:18

    Intelligence Explosion

    1. CS

      How quickly are we running through those orders of magnitude?

    2. DP

      Okay, today I have the pleasure of speaking with Carl Shulman. Many of my former guests, and this is not an exaggeration, many of my former guests have told me that a lot of their biggest ideas, perhaps most of their biggest ideas, have come directly from Carl. Especially when it has to do with the intelligence explosion and its impacts. And so I decided to go directly to the source, and we have Carl today on the podcast. Carl keeps a super low profile, but he is one of the most interesting intellectuals I've ever encountered, and this is actually his second podcast ever. So we're gonna get to get deep into the heart of many of the most important ideas that are circulating right now, uh, directly from the source. So, and b- by the way, so Carl is also an advisor to The Open Philanthropy Project, which is one of the biggest funders on causes having to do with AI and its risks, not to mention global health and well-being, and he is a research associate at the Future of Humanity Institute at Oxford. So Carl, it's a huge pleasure to have you on the podcast. Thanks for coming.

    3. CS

      Thank you, Dwarkesh. Uh, I've enjoyed seeing, uh, some of your episodes recently, and I'm, uh, glad to be on the show.

    4. DP

      Excellent. Let's talk about AI. Before we get into the details, give me the sort of big picture explanation of the, uh, feedback loops and just the general, uh, dynamics that would start when you have something that is approaching human-level intelligence.

    5. CS

      Yeah, so I think the- the way to think about it is we have a process now where humans are developing new computer chips, new software, um, running larger training runs, and it takes a lot of work, uh, to keep Moore's Law, uh, chugging. Well, it was. It's slowing down now. Um, and it takes a lot of work to develop things like transformers, um, to develop, uh, a lot of the improvements to AI and neural networks, uh, that are advancing things. And the core method that I think I want to highlight, um, on this podcast, uh, and I think is underappreciated, uh, is the idea of input-output curves. So we can- we can look at the increasing difficulty of improving chips, uh, and s- so sure, each time you double the performance of computers it's harder. And as we approach physical limits, eventually it becomes impossible. But how much harder? Uh, so there's a- a paper called "Are Ideas Getting Harder to Find?", uh, that was published a few years ago. Um, about 10 years ago, uh, at MIRI, I- I did, uh, an early version of this, uh, of this analysis, uh, using mainly, um, data from Intel and, like, the large semiconductor fabricators. Uh, anyway, in- in this paper, uh, they cover a period where the productivity of computing went up a million-fold. So you could get a million times the computing operations per second per dollar. Big change. But it got harder. So the- the amount of investment, the labor force, uh, required to make those continuing advancements went up and up and up. Uh, indeed it went up 18-fold over that period. And so some take this to say, "Oh, diminishing returns. Things are just getting harder and harder, and so it'll be the end of progress eventually." However, in a world where AI is doing the work, that doubling of computing performance translates pretty directly to a doubling or better of the effective labor supply.

    6. DP

      Mm.

    7. CS

      That is, if when we had that million-fold compute increase, we used it to run artificial intelligences, uh, who would replace human scientists and engineers, then (a) the 18x increase in the labor demands of the industry would be trivial. We- we're getting more than one doubling of the effective labor supply that we need for each doubling of the labor requirement. A- and in that data set, it's, like, over four. So we double- we double compute. Okay, now we need somewhat more researchers. But a lot less than twice as many. And so okay, we- we use up some of those doublings of compute on the increasing difficulty-

    8. DP

      Mm.

    9. CS

      ... of further research. But most of them are left to expedite the process. So if you- you double your labor force-

    10. DP

      Yeah.

    11. CS

      ... that's enough to get several doublings of compute. You u- you use up one of them on meeting the increased demands from diminishing returns.

    12. DP

      Yep.

    13. CS

      The others-

    14. DP

      Yep.

    15. CS

      ... can be used to accelerate the process. So you have your- your first doubling takes however many months. Your next doubling can take-

    16. DP

      Mm.

    17. CS

      ... a smaller fraction of that. The next doubling, less, and so on. At least insofar as the outputs you're generating, compute for AI in this story, are able to serve the function of the necessary inputs. If there are other inputs that you need, eventually those become a bottleneck, uh, and you wind up more restricted on those.
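The accelerating-doublings dynamic Carl lays out can be sketched as a toy model. All parameters here are illustrative, not from the conversation: each doubling of the labor requirement buys r doublings of compute (~4 in the data Carl cites), and AI workers scale one-for-one with compute, so the available labor outpaces the rising requirement and each compute doubling takes less time than the last.

```python
# Toy model of the input-output curve argument (illustrative parameters).
# Each doubling of compute makes further progress harder: the labor
# requirement grows by 2**(1/r) per compute doubling. But the AI labor
# supply grows 2x per compute doubling, so doublings keep speeding up.

def doubling_times(r=4.0, t0=8.0, n=8):
    """Months for each successive compute doubling.

    r  -- doublings of compute per doubling of labor requirement (~4)
    t0 -- months for the first doubling (hypothetical)
    """
    times = []
    for k in range(n):
        required = 2 ** (k / r)   # labor the k-th doubling demands
        available = 2 ** k        # AI labor the compute can now run
        times.append(t0 * required / available)
    return times

ts = doubling_times()
print([round(t, 2) for t in ts])  # each doubling faster than the last
print(round(sum(ts), 1))          # total time stays bounded
```

With r = 1 (one compute doubling per labor doubling) the times stay flat, which is the "diminishing returns end progress" reading; any r > 1 gives the shrinking doublings Carl describes.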

    18. DP

      Got it, okay. So yeah, I think, I think the Bloom paper had that, there was a 35% increase per year in, was it transistor density or cost per flop? And there was a 7% increase per year in the number of researchers required to sustain that pace.

    19. CS

      Something in that vicinity, yeah. It's like-

    20. DP

      Yeah.

    21. CS

      ... four, four to five, uh, doublings of compute per doubling of labor inputs.
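The "four to five doublings" figure follows directly from the growth rates just quoted, as a quick back-of-the-envelope check (taking the 35%/year and 7%/year numbers at face value):

```python
import math

# ~35%/year improvement in compute performance vs ~7%/year growth in the
# required research workforce implies this many compute doublings per
# doubling of labor inputs.
compute_growth = 1.35   # yearly multiplier on compute per dollar
labor_growth = 1.07     # yearly multiplier on researchers required

doublings_ratio = math.log2(compute_growth) / math.log2(labor_growth)
print(round(doublings_ratio, 1))  # → 4.4, i.e. four to five
```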

    22. DP

      I guess there's a lot of questions we can delve into in terms of whether you would expect a similar scale with AI, and whether it makes sense to think of AI as a population of researchers that keeps growing with compute itself. Actually, let's go there. So can you explain the intuition that compute is a good proxy for the number of AI researchers, so to speak?

    23. CS

      Uh, so far I've talked about hardware-

    24. DP

      Yeah.

    25. CS

      ... as an initial example because we had good data-

    26. DP

      Yep.

    27. CS

      ... um, about a past period. Uh, you can also make im-, uh, improvements on the software side. Uh, when we think about an intelligence explosion, that can include AIs doing work on making hardware better, making better software, making more hardware. Um, but the basic, uh, idea for the hardware is, is especially simple, uh, in that if you have a worker, an AI worker that can substitute for a human, if you have twice as many computers, you can run two separate instances of them.

    28. DP

      Mm-hmm.

    29. CS

      Uh, and then they can do two different jobs, manage, uh, two different, uh, machines, work on, uh, two different design problems. Uh, now you can get more gains than just what you would get by having two instances. We get improvements from using some of our compute, not just to run more instances of the existing AI, but to train larger AIs. So there's hardware technology, how much you can get per dollar you spend on hardware.

    30. DP

      Yeah.

  3. 17:18–38:15

    Can AIs do AI research?

    1. CS

      of the inputs to AI is on the software side.

    2. DP

      Where, uh, wha- at what point would it get to the point where the AIs are helping develop better software or better models for future AIs? Some people claim today, for example, that, you know, programmers at OpenAI are using Copilot to write programs now. So in some sense, you're already having that sort of feedback loop, though I'm a little skeptical of that, um, as a mechanism. At what point would it be the case that the AI is contributing significantly to AI progress in software, in the sense that it would almost be the equivalent of having additional researchers?

    3. CS

      The, the quantitative magnitude of the help is absolutely central.

    4. DP

      Yeah.

    5. CS

      So there, there are plenty of companies that, like, make some product that, like, very slightly boosts productivity.

    6. DP

      Yeah.

    7. CS

      So when Xerox makes fax machines, it maybe increases people's productivity, uh, in office work by 0.1% or something.

    8. DP

      Yeah.

    9. CS

      You're not gonna have explosive growth-

    10. DP

      Yeah.

    11. CS

      ... out of that, because- Okay. Now there's 0.1% more, uh, you know, effective R&D, uh, at Xerox and at any customers buying-

    12. DP

      Yeah.

    13. CS

      ... the machines. Eh, not, not that important. So I think the, the thing, the thing to look for, uh, is when is it the case that the, the contributions from AI are starting to, uh, become as large as or larger than the contributions, uh, from humans? So like, uh, when this is boosting their effective productivity by 50 or 100%-

    14. DP

      Yeah.

    15. CS

      ... and you, like, if you then go from, you know, eight months doubling time, say, for effective compute from software innovations, things like inventing the transformer or discovering chinchilla scaling-

    16. DP

      Yeah.

    17. CS

      ... and doing your training runs more optimally or creating flash attention. Yeah, if you move that from, say, eight months to four months-

    18. DP

      Yeah.

    19. CS

      ... and then the next time you apply that, it significantly increases the boost you're getting-

    20. DP

      Mm-hmm.

    21. CS

      ... uh, from the AI. So now, maybe instead of giving a 50% or 100% productivity boost, now it's more like a 200%.

    22. DP

      Yeah.

    23. CS

      Um, and so it doesn't have to have been able to automate everything involved in the process of AI research; it can be that it's automated a bunch of things, and then those are being done in extreme profusion. Because a thing, a thing that A- AI can do, you have it done much more often because it's so cheap. Uh, and so it's not a threshold of, "This is human-level AI, it can do everything a human can do with no weaknesses in any area." It's that even with its weaknesses, it's able to bump up the performance so that instead, instead of getting, like, the results we would have with, say, the 10,000 people working on finding these innovations, we get the results that we would have if we had twice as many of those people with the same kind of skill distribution. And so that's a, it's like a demanding challenge.

    24. DP

      Yeah.

    25. CS

      It's like, uh, you need quite a, quite a lot of capability for that. But it's also important that it's significantly less than a system where there's no way you can point at it and say in any respect it is weaker than a human. A system that was just as good as a human in every respect but also had all of the advantages of an AI, that is just way beyond this point. Like, if you consider that the, the output of our existing fabs is tens of millions of advanced GPUs per year. Those GPUs, if they were running sort of AI software that was as efficient as humans, that is sample efficient and doesn't have any major weaknesses, they could work four times as long, uh, you know, the 168-hour work week, and they can have much more education than any human. So it's, you know, yeah, a human, you know, they got a PhD. You know, it's like, uh, wow, it's like 20 years of education, uh, maybe longer if they, they take a, if they take a slow, uh, slow route on the PhD. Um, it's just normal for us to train large models by eating the internet, eating all the published books ever, um, reading everything on GitHub, uh, and getting good, uh, at predicting it. Um, so, like, the level of education is vastly beyond any human. The degree to which the models are focused on task, uh, is higher than all but, like, the most motivated humans when they're really, really gunning for it. Uh, so you combine the things: tens of millions of GPUs, each GPU, uh, is doing the work of the very best humans in the world. And, like, the most capable humans in the world, uh, can command salaries that are a lot higher than the average, particularly in a field like STEM or narrowly AI. Like, there's no human in the world who has a thousand years of experience with TensorFlow or, or let alone the new AI technologies that were invented the year-

    26. DP

      Yeah.

    27. CS

      ... the year before. But if, if they were around, uh, yeah, they'd, they'd be paid millions of dollars a year. Uh, and so when you consider this, okay, tens of millions of GPUs, each is doing the work of maybe 40, maybe more, uh, of these kind of existing workers. This is like going from a workforce of tens of thousands to hundreds of millions. You immediately make all kinds of discoveries then. You immediately develop all sorts of tremendous technologies. So human level AI is deep, deep into an intelligence explosion.
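The headcount arithmetic in that passage can be made explicit. All figures here are illustrative stand-ins for the rough magnitudes Carl gives ("tens of millions" of GPUs, "maybe 40" worker-equivalents each, a field of "tens of thousands"):

```python
# Rough version of the workforce comparison above (figures illustrative).
gpus = 20_000_000          # "tens of millions" of advanced GPUs per year
workers_per_gpu = 40       # top-researcher equivalents per GPU
human_field = 30_000       # "tens of thousands" of human researchers

ai_workforce = gpus * workers_per_gpu
print(f"{ai_workforce:,}")                      # hundreds of millions
print(f"{ai_workforce / human_field:,.0f}x")    # vs. the human field
```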

    28. DP

      Mm. Yeah.

    29. CS

      Intelligence explosion has to start-

    30. DP

      Yeah.

  4. 38:15–1:02:45

    Primate evolution

    1. CS

      resources into AI. That's- that's a one-time thing.

    2. DP

      If the current scale-up works, it's going to happen, we're gonna get to AGI really fast, like within the next 10 years or something. If the current scale-up doesn't work, all we're left with is just like our economy growing like 2% a year, so we have like 2% a year more resources to spend on AI. And at that scale you're talking about decades before you can, um, just through sheer brute force you can, you know, train the $10 trillion model or something. Let's talk about why you have your- your thesis that the current scale-up would work. What- what is the evidence from AI itself or maybe from primate evolution and the evolution of other animals? Just give me the whole, (laughs) the whole confluence of reasons that make you think-

    3. CS

      I- I think maybe the best way to look- look at that might be to consider when I first became interested in this area, so in the 2000s, which was before the deep learning revolution, how would I think about timelines? How did I think about timelines? And then how have I updated based on what has been happening with deep learning? Uh, and so back then, I would have said we know the brain is a physical object, an information processing device. It works, um, it's possible, and not only is it possible, it was created by evolution on Earth. And so that gives us something of an upper bound in that this kind of brute force was sufficient. Uh, there are some complexities, uh, with like, well, what if it was a freak accident-

    4. DP

      Mm-hmm.

    5. CS

      ... and it, you know, that didn't happen on all of the other planets and that added some value? Um, I have a paper with Nick Bostrom on this. Um, I think basically that's not that important an issue. Um, there's convergent evolution. Like octo- octopi are also, uh, quite sophisticated. If the, if a special event was at the level of forming cells at all, um, or forming brains at all, we get to skip that because we're choosing to build computers and we already exist. We have that- that advantage. So say evolution gives something of an upper bound, really intensive massive brute force search, um, and things like evolutionary algorithms can produce intelligence.

    6. DP

      Doesn't the fact that, um, octopi and, I guess, also mammals, they got to the point of being, like, pretty intelligent but not human-level intelligent, is that some evidence that there's a hard step between a cephalopod and a human?

    7. CS

      Yeah. So that- that would be a place to look.

    8. DP

      Yeah.

    9. CS

      Um, it doesn't seem particularly compelling. One source of evidence on that, uh, is work by, um, Herculano-Houzel. Uh, I hope I haven't mispronounced her name, but she's a neuroscientist who has dissolved the brains of many creatures, uh, and by counting, uh, the, uh, the nuclei she's able to, uh, determine how many neurons are- are present, uh, in different species and find a lot of interesting trends, uh, in scaling laws. And she has a paper, um, discussing the human brain as a scaled-up primate brain.

    10. DP

      Mm-hmm.

    11. CS

      Uh, and across like a wide variety of animals and in mammals in particular, um, there are certain characteristic, uh, changes in the relative number of neuron size of different brain regions, uh, as things scale up. Um, there's a lot of, yeah, there's a lot of structural, um, structural simi- similarity there, and you can explain a lot of what is different about us with a pretty brute force story-

    12. DP

      Mm-hmm.

    13. CS

      ... uh, which is that you expend resources on having a bigger brain, keeping it in good order, giving it time to learn, so we have an unusually long childhood, unusually long neonate period. We spend more compute by having a larger brain than other animals, uh, three, more than three times as large as chimpanzees, and then we have a longer childhood, um, than- than chimpanzees and much more than many, many other creatures. So we're spending more compute in a way that's analogous to like having a bigger model and having more training time with it.
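The "bigger model, more training time" analogy can be put in the terms used for AI training runs. This sketch uses the common C ≈ 6 × N × D approximation for training compute (N = parameters, D = training tokens); the multipliers loosely echo the human/chimp comparison above (~3x brain size, longer childhood) and are not measured values:

```python
# Sketch of the "bigger brain x longer childhood" analogy via the
# standard training-compute approximation C ≈ 6 * N * D.
# Multipliers are illustrative, not neuroscience data.
def train_compute(params, tokens):
    return 6 * params * tokens

base = train_compute(params=1.0, tokens=1.0)    # chimp-like baseline
scaled = train_compute(params=3.0, tokens=2.0)  # ~3x brain, 2x childhood

print(scaled / base)  # → 6.0: the two investments multiply
```

The point of the analogy is exactly this multiplicative structure: spending on model size and on training time compound, rather than trading off one-for-one.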

    14. DP

      Yeah.

    15. CS

      Um, and given that we see, um, with our AI models... um, this sort of, like, large, consistent benefit from increasing compute spent in those ways, and with qualitatively new capabilities, uh, showing up over and over again, uh, particularly in areas that sort of AI skeptics, uh, call out. In my experience, like, over the last 15 years, um, the things that people call out as like, "Ah, but the AI can't do that, and it's because of a fundamental limitation."

    16. DP

      For sure.

    17. CS

      Uh, I've gone through a lot of them. You know, there were Winograd schemas, uh, catastrophic forgetting, quite a number. Um, and yeah, they have repeatedly gone away, uh, through scaling. And so there's a, th- there's a picture that we're, we're seeing supported from biology and from our experience with AI, where you can explain, like, yeah, in general, there are trade-offs where the extra fitness you get from a brain is not worth it, and so cre- creatures wind up mostly with small brains because they can save that biological energy and that time to reproduce, for digestion, and so on. And humans, we actually seem to have wound up in a, a niche, uh, that was then self-reinforcing, where we greatly increased the returns to having large brains. And language and technology are the sort of obvious candidates. When you have humans around you who know a lot of things and they can teach you, and compared to almost any other species, we have vastly more instruction from parents and the society of the young'un, then you're getting way more from your brain. Because you can get, per minute, you can learn a lot more useful skills, and then you can provide the energy you need to feed that brain, um, by hunting and gathering, by having fire that makes digestion easier. And basically how this process goes on, it's increasing the marginal increase in reproductive fitness you get from allocating more resources along a bunch of dimensions-

    18. DP

      Mm-hmm.

    19. CS

      ... towards cognitive ability. And so, um, that's bigger brains, longer childhood, having our attention be more on learning. So humans play a lot, and we, we keep playing as adults, which is a very weird thing compared to other animals. Um, we're more motivated to copy other humans around us, uh, than, like, even than the other primates. And so these are sort of motivational changes that keep us using more of our attention and effort on learning, which pays off more when you have a bigger brain and a longer lifespan in which to learn it. Many, many creatures are subject to lots of predation or disease. And so if you try, you know, you're, you're a mayfly or a mouse, if you try and invest in, like, a giant brain and a very long childhood, you're quite likely to be killed by some predator or some disease-

    20. DP

      Mm-hmm.

    21. CS

      ... before you're able to actually use it. And so that means you actually have exponentially increasing costs in a given niche. So if, if, if I have a 50% chance of dying every few months, as a, you know, a little mammal or a little lizard or something, that means the cost of going from three months to 30 months of learning and childhood development-

    22. DP

      Yeah.

    23. CS

      ... is now 10 times, uh, the loss. It's now, it's two to the negative 10. So one th- a factor of 1,024 reduction in the benefit I get from what I ultimately learn, because 99.9% uh, of the animals will have been killed before that point. We're in a niche where we're like a large, long-lived animal with language and technology, so wh- where we can learn a lot from our groups. And that means it pays off to really, um, just expand our investment on these multiple fronts in intelligence.

    24. DP

      That, that's so interesting. Um, I, uh, just for the audience, the, the calculation about, like, two to the whatever months is just like, you have a half chance of dying this month, a half chance of dying next month, um, you multiply those together. Okay. There's other species, though, that do live in flocks or as, um, packs where you could imagine... I mean, uh, they do have, like, a, a smaller version of the development of cubs into, that are, like, play with each other. Why isn't this a hill on which they could have climbed to human level intelligence themselves? If it's something like language or technology, um, humans were getting smarter before we got language. I mean, uh, obviously we had to get smarter to go get language, right? Uh, we couldn't just get language without becoming smarter. So yeah, where did... Wh- wh- what... It seems like, uh, there should be other species that should have beginnings of this sort of cognitive revolution, especially given how valuable it is, given, listen, we've dominated the world. You would think there'd be selective pressure for it.
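The calculation DP just unpacked is worth writing out: a 50% chance of dying in each roughly three-month period means that stretching childhood from 3 to 30 months (one period to ten) multiplies the chance of surviving to use what was learned by 2^-10:

```python
# Survival arithmetic from the discussion above: 50% mortality per
# ~3-month period; childhood extended from 3 months (1 period) to
# 30 months (10 periods).
p_survive_period = 0.5
periods = 30 // 3                       # 10 periods

p_survive_childhood = p_survive_period ** periods
print(p_survive_childhood)              # 0.0009765625 = 1/1024
print(1 - p_survive_childhood)          # ~99.9% killed before payoff
```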

    25. CS

      Evolution doesn't have foresight.

    26. DP

      Yeah.

    27. CS

      The thing in this generation that gets more surviving offspring and grandchildren, that's the thing that becomes more common. Uh, evolution doesn't look ahead and say, "Oh, in, in a million years, you'll have a lot of descendants." Uh, it's-

    28. DP

      Yeah.

    29. CS

      ... what, what survives and reproduces now. Uh, and so in fact there, there are correlations where, uh, social animals, uh, do on average, uh, have larger brains. Uh, and part of that is probably, uh, that the additional social applications of brains, like keeping track of which of your group members have helped you before so that you can reciprocate. You scratch my back, I'll scratch yours. Uh, remembering who's dangerous within the group. That sort of thing, um, is an, is an additional application of intelligence. Um, and so there's some correlation there. Um, but what it, what it seems like is that, yeah, in most of these cases, um, it's enough to invest more, but not invest to the point where a mind can easily develop language and technology-

    30. DP

      Ah.

  5. 1:02:45–1:33:35

    Forecasting AI progress

    1. CS

      given the rates of change we've seen-

    2. DP

      Okay.

    3. CS

      ... with the last few scale ups.

    4. DP

      All right. So at this point, somebody might be skeptical: okay, like, listen, we already have a bunch of human researchers, right? Like, the incremental researcher, how, how powerful is that? And then you might say, well, no, this is like thousands of researchers. I don't know how to express this, uh, skepticism exactly, but it's skepticism of just, generally, the effect of scaling up the number of people working on a problem leading to rapid, rapid progress on that problem. W- what, somebody might think, okay listen, with humans the reason population working on a problem is such a good proxy for progress on that problem is that there's already so much variation that is accounted for when you say there's, like, a million people working on a problem. You know there are, like, hundreds of super geniuses working on it, thousands of people who are, like, very smart working on it. Whereas with an AI, all the copies are, like, the same level of intelligence. Um, and if it's not super-genius intelligence, the, uh, the total quantity might not matter as much.

    5. CS

      Yeah, I'm not sure what your, um, model is here. So is this the model that the diminishing returns suddenly kick in, that there's a cliff right where we are? Because there were results in the past from throwing more people at problems. Um, and I mean, this has been useful in historical prediction. Um, there's this idea of experience curves and Wright's Law, um, basically measuring cumulative production in a field, which is also gonna be a mea- a measure of, like, the scale of effort and, and investment. And people have used this correctly, uh, to argue that renewable energy technology like solar, uh, would be falling rapidly in price, because it was going from a, a low base of very small production runs, not much investment in doing it efficiently. Um, and yeah, climate advocates, uh, people like David Roberts, um, and the futurist Ramez Naam, who, uh, actually has some, some interesting, uh, writing on this, yeah, correctly called out that there would be a really drastic, uh, fall in prices of solar and batteries because of the increasing investment going into that. Um, the Human Genome Project would be another. So I'd say that there's, like, yeah, real, real evidence. These observed correlations from, like, ideas getting harder to find have, have held over a fair range of data and over quite a lot of time. Uh, so I'm wondering-

    6. DP

      But, but what you're specifically-

    7. CS

      ... what, what, what's the, yeah, the, the nature of the deviation you're thinking of.
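Wright's Law, as Carl invokes it, says cost falls by a fixed fraction (the "learning rate") with each doubling of cumulative production. A minimal sketch, with a 20% learning rate as a commonly cited ballpark for solar PV (used here purely as an illustration, not a figure from the conversation):

```python
# Wright's Law / experience-curve sketch (illustrative parameters):
# cost = initial_cost * (1 - learning_rate) ** doublings_of_cumulative_production
def cost_after_doublings(initial_cost, doublings, learning_rate=0.20):
    return initial_cost * (1 - learning_rate) ** doublings

# Hypothetical: start at $100/watt, then ten doublings of cumulative output.
print(round(cost_after_doublings(100.0, 10), 2))  # → 10.74
```

This is why a technology ramping from a tiny base can fall in price so fast: early on, doublings of cumulative production come quickly.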

    8. DP

      That, um, we're talking about, uh, maybe th- this is, like, a good way to describe what happens when more humans enter a field. But does it even make sense to say, like, a greater population of AIs is doing AI research if there's, like, more GPUs running a copy of GPT-6 doing AI research? It's just, like, how, how applicable are these, uh, economic models of the quantity of humans working on a problem to the, to the magnitude of AIs working on a problem?

    9. CS

      Yeah. So if you have AIs that are directly automating, uh, you know, particular jobs that humans were doing before, then we say, "Well, with additional compute, we can run more copies of them, uh, to do more of those tasks simultaneously. Uh, we can also run them at greater speed." And so some people have an intuition that like, "Well, you know what matters is, like, time. It's not how many people are working on a problem at a g- at a given point." Um, I think that that doesn't bear out super well. Um, but AI can also be run faster than humans. And so if you have a, a set of AIs, um, that can do the work of the individual human researchers and run at 10 times or 100 times, uh, the speed and we ask, "Well, could the human research community have solved these algorithmic problems, do things like invent transformers, uh, over 100 years?"... uh, if we have this, um, we have AIs with a population, effective population similar to that of the humans but running 100 times as fast. Uh, and so you ha- you have to tell a story where, no, the AI, they can't really do the same things as the humans. Uh, and we're talking about what happens when, uh, the AIs are more capable of, in fact, doing that.

    10. DP

      Although they become more capable as l- lesser capable versions of themselves help us ma- make them more capable, right? So w- you have to, like, kickstart that at some point. Is there an example in analogous situations, or is intelligence unique, in the sense that you have a feedback loop where, with a learning curve or something else, a system's outputs are feeding into its own inputs in a way that... Because if we're talking about something like Moore's law or the cost of solar, you do have this way where, where, you know, we're throwing more people at the problem and, um, we're, you know, we're making a lot of progress. But we don't have the sort of additional part of the model where Moore's law leads to more humans somehow, uh, and the more humans are becoming researchers.

    11. CS

      So you do actually have a version of that in the case of solar. You have a small infant industry doing things like providing solar panels for space satellites, and then getting increasing amounts of subsidized government demand because of worries about fossil fuel depletion and then climate change. You can have the dynamic where visible successes with solar, like lowering prices, then open up new markets. There's a particularly huge transition where renewables become cheap enough to replace large chunks of the electric grid. Earlier, you're dealing with very niche situations, like the satellites, where (laughs) it's very difficult to refuel a satellite in place, and then remote areas, and then moving to the sunniest areas in the world with the biggest solar subsidies. So there was an element of that, where more and more investment was thrown into the field and the market rapidly expanded as the technology improved. But I think the closest analogy is actually the long-run growth of human civilization itself. I know you had Holden Karnofsky from the Open Philanthropy Project on earlier and discussed some of this research about the long-run acceleration of human population and economic growth. Developing new technologies allowed human population to expand, humans to occupy new habitats and new areas, and then to invent agriculture, which supported larger populations, and then even more advanced agriculture and modern industrial society. Their total technology and output allowed you to support more humans, who then would discover more technology and continue the process. Now, that was boosted because, on top of expanding the population, the share of human activity going into invention and innovation went up, and that was a key part of the Industrial Revolution. There was no such thing as a corporate research lab or an engineering university prior to that. So you were both increasing the total human population and the share of it going into research. But this population dynamic is pretty analogous: humans invent farming, they can have more humans, then they can invent industry, and so on.
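The feedback loop Carl describes, population funds research, research raises technology, technology raises the population the economy can support, can be sketched as a toy simulation. This is my illustration, not a model from the episode, and all parameter values are arbitrary:

```python
# Toy model of the population/technology feedback loop (illustrative only).
# Research output scales with population; technology sets carrying capacity;
# population expands to fill it. The result is accelerating growth.

def simulate(steps=8, pop=1.0, tech=1.0, research_share=0.01, efficiency=0.5):
    """Return a trajectory of (population, technology) pairs."""
    history = []
    for _ in range(steps):
        researchers = research_share * pop
        tech *= 1.0 + efficiency * researchers   # more researchers -> faster tech growth
        pop = tech                               # population grows to what tech supports
        history.append((pop, tech))
    return history

traj = simulate()
growth_factors = [b / a for (a, _), (b, _) in zip(traj, traj[1:])]
# Each period's growth factor exceeds the last: the loop feeds on itself.
assert all(later > earlier for earlier, later in zip(growth_factors, growth_factors[1:]))
```

Because the per-period growth factor itself rises with population, growth is super-exponential, which is the qualitative pattern in the long-run growth data discussed with Holden Karnofsky.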

    12. DP

      Mm. So maybe somebody would be skeptical that with AI progress specifically, it's not just a matter of some farmer figuring out crop rotation or some blacksmith figuring out how to do metallurgy better. In fact, even to get the 50% improvement in productivity, you basically need somebody with an IQ close to Ilya Sutskever's. So there's something discontinuous: you're contributing very little to productivity, and then you're like Ilya and you contribute a lot. But the becoming Ilya is... you see what I'm saying? There's no gradual increase in capabilities, at least not one with the feedback loop.

    13. CS

      You're imagining a case where the distribution of tasks is such that there's nothing where individually automating it particularly helps, and so the ability to contribute to AI research is really end-loaded. Is that what you're saying?
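One way to see why returns to automation can be "end-loaded" is an Amdahl's-law-style argument. This sketch is my own illustration, not something stated in the episode: if a job is a chain of equally sized serial tasks and automated tasks take negligible time, partial automation barely helps until nearly everything is automated.

```python
# Illustrative (not from the episode): speedup from automating k of n serial,
# equally sized tasks, assuming automated tasks take negligible time.
# The remaining human-performed tasks bound throughput (Amdahl's law).

def speedup(n_tasks, n_automated):
    remaining = n_tasks - n_automated
    if remaining == 0:
        return float("inf")   # full automation removes the human bottleneck entirely
    return n_tasks / remaining

assert speedup(100, 50) == 2.0     # automate half the job: only a 2x speedup
assert speedup(100, 90) == 10.0    # automate 90%: 10x
assert speedup(100, 99) == 100.0   # the last few tasks dominate the returns
```

Under these assumptions, most of the economic value arrives only at the very end, which is the shape of the objection Dwarkesh is raising.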

    14. DP

      Yeah. I mean, we already see this in these sorts of really high-IQ companies or projects. Theoretically, I guess Jane Street or OpenAI could hire a bunch of mediocre people to do... there's a comparative advantage: they could do some menial tasks and free up the time of the really smart people. But they don't do that, right? Transaction costs, whatever else.

    15. CS

      Self-driving cars would be another example where you have a very high quality threshold. When your performance as a driver is worse than a human's, like you have 10 times the accident rate or 100 times the accident rate, then the cost of insurance for that, which is also a proxy for people's willingness to ride in the car-

    16. DP

      Yeah.

    17. CS

      ... would be such that the insurance costs would absolutely dominate. So even if you have zero labor cost, it's offset by the increased insurance costs.

    18. DP

      Right.
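The arithmetic behind Carl's insurance point can be made concrete. The episode gives no dollar figures, so every number below is hypothetical, chosen only to show how the accident-rate multiple flips the sign of the saving:

```python
# Back-of-the-envelope sketch (hypothetical figures) of why insurance costs
# can swamp labor savings for a not-yet-reliable self-driving car.

human_driver_wage = 30_000   # assumed annual labor cost a robotaxi would save
human_insurance = 2_000      # assumed annual insurance at the human accident rate

def robotaxi_net_saving(accident_rate_multiple):
    """Net annual saving from automation, if insurance scales with accident rate."""
    robot_insurance = human_insurance * accident_rate_multiple
    return human_driver_wage + human_insurance - robot_insurance

assert robotaxi_net_saving(1) == 30_000   # match human safety: keep the full wage savings
assert robotaxi_net_saving(10) > 0        # 10x accident rate: still positive at these numbers
assert robotaxi_net_saving(100) < 0       # 100x accident rate: insurance dominates
```

The exact crossover depends on the assumed wage and premium, but the structure of the argument, zero labor cost offset by accident-scaled insurance, is the one Carl is making.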

    19. CS

      And so there are lots of cases like that, where partial automation is not very usable in practice, because in complementing other resources, you're going to use those other resources less efficiently. In a post-AGI future, the same thing can apply to humans. People can say, "Well, comparative advantage: even if AIs can do everything better than a human, a human is still worth something." A human can do something, they can lift a box, that's something. Now, there's a question of property rights if, well, if they could just-

    20. DP

      Slipper rocks. (laughs)

    21. CS

      ... replace a human and use them to make more robots.

    22. DP

      Yeah, yeah.

    23. CS

      But even absent that, in such an economy you wouldn't want to let a human worker into any industrial environment, because in a clean room they'll be emitting all kinds of skin cells and messing things up. You need to have an atmosphere there, and you need a bunch of supporting tools, resources, and materials. Those supporting resources and materials will be a lot more productive working with AI and robots rather than a human. So you don't want to let a human anywhere near the thing, just like you don't want a gorilla wandering around in a china shop, even if you've trained it to pick up a box for you most of the time when you give it a banana. It's just not worth it to have it wandering around your china shop.

    24. DP

      Yeah, yeah. But why is that not a good objection to-

    25. CS

      I mean, I think that is one of the ways in which partial automation can fail to really translate into a lot of economic value.

    26. DP

      Mm-hmm.

    27. CS

      That's something that will attenuate as we go on, as the AI becomes more able to work independently, more able to handle its own screw-ups, more reliable.

    28. DP

      But the way in which it becomes more reliable is by AI progress speeding up, which happens if AI can contribute to it.

    29. CS

      Yeah.

    30. DP

      But if there is some sort of reliability bottleneck that prevents it from contributing to that progress, then you don't have the loop, right? So...

Episode duration: 2:43:33

Transcript of episode _kRg-ZP1vQc
