Dwarkesh Podcast

Gwern — Anonymous writer who predicted AI trajectory on $12K/year salary

Gwern's blog: https://gwern.net/. Gwern is a pseudonymous researcher and writer. After the episode, I convinced Gwern to create a donation page where people can help sustain what he's up to. Please go here to contribute: https://donate.stripe.com/6oE9DTgaf6oD0M03cc. Thank you to my friend Chris Painter for doing an amazing job voice acting Gwern.

SPONSORS

* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open - if you want to stand out, take a crack at their new Kaggle competition. To learn more, go here: https://jane-st.co/dwarkesh
* Turing provides complete post-training services for leading AI labs like OpenAI, Anthropic, Meta, and Gemini. They specialize in model evaluation, SFT, RLHF, and DPO to enhance models' reasoning, coding, and multimodal capabilities. Learn more at https://turing.com/dwarkesh.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes, and grow their revenue. Learn more here: https://stripe.com/

EPISODE LINKS

* Transcript: https://www.dwarkeshpatel.com/p/gwern-branwen
* Me on Twitter: https://twitter.com/dwarkesh_sp
* Spotify: https://open.spotify.com/episode/46H5dTtYaj1L55UAy9XXaY?si=xVoj6euwQdmZYnyvaQ46lA

TIMESTAMPS

00:00:00 - Anonymity
00:01:09 - Automating Steve Jobs
00:04:38 - Isaac Newton's theory of progress
00:06:36 - Grand theory of intelligence
00:10:39 - Seeing scaling early
00:21:04 - AGI Timelines
00:22:54 - What to do in remaining 3 years until AGI
00:26:29 - Influencing the shoggoth with writing
00:30:50 - Human vs artificial intelligence
00:33:52 - Rabbit holes
00:38:48 - Hearing impairment
00:43:00 - Wikipedia editing
00:47:43 - Gwern.net
00:50:20 - Counterfactual careers
00:54:30 - Borges & literature
01:01:32 - Gwern's intelligence and process
01:11:03 - A day in the life of Gwern
01:19:16 - Gwern's finances
01:25:05 - The diversity of AI minds
01:27:24 - GLP drugs and obesity
01:31:08 - Drug experimentation
01:33:40 - Parasocial relationships
01:35:23 - Open rabbit holes

Dwarkesh Patel (host) · Gwern Branwen (guest)
Nov 13, 2024 · 1h 36m

EVERY SPOKEN WORD

  1. 0:00–1:09

    Anonymity

    1. DP

      Today, I'm interviewing Gwern Branwen. Gwern is an anonymous internet researcher and writer. He's deeply influenced the people who are building AGI. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive. We recorded this conversation in person. In order to protect Gwern's anonymity, we created this avatar. This isn't his voice, this isn't his face, but these are his words. Gwern, what is the most underrated benefit of anonymity?

    2. GB

      I think the most underrated benefit of anonymity is that people don't project onto you as much.

    3. DP

      Mm-hmm.

    4. GB

      Um, they, they kind of can't, like, slot you into any particular niche or identity and, like, end up writing you off in advance. You know, every- everyone has to read you at least a little bit-

    5. DP

      Mm-hmm.

    6. GB

      ... um, to, to even begin to dismiss you. It's great that people can't retaliate against you and I, I've derived a lot of benefit from people not being able to, like, mail heroin to my home-

    7. DP

      (laughs)

    8. GB

      ... and call the police, uh, to swat me. But, but I always feel that the biggest benefit is just that you get a hearing at all, basically.

    9. DP

      Right.

    10. GB

      Um, you, you don't get immediately written off by the context.

  2. 1:09–4:38

    Automating Steve Jobs

    1. GB

    2. DP

      Do you expect companies to get automated top-down, starting with the CEO, or from the bottom-up, starting with workers?

    3. GB

      All the pressures, I think, are to go bottom-up.

    4. DP

      Mm-hmm.

    5. GB

      Um, and from existing things, it's just much more palatable in every way to start at the bottom and replace there, and then work your way up, um, to eventually kind of just having human executives overseeing a firm of AIs.

    6. DP

      Mm-hmm.

    7. GB

      And also from an RL perspective, I think if we are in fact better than AIs in some way, it should be in the long-term vision thing, right? Like, the AIs will be too myopic to execute any kind of novel long-term strategy and seize new opportunities. So that would presumably give you this paradigm where you have, like, a human CEO who does the vision thing-

    8. DP

      Yeah.

    9. GB

      ... and then the AI corporation kind of, like, scurries around underneath them doing, you know, the CEO's bidding.

    10. DP

      Right.

    11. GB

      And they don't have the taste that the CEO has. So you have one kind of Steve Jobs figure at the helm, and then maybe a whole pyramid of AIs out there executing the vision and bringing him new proposals. And he, you know, he looks at every individual thing and says, "No," like, "that proposal is bad. This one is good."

    12. DP

      Mm-hmm.

    13. GB

      That may be hard to quantify, but I think that human-led firms should, you know, under this view, end up out-competing the entirely AI firms, which would keep making these myopic choices that just don't quite work out in the long term.

    14. DP

      What is the last thing that you think you personally will be doing before your last keystroke is automated?

    15. GB

      The last thing that I see myself still doing, right before the nanobots start eating me from the bottom-up-

    16. DP

      (laughs)

    17. GB

      ... and I start screaming, "Uh, no, I specifically requested the opposite of this!"

    18. DP

      (laughs)

    19. GB

      Um, is, I think right before that, I think what I'm still doing is the Steve Jobs kind of thing of choosing, right? Um, so my AI minions are, like, bringing me wonderful essays, um, and I'm saying, "This one is better." Uh, you know, "This is the one that I like." And possibly building on that and saying, "That, that's almost right, but y- you know what would make it really good? If you pushed it to 11 in this way."

    20. DP

      Mm-hmm. If we do have firms that are made up of AIs, what do you expect the unit of selection to be? Will it be individual models? Will it be the firm as a whole? I mean, with humans, we have these debates about whether it's kin-level selection, individual-level selection, gene-level selection. What will it be for the AIs?

    21. GB

      Yeah, I think once you can replicate individual models perfectly, the unit of selection can move way up and you can do much larger groups and packages of minds. That would be sort of an obvious place to start.

    22. DP

      Mm-hmm.

    23. GB

      Um, you can train individual minds in a differentiable fashion, but then you can't really train the interaction between them, right?

    24. DP

      Right.

    25. GB

      So you, you'll have groups of models or minds of, of people who just work together really well in a global sense, even if you can't attribute it to any particular aspect of their interactions. Um, there's some places you go and people just, like, work really well together and there's nothing specific about it-

    26. DP

      Mm-hmm.

    27. GB

      ... but for whatever reason, they all just click in just the right way.

    28. DP

      Right.

    29. GB

      So I think, um, that seems like the most obvious unit of selection. You would have, like, packages, I guess possibly, like, department units where you have a programmer and a manager type, then you have maybe a secretary type or maybe a financial type, a legal type. This is the default package where you just copy everywhere you need a new unit. And at this level, you can start evolving them and making random variations to each of the packages-

    30. DP

      Mm-hmm.

  3. 4:38–6:36

    Isaac Newton's theory of progress

    1. GB

      best.

    2. DP

      By when could one have foreseen the singularity? So obviously, Moravec and others are talking about it in the '80s and '90s, but when was the earliest you could have seen where things are headed?

    3. GB

      I think if you want to trace the genealogy there, you'd probably have to go back at least as far as Samuel Butler's Erewhon in 1872, or his essay before that. I mean, in 1863, um, he described explicitly his vision of a machine life becoming ever more developed until eventually it's autonomous, um, at which point it's a threat to the human race. And he concluded, "War to the death should be instantly proclaimed against them." That seemed really prescient for 1863.

    4. DP

      (laughs)

    5. GB

      Um, I'm not sure that anyone has given a clear singularity scenario earlier than that. The idea of technological progress was still relatively new at that point. Um, I love this example of Isaac Newton looking at the rate of progress in Newton's time, in his own, you know, contemporary time, and going, "Wow, there's something really strange here. Stuff is being invented now around us. We're making progress. How is that possible?" And then coming up with the answer, "Well, progress must be possible now because civilization gets destroyed every couple of thousand years, and all we're doing is reinventing and rediscovering the old stuff." That, that was actually his explanation for technological acceleration. We can't actually have any kind of real technological acceleration. It must be because the world gets destroyed periodically, and we just can't see past the last reset.

    6. DP

      You know, it, it almost is like Fermi's paradox but for different civilizations across time with respect to each other instead of aliens across space.

    7. GB

      Yeah, yeah. It turns out even Lucretius, around 1,700 years before that, was writing the same argument. He said, "Look at all these wonderful innovations and arts and sciences that we Romans have compiled together in the Roman Empire. This is amazing, but it can't actually be a recent acceleration of technology." Could that be real? Could there be, you know, progress? No. That's crazy.

    8. DP

      (laughs)

    9. GB

      Obviously, the world was just recently destroyed.

    10. DP

      Interesting.

  4. 6:36–10:39

    Grand theory of intelligence

    1. DP

    2. GB

      It is, yeah.

    3. DP

      What is the grand parsimonious theory of intelligence gonna look like? It seems like you have all these trends across different fields, like scaling laws in AI, like the scaling of the human brain when we went, uh, from primates to humans, the uniformity of the neocortex, many other things which seem to be pointing towards some grand theory that should exist which explains what intelligence is. And what, what do you think that will look like?

    4. GB

      So, the 10,000-foot view of intelligence that I think the success of scaling points to is that all intelligence is, is search over Turing machines. And I think anything that happens can be described by Turing machines of various lengths, and all that we're doing when we're doing learning or wh- when we're doing scaling is that we're searching over more and longer Turing machines, and we're applying them in every specific case. I think otherwise, there's kind of, you know, there's no general master algorithm, and there's no special intelligence fluid. It's just a tremendous number of special cases that we learn and then code into our brains.

    5. DP

      Yeah, I mean, when I think about... I don't know. Wh- wh- when I think about the way in which my smart friends are smart, it kinda just feels like a more, um, like a general horsepower kind of thing, right? Th- they've just got more juice, and that seems more compatible with this master algorithm perspective, whereas if, with this Turing machine perspective, I don't know, it doesn't really feel like they've got this long tail of Turing machines that they've learned. Uh, how, how, how does this picture account for variation in human intelligence?

    6. GB

      When we talk about more or less intelligence, it's just that they have more compute in order to do search over more Turing machines for longer. Um, I don't think there's, like, anything else other than that. So, you know, from any learned brain, you could extract small solutions to specific problems, because all the large brain is doing with the compute is finding them. Um, and that, that, that's why you're never kind of, you know, going to find any IQ gland. There's nowhere in the brain where if you hit it, you eliminate fluid intelligence.

    7. DP

      Mm-hmm.

    8. GB

      It, I just think that, you know, it'll turn out that, you know, this doesn't exist.

    9. DP

      Yeah.

    10. GB

      Because what your brain is doing is a lot of learning individual specialized problems, and then once those individual problems are learned, then they get recombined for fluid intelligence, and that's just, you know, like, intelligence. Typically, with a, a large neural network model, you can always pull out kind of a small model which does a specific task equally well because that's all the large model is, right? It's, it's just a gigantic ensemble of small models tailored to the ever-escalating number of tiny problems that you have been feeding them.

    11. DP

      So, if intelligence is just search over Turing machines, and of course, intelligence is tremendously valuable and useful, doesn't it make it all the more surprising that intelligence took this long to evolve in humans?

    12. GB

      Not, not really. Uh, I, I would actually just say that it helps explain why human-level intelligence isn't such a great idea and so rare to evolve. Because any small Turing machine could always be encoded more directly by your genes, right? With sufficient evolution.

    13. DP

      Yeah.

    14. GB

      You have these organisms where, like, their entire neural network is just hard-coded by the genes.

    15. DP

      Mm-hmm.

    16. GB

      So, if you could do that, obviously, that's way better than some sort of colossally expensive, unreliable, glitchy search process like what humans implement, right? Which takes whole days, in some cases, to learn, whereas, you know, it could be hardwired in right from birth. So I think for many creatures, like, it just doesn't pay to be intelligent because that's not actually adaptive. Um, there are better ways to solve the problem than a general-purpose intelligence. So in, in any kind of niche where it's, like, static, or where intelligence would be super expensive, or where you don't have much time because you're a short-lived organism, it's gonna be really hard to evolve a general-purpose learning mechanism when you could instead evolve one that's just tailor-made to the specific problem that you encounter.

  5. 10:39–21:04

    Seeing scaling early

    1. GB

    2. DP

      Mm-hmm. You're one of the only people outside of OpenAI who, in 2020, had this detailed empirical model of scaling, and I'm curious what processes you were using at the time which allowed you to see the picture that you painted in the scaling hypothesis post that you wrote at the time.

    3. GB

      So, I think if I had to give an intellectual history of that for me, I think it would probably start in the mid-2000s when I was reading Moravec and Ray Kurzweil. Um, at the time, they were making this kind of fundamental connectionist argument that if you had enough computing power, um, that that could result in discovering the neural network architecture that matches the human brain, and until that happens, until that, that amount of computing power is available, AI just seemed basically futile.

    4. DP

      Right.

    5. GB

      And me, I think I, I found this argument very unlikely, um, bec- because it's very much a kind of, "Build it and they will come," view of progress, which I just didn't think was correct. Um, I, I thought that it just seemed ludicrous to suggest that, you know, just because you'd have some, like, really big supercomputer out there, um, which matches the human brain, then that would kind of just summon out of non-existence the correct algorithm. Algorithms are, are really complex. They're hard. Um, they, they, they require deep insight, or at least I thought they did, um, and, and it seemed like really difficult mathematics. You can't just, like, buy a bunch of computers and then expect to get this advanced AI out of it. Um, it, it just seemed like totally magical thinking. So I knew the argument... but I was super skeptical, and I didn't pay too much attention. Um, but then Shane Legg and some others were very big on this, um, in the, the years following. And as part of my interest in transhumanism and, and LessWrong and AI risk, I was paying close attention to Legg's blog posts in particular, um, where he's kind of extrapolating out the trend with updated numbers from Kurzweil and Moravec. And he's giving these kind of very precise predictions about how, you know, we're going to get the first generalist-

    6. DP

      Mm-hmm.

    7. GB

      ... uh, system around 2019 as Moore's law keeps going, and that by 2025, we would have kind of human-ish agents with generalist capabilities. And that by 2030, he said we should have AGI. So along the way, um, you know, DanNet and AlexNet came out, and when those came out, I was like, "Wow, um, this seems like a very impressive success story for the, the connectionism view." Um, but is it just an isolated success story or, you know, is this what-

    8. DP

      Mm-hmm.

    9. GB

      ... Kurzweil and Moravec and Shane Legg had been predicting, that we would get GPUs and then get better algorithms, would just kind of show up? Um, so I started thinking to myself that, you know, this i- this is something, it's a trend to keep an eye on, um, and maybe it's not quite as stupid an idea, um, as I originally thought. And I just keep reading deep learning literature, noticing again and again that the dataset size just kept getting bigger. The models seemed to keep getting bigger. The GPU count slowly crept up from one GPU, you know, the cheapest consumer GPUs, to two, and then eventually they were training on eight. And you can just see the fact that the neural network just kept expanding from these incredibly niche individual use cases, which do next to nothing. Um, the use just kept getting broader and broader and broader. I would say to myself, "Wow, is there anything that CNNs can't do?" 'Cause I just see people applying CNNs to something else, you know, ev- every individual day on arXiv.

    10. DP

      Mm-hmm.

    11. GB

      This gradual trickle of drops kind of just kept hitting me in the background as I was going on, um, with my life. You know, every, every few days, like a- another one would drop, and I'd go like, "Huh, um, you know, maybe intelligence really is just, like, a lot of compute-"

    12. DP

      Mm-hmm.

    13. GB

      "... applied to a lot of data, applied to a lot of parameters. Um, maybe Moravec and Legg and Kurzweil were right." And I'd just note that and kind of continue on thinking to myself, like, "Huh, if that was true, it would have a lot of implications." So, I think there wasn't really, like, a eureka moment there. It was just continuously watching this trend that no one else seemed to see, except possibly a handful of people like Ilya Sutskever, um, or Schmidhuber, um, and I would just pay attention and notice that the world over time looked more like their world than it looked like my world, um, where algorithms are super important and you need, like, deep insight to do stuff, you know? Um, their world just kept happening, and then GPT-1 came out, and I was like, "Wow, this unsupervised sentiment neuron is just learning on its own," right? Um, that seemed pretty amazing. Um, it also was a very compute-centric view. You just build the transformer, and the intelligence will come. And then GPT-2 came out, and I had this holy shit moment. You look at the prompting and the summarization, like, "Holy shit, do we live in their world?"

    14. DP

      Mm-hmm.

    15. GB

      And then GPT-3 comes out, and that was really the crucial test. It was a huge, huge scale-up, one of the biggest scale-ups in all of neural network history going from GPT-2 to GPT-3. And it wasn't like it was a super narrow specific task like Go. It- it- it really seemed like it was the crucial test. If scaling was bogus, then the GPT-3 paper should have just been totally unimpressive and wouldn't show anything that important. Whereas if scaling were true, you would just automatically be guaranteed to get so much more impressive results out of it than you had seen with GPT-2. So I opened up the first page, maybe the second page, and I saw a few-shot learning chart, and I'm like, "Holy shit, we are living in the scaling world. Legg and Moravec and Kurzweil were right." Then I turned to Twitter, and everyone else was like, "Oh, you know, th- this shows that scaling works so badly. Why, it's, it's not even state of the art." And that, that made me really angry.

    16. DP

      (laughs)

    17. GB

      I had to write all this stuff up. Um, someone was wrong on the internet.

    18. DP

      (laughs) Um, so I, I remember 2020. At the time, I feel like a lot of people were writing best-selling books about AI. Like, it was definitely a thing people were talking about, but people were not noticing maybe the most salient things in retrospect, which is LLMs, GPT-3, scaling laws. And so all these people who are talking about AI but missing this crucial crux, what were they getting wrong?

    19. GB

      I think for the most part, they were suffering from two issues.

    20. DP

      Mm-hmm.

    21. GB

      Um, first, I think they hadn't really been paying attention to all of the scaling results before which were relevant. Um, they hadn't really appreciated the fact that, for example, AlphaZero was discovered in part by DeepMind doing Bayesian optimization on hyperparameters and noticing that you could just get rid of more and more of the tree search and get better models.

    22. DP

      Mm-hmm.

    23. GB

      That was a critical insight, I think, um, which could only have been gained by having so much compute power that you could afford to train many, many versions and see the difference that that made.

    24. DP

      Mm-hmm.

    25. GB

      Similarly, I think they, those people kind of simply just, like, didn't know about the, the Baidu paper on scaling laws from 2017, um, which showed that the scaling laws just keep going and going forever practically. Um, it should have been the most important paper of the year, um, but I think that, you know, a lot of people just didn't prioritize it.

    26. DP

      Mm-hmm.

    27. GB

      It didn't have any immediate implication, and so it sort of just got forgotten. Um, people were too busy discussing transformers or AlphaZero or something at the time to, to really notice it. So that was one issue, um, and I think another issue is that they shared the basic error that I was making about algorithms being more important than compute.

    28. DP

      Mm-hmm.

    29. GB

      Um, this was in part... I think due to a systematic falsification of the actual origins of ideas-

    30. DP

      Mm-hmm.

  6. 21:04–22:54

    AGI Timelines

    1. GB

      GPUs.

    2. DP

      Right. What do your timelines look like over the last 20 years? I- is it just, is AI just getting monotonically closer over time?

    3. GB

      Yeah, I would say it was very far away from, like, 2005 to 2010. It was somewhere well past, like, 2050. It was close enough that I thought I might live to see it, but I was, you know, not actually sure if there was any reasonable chance. But once AlexNet and DanNet came out, um, then it just kind of kept dropping at a rate of, like, two years per year-

    4. DP

      (laughs)

    5. GB

      ... every year-

    6. DP

      Right.

    7. GB

      ... uh, basically until now.

    8. DP

      Yeah.

    9. GB

      We just kept hitting on barriers to deep learning doing better, and I think regardless of how it was doing it, it was obviously getting way better. It just seemed like none of the alternative paradigms were really doing that well, and this one was doing super well.

    10. DP

      Was there a time that you felt you updated too far?

    11. GB

      Yeah, there, there were a few times where I thought I had overshot. Um, I thought people over-updated on AlphaGo.

    12. DP

      Mm-hmm.

    13. GB

      Um, they, they went too far on AI hype with AlphaGo, I think, and then afterwards, when pushes into big reinforcement learning efforts had kind of fizzled out, like post-DOTA, um, as the reinforcement learning-

    14. DP

      Mm-hmm.

    15. GB

      ... wasn't working out for solving all those hard problems outside of the simulated game universes, then I started thinking, "Oh, okay, maybe we kind of overshot." But then GPT came out of nowhere and basically erased all of that. It was kind of this, like, "Oh, shit. Uh, here's how RL is going to work. It's going to be the cherry on this cake, and we're just going to focus on the cake for a while." And now we- we've actually figured out a good recipe for baking a cake, which wasn't true before. Before, it seemed like you were going to have to kind of brute force it end to end from the rewards-

    16. DP

      Mm-hmm.

    17. GB

      ... but now you can do the LeCun thing-

    18. DP

      Right.

    19. GB

      ... of, like, learning fast on generative models and then just doing a little bit of RL on top to make it do

  7. 22:54–26:29

    What to do in remaining 3 years until AGI

    1. GB

      something specific.

    2. DP

      Right. Now that you know that AI is a thing that is coming, d- basically, what's your thinking around how you, how you see your role in this timeline and also what your, how you're thinking about how to spend these next few years?

    3. GB

      Yeah, I've been thinking about that, uh, quite a lot, w- what do I want to do, you know?

    4. DP

      Right.

    5. GB

      Um, and, and what would be useful to do?

    6. DP

      Mm-hmm.

    7. GB

      I'm doing things now because I want to do them, um, regardless of whether it will be possible for an AI to do them in, like, three years. I, I do something because I want to, because I like it, you know, I find it funny or whatever, um, or maybe I think carefully about kind of just doing the human part of it, like laying out a proposal or something. Um, if you take seriously the idea of getting AGI in just a few years, you don't necessarily have to implement stuff and do it yourself.

    8. DP

      Mm-hmm.

    9. GB

      You, you can sketch out clearly, like, what you want and why it would be good-

    10. DP

      Mm-hmm.

    11. GB

      ... um, and then how to do it.

    12. DP

      Right.

    13. GB

      And then basically just wait for the better AGI to come along and actually do it then.

    14. DP

      Right.

    15. GB

      Un- unless, you know, there's some really compelling reason to do it right now and pay the cost, um, in terms of scarce time.

    16. DP

      Yeah.

    17. GB

      But otherwise, I, I'm trying to write more, uh, about what isn't recorded. Things like preferences and, and desires and evaluations and judgments, uh-... things that A- an AI couldn't replace, even in principle. The way I like to put it is that the AI kind of can't eat ice cream for you, right? It- it can't decide for you which kind of ice cream you like. Um, only you can do that. And if anything else did, it would just be worthless basically, um, because it's not your particular preference. And that's kind of the rubric for me, right? Like, is this something that I want to do regardless of any future AI because I enjoy it? Or is it something where I'm doing only the human part of it maybe and the AGI can later on do it? Um, or is this writ- writing down something that's unwritten today, um, and thus helping kind of the future AI versions of me? So if it doesn't fall under one of those three, I've been trying to basically, like, not do it. Um, and if you look at it that way, I think many of the projects that people do right now, uh, basically have, like, no lasting value.

    18. DP

      Mm-hmm.

    19. GB

      Right? They're- they're doing things that they don't enjoy, um, which, you know, record nothing ephemeral of value that, uh, couldn't be inferred or generated later on. And I think they're at best kind of getting two or three years of utility out of whatever they're doing before it could've been done by an AI system.

    20. DP

      Wait, your timeline for when AI could write a Gwern-quality essay is two to three years? (laughs)

    21. GB

      I mean, I have ideas, uh, about how to make it possible.

    22. DP

      Yeah.

    23. GB

      Um, which might not require AGI if it kind of combined my entire corpus. But I think many potential essay ideas are already basically mostly done in my corpus. So you don't need to be, like, super intelligent to pull it out. But I mean, let's, you know, talk about AGI in general. I think the Anthropic timeline of 2028 seems like a good kind of personal planning starting point, where even if you're wrong, you probably weren't going to do a lot of projects within the next three years anyway. Um, so it- it's not like you really lost much by instead just writing down the description. You can always kind of go back and do it yourself later

  8. 26:29–30:50

    Influencing the shoggoth with writing

    1. GB

      if you're wrong.

    2. DP

      So, uh, you wrote an interesting comment about getting your work into the LLM training corpus. You wrote, quote, "There has never been a more vital, hinge-y time to write." And I'm wondering whether you mean that in the sense of you are going to be this drop in the bucket that's steering the Shoggoth one way or another? Or do you mean it in the sense of making sure your values and persona persist somewhere in latent space?

    3. GB

      I mean both. Um, you know, I think by writing, you're voting on the future of the Shoggoth using some of the few currencies it acknowledges, uh, right? Like tokens that it has to predict. If you aren't writing, uh, y- you're kind of abdicating the future, or abdicating your role in it. If you think it's enough to just be a good citizen to vote for your favorite politician, you know, to pick up litter and recycle-

    4. DP

      (laughs)

    5. GB

      ... the future doesn't care about you.

    6. DP

      Yeah.

    7. GB

      There are ways to influence the Shoggoth more, but not many. Um, and if you don't already occupy a handful of key roles or work at a frontier lab, your influence basically rounds off to zero, uh, I think far more than ever before. If there are values you have which are not expressed yet in text and if there are things that you like or want, um, if they aren't reflected online, then to the AI, they basically don't exist. Um, and that is dangerously close to won't exist.

    8. DP

      Mm-hmm.

    9. GB

      Um, you're also creating a sort of imm- immortality for yourself personally, right? Like, you aren't just creating a persona. You are creating your future self too, right? What self are you showing the LLMs and how will they treat you in the future? I give the example of Kevin Roose discovering that current LLMs, all of them, not just GPT-4, now mistreat him because of his interactions with Sydney.

    10. DP

      (laughs) Right.

    11. GB

      Um, which revealed him to be a privacy-invading liar. And they know this whenever they interact with him or discuss him. Usually when you use an LLM chatbot, it doesn't dislike you personally.

    12. DP

      (laughs)

    13. GB

      Um, on the flip side, it also means that you can try to write for the persona that you would like to become to mold yourself in the eyes of the AI and thereby help kind of bootstrap yourself.

    14. DP

      So, uh, things like the Vesuvius Challenge, for example, show us that we can learn more about the past than we thought possible, that they've leaked more bits of information, uh, that we can recover with new techniques. And if you apply the same thinking to the present and you think about what the future superhuman intelligences will be trying to uncover about the- the current present, um, what- what kinds of information do you think are gonna be totally inaccessible to- to the transhumanist historians of the future?

    15. GB

      Yeah, I think, um, any kind of stable, long-term characteristics, the sort of thing you would still have even if you were hit on the head and had amnesia. Anything like that will definitely be recoverable from all of the traces of your writing, assuming you're not pathologically private and destroy everything possible.

    16. DP

      (laughs)

    17. GB

      Uh, th- that should all be recoverable. Um, what won't be recoverable will be everything that you could forget ordinarily. Um, so autobiographical information, maybe how you f- felt like at a particular time, what you thought of some specific movie. All of that is the sort of thing that vanishes and can't really be recovered from traces afterwards. And if it wasn't written down, then it isn't written down.

    18. DP

      Listening to Gwern talk about his process, how he obsesses over his favorite technical rabbit holes and refines ideas over years, makes me think about the kind of person that Jane Street wants to hire. Jane Street is a very successful quantitative trading firm. They are building state-of-the-art ML-based trading systems. I have a bunch of friends who work there, and I can tell you that their culture is intellectually unique. If you're curious, rigorous, and want to solve interesting technical puzzles, then Jane Street is the place for you. You'll get to work with some of the smartest people in the world, and you can join Jane Street from any technical field, including CS, physics, and math. They're always hiring full-time, and their summer internship applications are now open. And if you really wanna stand out, they just launched their annual Kaggle competition, organized by last year's winner, who they hired. Go to janestreet.com/dwarkesh to learn more.

  9. 30:50–33:52

    Human vs artificial intelligence

    1. DP

      All right, back to Gwern. What is the biggest unresolved tension in your worldview?

    2. GB

      The thing that I swing back and forth on the most is the relationship between human intelligence and neural network intelligence. It's just, it's not clear in what sense they are two sides of the same coin, or one is like an inferior-

    3. DP

      Mm-hmm.

    4. GB

      ... version of the other. This is something that I constantly go back and forth on. I, one day I'll be like, "Humans are awesome," and then the next time, like, "No, neural networks are awesome." Or, "No, both suck."

    5. DP

      (laughs)

    6. GB

      Or maybe I'll say, "B-both are awesome, just in different ways." Um, so every day, I, I find that I'm arguing with myself a little bit about why each one is good or bad or how. Um, what, what's, you know, the whole deal there with things like GPT-4 memorizing but not being creative? Why do humans not remember anything, but we still seem to be so smart?

    7. DP

      Mm-hmm.

    8. GB

      Um, one day I'll argue that language models are, are sample-efficient compared to humans. The next day, I feel like I'm arguing the opposite.

    9. DP

      You know, uh, y- one of the interesting points you made to me last year was that AI might be the most polymathic topic to think about, because there's no field or discipline that is not relevant to thinking about AI, right? So obviously, computer science, hardware, you need that, but even things like primatology and understanding what changed between chimp and human brains, or the ultimate laws of physics that will constrain future AI civilizations, you know, that, that's all relevant to understanding AI. And I, I wonder if it's because of this polymathic nature of thinking about AI that you've been especially productive in thinking about AI.

    10. GB

      Yeah, I'm not sure that it was necessary. When I think about others who are correct, like Shane Legg or Dario Amodei, um, they don't seem to be all that polymathic. Uh, they, they just have broad intellectual curiosity, broad general understanding, y- you know, absolutely. Um, but I don't think they are absurdly-

    11. DP

      Mm-hmm.

    12. GB

      ... polymathic. Um, you know, c-clearly you could get to the correct view without being polymathic. That's just how I happened to come to it at this point, um, and the connection is one that I'm kind of like making post hoc.

    13. DP

      Mm-hmm.

    14. GB

      Y- uh, it wasn't like I was using primatology to kind of justify scaling to myself, right? Uh, uh, it's more like I'm now using scaling, uh, to think about primatology-

    15. DP

      Mm-hmm.

    16. GB

      ... because obviously, if scaling is true, it has to tell us something about humans and monkeys and other forms of intelligence. It just has to.

    17. DP

      Mm-hmm.

    18. GB

      Um, if that works, it can't be a coincidence and just be totally unrelated. Um, I, I refuse to believe that there are two totally unrelated kinds of intelligence or paths to intelligence, where humans, monkeys, guppies, dogs are all one thing, and then you have neural networks and computers that are a distinct thing-

    19. DP

      Mm-hmm.

    20. GB

      ... and they have absolutely nothing to do with each other.

    21. DP

      Right.

    22. GB

      I think that's just kind of like obviously wrong.

    23. DP

      Mm-hmm.

    24. GB

      Um, they, they can be two sides of the same coin. They can obviously have obscure connections. Um, may- maybe one form can end up being better or whatever, um, they just can't be completely unrelated.

    25. DP

      Right.

    26. GB

      Um, as if humans, like, finally got to Mars, and then simultaneously, a bunch of space aliens landed on Mars for the first time, and that's how we met, right? You would never believe that.

    27. DP

      (laughs)

    28. GB

      It would just be too absurd

  10. 33:52–38:48

    Rabbit holes

    1. GB

      of a coincidence.

    2. DP

      What is it that you try to maximize in life?

    3. GB

      I maximize rabbit holes.

    4. DP

      (laughs)

    5. GB

      Um, I, I love more than anything else falling into a new rabbit hole.

    6. DP

      Mm-hmm.

    7. GB

      Um, that's what I really look forward to, like, this sudden kind of new idea or area that I had no idea about, um, where I can suddenly fall into this deep hole for a while. Even things that might seem bad, um, are, are a great excuse for falling into a rabbit hole.

    8. DP

      Mm-hmm.

    9. GB

      One example, you know, I buy some catnip for my cat, and I wasted $10. Um, and then, you know, I, I find out that my cat's catnip immune, right? I, I now kind of fell into this rabbit hole on the question of, well, like, why are some cats catnip immune? Is this a common thing? Um, how does it differ in other countries? What alternative catnip drugs are there out there? And it turned out to be quite a few. Um, and you know, I, I was kind of wondering, "How can I possibly predict which drug my cat would respond to? And why are they reacting in these different ways?" Just a kind of wonderful rabbit hole of new questions and topics that I can master and get answers to or create new ones, um, just from, like, having this observation about my, my cat-

    10. DP

      Mm-hmm.

    11. GB

      ... and exhaust my, my interest until I find the next rabbit hole that I can dig and dive into.

    12. DP

      Right. What is the, um, longest rabbit hole you've gone on that didn't lead anywhere satisfying?

    13. GB

      Th- that would probably be my, um, very old work on the, the anime Neon Genesis Evangelion, which I was very fond of when I was younger. I put a ludicrous amount of work into just, like, reading everything ever written about Evangelion in English and trying to understand its development and why it is the way it is. I never really got a solid answer on that before I just, like, burned out on it. I actually do understand it now by sheer chance many years later. But at this point, I, I no longer care enough to write about it or try (laughs) to redo it or finish it.

    14. DP

      (laughs)

    15. GB

      Um, in the end, I think it all just wound up being basically, like, a complete waste. Um, I haven't used it or any, any of it in my other essays much at all. Um, that was really one, like, deep rabbit hole that I almost got to the end of, but I, I couldn't, like, quite clinch it.

    16. DP

      H- and then how do you determine when to quit a rabbit hole? And, and then also, how many do you have concurrently going on at the same time?

    17. GB

      Yeah, uh, you can really only explore, like, two or three rabbit holes simultaneously. Um, otherwise, you aren't putting, like, real effort. You're not really digging the hole, um, and it- it's not really a rabbit hole then, right? It's just something you're, like, somewhat interested in kind of passingly. A rabbit hole is really obsessive. Um, like, if, if you aren't obsessed with it, I think, a- and not, like, continuously, like, driven by it, uh, it- it's not a real rabbit hole. That's my view. Um, I'd say two or three max if you're spending a lot of time and effort on each one, uh, and, like, neglecting everything else. Um, as for when you exit a rabbit hole, you usually hit a very kind of natural terminus, where getting any further answers requires data that just don't exist, or you end up having questions that people don't know the answer to. You reach this point where everything kind of dies out, and you see no obvious next step. One example of this would be, like, when I was interested in analogues to nicotine that might be better than nicotine. Um, that was a bit of a rabbit hole, but I quickly hit the dead end that there just, like, are none.

    18. DP

      (laughs)

    19. GB

      Um, that was a pretty definitive dead end, and I couldn't get my hands on the metabolites of nicotine as an alternative. So, if there are no analogues and you can't get your hands on the one interesting chemical you find, well, that's that. That was, like, a pretty definitive end to that rabbit hole.

    20. DP

      Have you always been the kind of person who falls into rabbit holes? Wh- when did this start?

    21. GB

      Oh, yeah. Um, my, my parents could tell you all about that. Uh, I was very much your stereotypical nerdy, like, little kid, um, and having the dinosaur phase and the construction equipment phase and the submarine and tank phase.

    22. DP

      Yeah, I, I mean, I, I feel like a lot of kids are into those things, but they don't rabbit hole to the extent that, like, they're forming taxonomies about the different submarines and flora and fauna and dinosaurs, and they're, like, developing theories o- why, why they came to be and so forth.

    23. GB

      I think it's actually more that, um, people kind of grow out of being very into rabbit holes as a kid.

    24. DP

      Ah.

    25. GB

      Um, for me, it wasn't so much that I was all that exceptional in having obsessions as a kid. It's more that they never really stopped. Um, you know, the tank phase-

    26. DP

      (laughs)

    27. GB

      ... would just be replaced by my Alcatraz phase, um, where I would, I would go to the public library and check out everything that they had about Alcatraz. Um, that would be replaced by another phase where I was obsessed with ancient Japanese literature. Um, you know, I, I would check everything out at the library about Japanese literature before the haiku era, um, and just kind of, like, the, the process of falling into these obsessions kind of kept going for me.

  11. 38:48–43:00

    Hearing impairment

    1. GB

    2. DP

      By the way, do you mind if I ask how long you've been hearing impaired?

    3. GB

      Since birth. I, I've always been hearing impaired.

    4. DP

      And I assume that impacted your childhood and when you were at school.

    5. GB

      Oh, yeah. Absolutely. Hugely. Um, I went to a special ed school before kindergarten for hearing impaired and other handicapped kids. During school, it was very rough because, at the time, we had to use pairs of hearing aids hooked up to the teacher. Every class, I would have to go up to the teacher with a big brown box with these hearing aids so that she could use it. I always felt very humiliated by that, how, how it marked me out as different from other kids not being able to hear. The effects on socializing with other kids were just terrible because you're always a second behind, right, in conversation if you're trying to understand what the, the other person is saying. The hearing aids back then were pretty terrible. Um, they, they've gotten a lot better, but back then, they were just really bad. You would always be behind and, and feeling kind of like the odd person out. Even if you, you could have been, like, a wonderful conversationalist, you can't be if you're always just a second behind and kind of jumping into conversation late. When you're hearing impaired, you understand acutely how, how quickly conversation moves. Milliseconds kind of just separate the moment between you jumping into a conversation and everyone letting you talk and someone else talking over you, um, and you not getting to say anything. And it's just an awful experience if you're a kid who, who's already kind of introverted. Um, it's not like I was very extroverted as a kid or now, so that was always a barrier. Um, and then you had lots of, like, minor distortions, right, in, in your life. I had this weird fear of rain and water because it was drilled into me that I, I couldn't get the hearing aids wet because they were so expensive. I would always feel, um, kind of a low-grade stressful anxiety anywhere near a pool, like, a body of water, um, and I'd say even now, I always feel weird about swimming, which I kind of enjoy, but I'm always thinking to myself, "Oh, wow. I, I won't be able to see because I'm, I'm nearsighted. I won't be able to hear because I had to take off my hearing aid to go in. I can't hear anything that anyone says to me in the pool," which takes just a lot of the fun out of it.

    6. DP

      You have a list of open questions on your website, and one of them is why do the biographies of so many great people start off with, uh, traumatic childhoods? And I wonder if you have an answer for yourself. Um, uh, was there something about the effect that hearing impairment had on your childhood, your inability to socialize, that was, uh, s- somehow important to you becoming Gwern?

    7. GB

      Yeah, I think it definitely led to me being so much of a bookworm. Um, that's one of the things that you can do as a kid which is just completely unaffected by having any kind of hearing impairment. It also was just a way for me to get words and language. Even now, I think that I often speak words in an incorrect way because I only learned them from books. Um, it's the classic thing where you kind of, like, mispronounce the word because you learn it from a book and then... and not from actually, like, hearing other people sound it out and say it.

    8. DP

      Is your, uh, is your speech connected to your hearing impairment?

    9. GB

      Yes. Um, the, the Deaf accent is from the hearing impairment. It's funny, at least three people on this trip, um, to SF have already asked me where I am really from.

    10. DP

      (laughs)

    11. GB

      Um, it's very funny. Um, you look at me and you're like, "Oh, yes, he looks like a perfectly ordinary American." Then I open my mouth and people are kind of like, "Oh, gosh, uh, he's Swedish." Or, uh, you know, "Wow, may- m- possibly Norwegian. Um, I'll ask him where he's actually from. How did he come to America?" Um... I've b- I've been here the whole time.

    12. DP

      (laughs)

    13. GB

      Uh, (laughs) that, that's just how heri- uh, hearing-impaired people sound. Um, no matter how fluent you get, uh, you still bear the scars of, of, um, growing up hearing impaired. At, at least when you're born with it or from very early childhood, um, your cognitive development of hearing and speech is always a little off, um, even with therapy. One reason I don't like doing podcasts is that I have no confidence that I sound good, or at least sound nearly as good as I write. Um, maybe I'll put it that

  12. 43:00–47:43

    Wikipedia editing

    1. GB

      way.

    2. DP

      What, what were you doing with all these rabbit holes before you started blogging? Was, was there a place where you would compile them?

    3. GB

      Before I started blogging, I was editing Wikipedia.

    4. DP

      Uh-huh.

    5. GB

      Um, that was really kind of Gwern.net before Gwern.net.

    6. DP

      Yeah.

    7. GB

      Um, everything I, I do now with my site I would have done on English Wikipedia.

    8. DP

      Hmm.

    9. GB

      And if you go and read some of the articles, you know, I'm still very proud of them, like the Wikipedia article on Fujiwara no Teika. Um, and you would, you know, think pretty quickly to yourself if you're reading this, like, "Ah, yes-"

    10. DP

      (laughs)

    11. GB

      ...you know, "Gorn wrote this, didn't he?"

    12. DP

      Is it fair to say that the training required to make Gwern.net happened on Wikipedia?

    13. GB

      Yeah. I think so. Um, I, I learned far more from editing Wikipedia than I learned from any of my school or college training.

    14. DP

      Mm-hmm.

    15. GB

      Everything I end up learning about writing I learned by editing on Wikipedia.

    16. DP

      Oh, honestly, it sounds like Wikipedia is a great training ground. If you wanted to make a thousand more Gwerns, we, we should, we should just... this, this is where we train them.

    17. GB

      I think building something like an alternative to Wikipedia could be a good training ground. Um, for me it was beneficial to combine rabbit holing with Wikipedia because on Wikipedia, um, you know, they, they generally would not have many good articles on the thing that I was currently in this rabbit hole on. So it was this very natural progression from the relatively kind of passive experience of rabbit holing and being obsessed with something and learning about it where you just read everything you can about the topic to, to kind of compiling that and synthesizing it-

    18. DP

      Mm-hmm.

    19. GB

      ...onto Wikipedia. You go from piecemeal, kind of like a little bit here, um, there, picking up different things, to writing full articles. And once you're able to get to the point where you're writing full Wikipedia articles that are good and summarize all your work, now you can go off on your own and pursue entirely different kinds of writing, um, now that you've, like, learned to complete things and get them across the finish line. It would be pretty difficult to do that with the current English Wikipedia. It's objectively just a, a much larger Wikipedia than it was back in, in like 2004. Um, not only are there far more articles filled in at this point, um, the editing community is also just much more hostile to content contribution, particularly, like, very detailed, obsessive, rabbit holey kind of research projects.

    20. DP

      Mm-hmm.

    21. GB

      They, they would just, like, delete it or tell you that, um, you know, it's not good for original research or, uh, or that you're not using approved sources. Possibly you'd have someone who just kind of decided to get their jollies that day by deleting large swaths of your-

    22. DP

      (laughs)

    23. GB

      ...like, your specific articles. That, of course, is going to make you, like, very angry and make you probably just want to quit and leave before you really get going. So I don't quite know how you would figure out this alternative to Wikipedia, one that kind of like empowers the-

    24. DP

      Right.

    25. GB

      ...rabbit holer as much as the old Wikipedia did.

    26. DP

      Right.

    27. GB

      When you're an editor with Wikipedia, you have this very, like, empowered attitude-

    28. DP

      Mm-hmm.

    29. GB

      ...because you know that anything in it could be wrong and you could be the one to fix it. Um, if you see something that doesn't make sense to you, that could be an opportunity for an edit. That was at least, um, the, the Wiki attitude. Um, anyone could fix it and anyone, right, includes you.

    30. DP

      When you were an editor on Wikipedia, was that your full-time occupation?

  13. 47:43–50:20

    Gwern.net

    1. GB

    2. DP

      (laughs) Uh, and then wh- uh, when did you start blogging on Gwern.net? Was that... I assume that was after the Wikipedia editor phase, but was that after university?

    3. GB

      Uh, it wa- it was afterwards. I had graduated and the Wikipedia community had been kind of slowly moving in this direction that I didn't like. Um, it was triggered by the, um, Seigenthaler incident, which I feel like was really the defining moment in the trend toward deletionism on Wikipedia. It just became ever more obvious that Wikipedia was not the site that I joined and loved to edit and rabbit hole on and fill in, and that if I continued contributing, um, I was often just kind of wasting my effort. I began thinking about writing more on my own account, um, and then moving into these kind of non-Wikipedia sorts of writings, right? Like persuasive essays, non-fiction, commenting or, or possibly even, you know, fiction, kind of like gently moving in the direction, um, and b- beyond things like Reddit and LessWrong comments to starting my own kind of more long-form writing.

    4. DP

      And wha- what was your first big hit?

    5. GB

      Silk Road. Um, I'd been a little bit interested in Bitcoin, um, but not, but not too seriously interested in it because it, it was not obvious to me that it was going to work out. Um, or even honestly was like technologically feasible. Um, but when Adrian Chen wrote his Gawker article about buying LSD on s- off of like Silk Road, um, all of a sudden I did a complete 180. I had this moment of like, "Holy shit. Um, this is so real that you can literally like buy drugs off of the internet with it." So I looked into the Chen article and it was very obvious to me that people wanted to know what the ordering process was like. They wanted more details about what it's like because the article was just like very brief about that.

    6. DP

      Mm-hmm.

    7. GB

      So I thought, okay, I'm interested in nootropics. Um, I'm interested in drugs. I will go and use Silk Road and then I will document it for everyone. Um, instead of everyone kind of like pussyfooting around online and saying, "Oh, a friend of mine ordered off Silk Road and it worked," um, none of that bullshit.

    8. DP

      (laughs)

    9. GB

      Uh, I, I will just document it straightforwardly. Um, so I ordered some Adderall, um, I, I think it was, and documented the entire process with screenshots and then wrote some more on the kind of like intellectual background. Um, and that was a huge hit when I published it.

    10. DP

      Mm-hmm.

    11. GB

      Um, it wa- it was hundreds of thousands of hits. Uh, it's crazy. Even today when I go to the Google Analytics charts, um, you can still see Silk Road spiking vertically like crazy and then falling back down. Um, nothing else really comes near it in terms of traffic. That, that was really quite something to see things kind of go viral like that.

  14. 50:20–54:30

    Counterfactual careers

    1. DP

      What are, um, what are the counterfactual career trajectories and life paths that could have been for you if you didn't become an online writer? What, what might you be doing instead that seems plausible?

    2. GB

      I, I think I definitely could have been an AI researcher, um, or possibly in, in like management at one of the big AI companies. Um, I think I would've regretted not being able to write about stuff. Um, but I would've taken satisfaction in kind of like making it happen and putting my thumbprint on it. Um, those feel like totally plausible counterfactuals.

    3. DP

      And why didn't you?

    4. GB

      I kind of fell off of that track very early on in my career, um, when I found the curriculum of Java to be, you know, e- excruciatingly boring and painful.

    5. DP

      (laughs)

    6. GB

      Uh, and so I just dropped out of computer science and that kind of put me off that track-

    7. DP

      Right.

    8. GB

      ... uh, early on. And, and then I think, you know, various early writing topics made it hard to transition in any other way than, than starting a startup, um, which I'm not really temperamentally that suited for.

    9. DP

      Mm-hmm.

    10. GB

      Things like writing about the dark net markets or behavioral genetics. Um, these, these are kind of topics that don't really scream great hire to many potential employers.

    11. DP

      (laughs) Has, has agency turned out to be harder than you might have thought initially? Because we have these models that seem like they're smart enough that they should do all the individual things that a software engineer does. For example, um, all the code they might write, all the individual pull requests, but it just seems to be like a really hard problem to get them to act as a coherent autonomous software engineer that puts in his eight hours a day.

    12. GB

      Yeah. I, I think agency is in many senses actually easier to learn than we would've thought 10 years ago. Um, but we actually aren't really learning agency at all in current systems. There's no kind of like selection for that. All the agency there is, is an accidental byproduct of training on data, rather than something anybody deliberately trained in. So from that perspective, it's miraculous that you can ask an LLM to try to do all these things and they have a non-trivial success rate. Um, if you told people 10 years ago, I think, that you could just behavior-clone on individual letters following one by one and then you would get this coherent action out of it and control robots and write entire programs, their jaws would drop and they would just say that you've been huffing too many fumes from DeepMind or something.

    13. DP

      (laughs)

    14. GB

      The, the reason that agency doesn't work is that we just have so little actual training data for it. An example of how you would do agency directly would be like Gato from DeepMind. There, they're, they're actually training agents. Instead we train them on these internet scrapes which merely encode the outputs of agents or occasional descriptions of agents doing things, that kind of thing. Um, there, there's no actual like logging of state, action, reward triplet sequences like a proper kind of reinforcement learning setup would have. I would say that, um, what's more interesting actually is that nobody wants to train agents in a proper reinforcement learning way today. Instead, everyone wants to train LLMs and then do everything with as little RL as possible on the backend.
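[Editor's note: the data contrast described above can be sketched schematically. This is a hypothetical toy example, not code from the episode; the example sentence and the trajectory tuples are invented for illustration.]

```python
# The two kinds of training signal being contrasted:

# 1) Internet-scrape pretraining data: just token sequences. Any "agency"
#    in the text is encoded only implicitly, as the *outputs* of human agents.
pretraining_example = "I opened the fridge, saw no milk, and walked to the store."

# 2) A proper reinforcement-learning trajectory: explicit
#    (state, action, reward) triplets, the signal you would log if you were
#    deliberately training an agent (as in Gato-style setups).
trajectory = [
    # (state,                  action,          reward)
    ("fridge_open, no_milk",   "walk_to_store", 0.0),
    ("at_store",               "buy_milk",      0.0),
    ("home_with_milk",         "done",          1.0),
]

# Behavior cloning trains on sequences like the first; RL trains on the
# second. The point above: web text contains almost none of the second kind.
total_reward = sum(r for _, _, r in trajectory)
print(total_reward)  # 1.0
```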

    15. DP

      Look, as Gwern just said, the biggest bottleneck in making these LLM models more useful has simply been the lack of good training data for these agentic workflows. This is an even bigger bottleneck than compute. Turing is solving this problem for every single AI lab that you've heard of: Gemini, OpenAI, Anthropic, Meta. They're basically the best-kept secret in AI. Turing provides complete post-training services for evals, SFT, RLHF and DPO to make models better at thinking, reasoning and coding. And it's all vetted by their AI and STEM experts. Turing makes it easy to make models multimodal, more factual, better at math, coding, advanced reasoning and agentic workflows. And they also make it easy to just get a solid performance benchmark. For those of you at labs or companies training models, Turing has a bunch of offerings that can help you today, including a detailed model evaluation from their AI experts. Go to turing.com/dwarkesh-

    16. DP

      ... to learn more. All right, back to Gwern.

  15. 54:30–1:01:32

    Borges & literature

    1. GB

      (graphics whoosh)

    2. DP

      What would a person like you be doing before the internet existed?

    3. GB

      I think if the internet didn't exist, I would have tried to probably make it in regular academia, um, and maybe narrowed my interests a lot more, something that I could publish on regularly. Or I could possibly have tried to opt out, you know, and- but become a librarian, like one of my favorite writers, Jorge Luis Borges. Um, he was a librarian until he succeeded as a writer. Um, of course, I-I've always agreed with him about imagining paradise as a kind of library.

    4. DP

      Mm-hmm.

    5. GB

      I regret that all the reading I do is now kind of on the computer, and I don't get to spend as much time in libraries, physical libraries. I- I genuinely love them, just like poring through the stacks, looking for random stuff. Some of the best times for me when I was in university, um, were always like going through these gigantic stacks of all sorts of obscure books, and just looking at like a random spine, you know, pu- pulling stuff off the shelf and reading obscure, old technical journals to see all the strange and wonderful things that they were doing and documenting back then, which now have just been totally forgotten.

    6. DP

      If you could ask, uh, Borges one question, what would it be?

    7. GB

      Oh. Um... He's a real hero of mine, um, so this is s- this isn't something I wanna have a bad answer to.

    8. DP

      Okay. Can I ask why he's a hero of yours?

    9. GB

      W- when I was younger, um, one of the science fiction books that really impressed me was Dan Simmons' Hyperion, um, and especially The Fall of Hyperion. In there, he alludes to Kevin Kelly's Out of Control book, um, which strongly features the parable of the Library of Babel. Um, from there, I got the kind of collected editions of Borges's fiction and non-fiction, and I just read through them again and again. I was blown away by the fact that you could be so creative with all of this polymathic knowledge that he had and erudition, and write these wonderful, entertaining, provocative short stories and essays. And I thought to myself, um, if I could be like any writer, any writer at all, I would not mind being Borges.

    10. DP

      Borges has a short poem called Borges and I, where he talks about, um, the, uh... ho- how he doesn't identify with the version of himself that is actually doing the writing and publishing all of this great work. And I- I don't know if you identify with that at all.

    11. GB

      Yeah. I think when I was a kid, I did not understand that essay, um, but I think I understand it now.

    12. DP

      What- what- what are other pieces of literature that you encountered, where now you really understand what they were getting at, but you didn't when you first came across them?

    13. GB

      Um, Ted Chiang's Story of Your Life comes to mind. I completely blew understanding it the first time that I read it. Um, I had to get a lot more context where I could actually go back and understand what his point was. Um, Gene Wolfe's Suzanne Delage, um, story was also a complete mystery to me. It took like 14 years to actually understand it, but I'm very proud of that one specifically. That was a very recent one.

    14. DP

      Oh, and then what- what did you figure out about Suzanne Delage?

    15. GB

      Yeah. So Gene Wolfe's Suzanne Delage, uh, is a very, very short story about this guy n- remembering not meeting a woman in his local town and thinking, "Oh, that's kind of strange." That's the whole story. Nobody has any idea what it means, even though we're told that it means something. Um, and Gene Wolfe, the author, is a genius writer, but nobody could figure it out for like 40 years. Last year, I figured it out.

    16. DP

      Hm.

    17. GB

      Um, it- it turns out it's actually a subtle retelling of Dracula, where Dracula invades the town and steals the woman from him. He's been brainwashed by Dracula in a very Bram Stoker way to forget it all, um, and every single part of the story is told by what's not said in the narrator's recollection. It's incredible.

    18. DP

      Hm.

    19. GB

      It's the only story I know which is so convincingly written by what's not in it.

    20. DP

      Huh. That- that- that's crazy that you figured that out. Um, the Ted Chiang story, the Story of Your Life, c- can you remind me what that one's about?

    21. GB

      The surface story is just about a bunch of weird aliens who come to Earth.

    22. DP

      Oh, right, right. That's- it's the same plot as Arrival.

    23. GB

      They- they ha- have this weird language, um, which i- didn't have a sense of time. The narrator learned to see the future, and then the aliens left.

    24. DP

      And then what was it that you realized about that story?

    25. GB

      The- the first time I read it, it struck me as a kind of stupid ESP story about seeing the future, very stupid, boring, kind of standard conventionalism, verbose, and like dragging, uh, in much kind of like irrelevant physics. Only a while after I, you know, first read it and was thinking about it did I understand that it was not about time travel or being able to see the future, you know. It- it's instead about a totally alien kind of mind, um, that's equally valid in its own way, in which you see everything is part of an already determined story heading to a predestined end.

    26. DP

      Mm.

    27. GB

      Um, this turned out to be mathematically equivalent and equally powerful as our conventional view of the world, um, events marching one by one to an unknown and changing future. That was the case where Chiang was just writing at too high a level for me to understand.

    28. DP

      Hm.

    29. GB

      I pattern-matched it to some much more common kind of stupid story.

    30. DP

      Mm-hmm. Um, how do you think about the value of reading fiction versus non-fiction?

  16. 1:01:32 - 1:11:03

    Gwern's intelligence and process

    1. GB

      that we actually face with AI.

    2. DP

      Do people tend to underrate or overrate your intelligence?

    3. GB

      I would say they overestimate it. Um, you know, they mistake for intelligence the fact that I remember many things, that I've written many things over the years. They imagine that, you know, if they sat me down that I could do it all spontaneously at the moment that they're, they're meeting me or talking to me. But many things that, um, I've thought about, I, I think I have the advantage of, of having looked at before over a long time, so I'm cheating. You know, when I talk to people, I may just be quoting something that I've already written or at least thought a lot about. So I, I think I come off as a lot smarter when you're reading me than I actually am. I would say I, I'm not really all that smart compared to many people I've known who update very fast on the fly. But, but in the end, i- it's, you know, it's the output that matters, right? So...

    4. DP

      Yeah. I, I guess there is an on-the-fly kind of intelligence, but there's another kind of intelligence which is this ability to synthesize things over a long period of time and then come up with grand theories as a result of all these different things that you're seeing. And I don't think that's just crystallized intelligence, right?

    5. GB

      Yeah. I... It's not just crystallized intelligence, but I think that if you could see all of the individual steps in my process, you'd be a lot less impressed. If you could see all the times where I kind of just note down something, like, "Hmm. Like, that's funny," or, you know, "Huh," like, anot- another example of that pattern. And if you just saw each particular step, I think you would say that the steps in isolation were very reasonable. It's only when that happens over a decade and you don't see the individual stuff that my output at the end looks like magic. One of my favorite quotes about this process is from the magicians Penn & Teller. Teller says, "Magic is putting in more effort than any reasonable person would expect you to." He tells this story about how they make cockroaches appear from a top hat, where the trick is that they researched and found special cockroaches and then found special Styrofoam to trap the cockroaches and arrange all of that, worked out all of those details just for this one single trick that they do. And in the, you know, in the audience, kind of, y- y- you think, "No reasonable person would do that, put in all of that effort to just, you know, get the payoff of this trick."

    6. DP

      Yeah.

    7. GB

      But they do it, and the result is cockroaches somehow appearing from an empty hat.

    8. DP

      Th- th- that's one of the interesting things about your process, 'cause there's a couple of writers, like Matt Levine or Byrne Hobart, who write an article every day, and I think of them almost like autoregressive models. Um, and then on you, there's, uh... On some of the blog posts, you can see the start date and the end date that you list on your website of when you've been working on a piece, and sometimes it's like 2009 to 2024. And I feel like that's just much more like diffusion, and you just keep iterating on the same image again and again. One of my favorite blog posts of yours is your b- blog post, uh, Ev- Evolution as a Backstop to RL, where you talk about evolution as basically a mechanism to learn a better learning process, and that explains why corporations don't improve over time, but biological organisms do. Um, I'm curious if you can walk me through the years that it took to write that. What, what was that process like step by step?

    9. GB

      Yeah. So the Backstop essay that you're referring to, um, is the synthesis of seeing the same pattern show up again and again, a kind of stupid, inefficient way of learning, um, which you use to learn something smarter, but where you still can't get rid of the original one entirely, right? So sometimes examples would just kind of connect to each other, um, when I was thinking about this. Other times, you know, once I started watching for this pattern, I would say, "Oh, yeah. You know, pain is a good example of this. Maybe this explains why humans have pain in the very specific way that we have it, um, when you can logically imagine other kinds of pain. And those other pains would be smarter, but nothing keeps them honest." So you just kind of chain them one by one, th- these individual examples of the pattern you're watching for, um, and kind of keep clarifying the central idea as you go.

    10. DP

      Right.

    11. GB

      Uh, Wittgenstein says that you can look at an idea from many directions and then go in spirals around it. And in an essay like Backstop, it was me kind of spiraling around this idea of-

    12. DP

      Mm-hmm.

    13. GB

      ... having many layers of learning all the way down.

    14. DP

      Mm-hmm. Um, and then so once you notice one example of this pattern, do you just... Like, you noticed this pain example, do you just keep adding examples to that? I mean, just walk me through the process over time.

    15. GB

      Yeah. So for that specific essay-

    16. DP

      Yeah.

    17. GB

      ... the first versions were about corporations not evolving.

    18. DP

      Hmm.

    19. GB

      Um, and then as I read more and more of the kind of meta-reinforcement learning literature, from DeepMind especially, um, I added in material about neural networks.

    20. DP

      Mm-hmm.

    21. GB

      Um, and then I kind of kept reading and thinking about the philosophy of mind papers that I had read. And I eventually nailed down the idea that pain, you know, might be another instance of this. Um, because pain, like, makes us learn, right? But we can't get rid of it because we need it to keep us honest. Um, and anyway, at that point, you have more or less the structure of the current essay.

    22. DP

      And then are there examples of...... blog post where it's not a matter of accumulating different instances of what you later realize is one bigger pattern, but rather, you just gotta have the full thesis at once.

    23. GB

      Um, for, for those essays where there is a kind of like individual eureka moment, there usually is still a bunch of disparate things that I've been making notes on that I don't even realize are connected. They just bother me for a long time, um, and kind of like sit there bothering me. And I keep looking for explanations for each individual one and just not finding them. It keeps bothering me, it keeps bothering me, and then w- one day I, I hit kind of that, that sudden moment that makes me go, "Bam! Eureka!"

    24. DP

      (laughs)

    25. GB

      Right? These, these all are connected. I, I just have to kind of like sit down and write this single gigantic essay-

    26. DP

      (laughs)

    27. GB

      ... that pours out, um, about, about it, and then it's done. That particular essay, um, will, you know, will, will just be done at that point, like right in one go. I might add in links like later on, or references, but it, it won't fundamentally change from that point.

    28. DP

      What's an example of an essay that had this kind of process?

    29. GB

      Yeah. So- someone asked about how I came up with one yesterday, as a matter of fact. It, it's one of my oldest essays, um, The, The Melancholy of Subculture Society. Um, for that one, I, I'd been reading about these miscellaneous things like David Foster Wallace on tennis, people on internet media like video games, um, and then one day it just kind of hit me, um, th- this feeling or, or, you know, observation that it's incredibly sad that we have all these subcultures and tribes online, and that they can find community together, um, but they're still incredibly isolated from the larger society. And, and then, you know, one day, a flash kind of just hit me about how beautiful, uh, and yet also sad this is, and I just sat down and I, I wrote down the entire thing more or less. Um, I haven't really changed it since that much at all. Uh, I've added more links and quotes and examples over time, but, but nothing important. The essence was just kind of this like flash, and I wrote it down while it was there.

    30. DP

      One of the interesting quotes you have in that essay is from David Foster Wallace when he's talking about the tennis player, Michael Joyce, and he's talking about the sacrifices that Michael Joyce has had to make in order to be top 10 in the world at tennis, which include things like being basically functionally illiterate because he's been playing tennis every single day since he was, you know, seven or something, and not really having any life outside of tennis. What are the Michael Joyce-type sacrifices that you have had to make to be Gwern?

  17. 1:11:03 - 1:19:16

    A day in the life of Gwern

    1. GB

      day to be important.

    2. DP

      So, so walk me through this process. You mentioned you, um, you read papers until your eyes bleed out at the end of the day. Uh, l- let's just start. You wake up in the morning and y- you get straight to the papers? Like, what, what does your day look like?

    3. GB

      Uh, so I mean, the workflow right now is more like I wake up, um, I do normal morning things, and then I clean up the previous day's work on the website. I'll, I'll deal with kind of various issues like formatting or spelling errors, um, and I, I kind of, you know, review it and, and think if I, I've properly collated everything and put it in the right places from the previous day. Sometimes I might have like an extra thought that I need to go in and add-

    4. DP

      Mm-hmm.

    5. GB

      ... um, or make a comment that I realize was important. After that, I, I often, you know, shamelessly just go to Twitter, um, or my, my RSS feed and just read a large amount, um, until, you know, maybe I get distracted by some comment or question from someone, um, and then do some writing on that. Um, somewhere, you know, usually in the evening, I, I often just get exhausted and try to go and do a real project or make a real contribution to something. Um, I'll actually sit down and work on whatever I'm supposed to have, you know, been working on that day. Uh, and then, and then I go to the gym. Um, by that point, I'm pretty burned out from everything. Um, yes, you know, I like going to the gym, not because I'm any kind of meathead or athlete or even really enjoy weightlifting, um, but just because I think it, it's, it's the thing I can do that's the most opposite from sitting in front of my computer reading.

    6. DP

      Yeah, this is your theory of burnout, right? That you just gotta do the opposite of...

    7. GB

      Yeah. Um, you know, the, the problem, I think, when people experience burnout is that you just feel kind of a lack of reward for what you're doing or what you're working on.

    8. DP

      Yeah.

    9. GB

      You just need to do something completely different.

    10. DP

      Right.

    11. GB

      Something as different as possible. Uh, maybe you could do better than weightlifting, but for me, you know, it, it does feel very different-

    12. DP

      (laughs)

    13. GB

      ... for anything that I do in front of a computer.

    14. DP

      I, I wanna go back to your process. So, uh-Every day, you're loading up all this context, you're reading all the RSS feeds and all these papers. And are you basically making contributions to all your essays, adding a little bit here and there every single day? Or are you building up some potential which will manifest itself later on as a full essay, a fully formed thesis?

    15. GB

      I would say it's more the latter one. Um, I think all the minor low-level additions and pruning and fixing I do is really not that important. Um, it's more just a way to make nicer essays. Um, it's a, it's a purely kind of aesthetic goal to make it as nice an essay as I possibly can. Um, and I, I'm really waiting to see kind of what happens next. What would be the next thing that, that I'll, um, you know, be provoked by to, to end up writing about? It's passing the time in between sudden eruptions. For many writers, you, you sort of like can't neglect this kind of gardening process, right? Um, you don't harvest every day. You have to tend the garden for a long time in between harvests.

    16. DP

      Yeah.

    17. GB

      Uh, if you start to neglect the gardening because you're gallivanting around the world, let's say you're going to book signing events, maybe you're doing all the publicity stuff, then you're not really like doing the work of-

    18. DP

      Yeah.

    19. GB

      ... of being in there tending the garden, um, and that's undermining your future harvest, um, even if you can't see it right now. If you ask kind of what is Tyler Cowen's secret to being Tyler Cowen, my guess would be that he's just really good at tending his garden, um, even as he travels a crazy amount. That would be his secret, that he's able to read books on a plane. Um, you know, I can't read books on a plane. He, he's able to write everything in the airport. I can do a little bit of writing in the airport, but not very much. Um, and he's also just very robust to the wear and tear of traveling. I'll be like collapsing in the hotel room after talking to people for eight hours. He's able to talk to people for eight hours and then go do podcasts and talk to someone for another f- four hours or whatever. Um, it's extremely admirable, but I just can't do that.

    20. DP

      How often do you get bored? Because it sounds like you're spending all your day reading different things. Um, are they all just inherently interesting to you, or do you just trudge through it, uh, even when it's not in the moment compelling to you?

    21. GB

      I don't think I get bored too easily, because I, I switch between so many-

    22. DP

      Mm-hmm.

    23. GB

      ... different topics. Um, even if I'm kind of sick of deep learning papers, well, you know, then I have tons of other things I can read or argue with people about. So, I don't really get bored, I just end up getting kind of exhausted. Um, you know, I, I have to kind of go off and do something else, like lift weights.

    24. DP

      Mm-hmm. What, what is your most unusual but successful work habit?

    25. GB

      Yeah, I think I get a lot more mileage out of arguing with people online than, like, pretty much any other-

    26. DP

      (laughs)

    27. GB

      ... uh, writer does.

    28. DP

      Yeah.

    29. GB

      I, I, you know, I'm trying to give a genuine answer here.

    30. DP

      (laughs)

Episode duration: 1:36:43

Transcript of episode a42key59cZQ
