Uncapped with Jack Altman

AI, Learning, and Podcasting | Dwarkesh Patel | Ep. 19

(If you enjoyed this, please like and subscribe!) In his early twenties, Dwarkesh Patel has become one of the leading podcasters, with nearly 1 million YouTube subscribers excited to consume his deeply researched interviews. Dwarkesh has caught the attention of influential figures such as Jeff Bezos, Noah Smith, Nat Friedman, and Tyler Cowen, who have all praised his interviews, the latter describing him as “highly rated but still underrated.” In 2024, he was included in TIME’s 100 most influential people in AI alongside the likes of Ilya Sutskever, Andrew Yao, and Albert Gu. Dwarkesh’s interviews span far beyond AI, his North Star being his curiosity and preparation.

We covered:
- Digital minds leading huge companies
- AI making us smarter vs. rotting our brain
- His approach to learning as his job
- Best-in-class interview preparation

Timestamps:
(0:00) Intro
(0:23) Skepticism around the timing of AGI
(6:07) Confidence in AI researchers
(7:17) Future utility of superintelligence
(11:23) Impact of scaling digital minds
(15:41) Driven by increases in compute
(17:17) Is AI making us smarter?
(21:03) AI’s impact on biology
(23:54) Interests outside of AI
(26:18) Chronology of his interests
(31:10) His approach to learning
(33:43) New thinking on human evolution
(40:44) Learning and the media
(45:52) Podcasting success
(48:53) Best in class interview preparation

More on Dwarkesh:
https://www.dwarkesh.com/
https://x.com/dwarkesh_sp

More on Jack:
https://www.altcap.com/
https://x.com/jaltma
https://linktr.ee/uncappedpod

Email: friends@uncappedpod.com

Dwarkesh Patel (guest) · Jack Altman (host)
Jul 30, 2025 · 52m

EVERY SPOKEN WORD

  1. 0:00–0:23

    Intro

    1. DP

      First of all, we just didn't realize how much we didn't know about human evolution. Just like the story you learned in high school, all of it is, like, at least somewhat false about how, when, where, who?

    2. JA

      What do you mean?

    3. DP

      Like, did it happen in Africa?

    4. JA

      Did it?

    5. DP

      A big chunk of it didn't.

    6. JA

      We have stuff right up to, you know, a certain amount of history, though, right?

    7. DP

      Yes.

    8. JA

      Okay. Yeah.

    9. DP

      [chuckles]

    10. JA

      That's good to know. At least there's something we can hold on to. [upbeat music] Dwarkesh, I've been really looking forward to this. Thanks for making time for it.

  2. 0:23–6:07

    Skepticism around the timing of AGI

    1. DP

      Thanks for having me on.

    2. JA

      So I want to start by talking about your thinking around the state of AI. You obviously are very close to it. You're a user of it. You have gone really deep with a lot of people who know it on many levels, and you recently wrote this really interesting, uh, blog post called "Why I Don't Think AGI Is Right Around the Corner." Um, and I wanna ask you a little bit about that and just this general topic. Um, a lot of my guests so far, probably myself included, have been, like, a little breathless, like, "You know, this is here. If we just sort of deployed all the AI research that we have, you know, or capabilities today, we would have, you know, insane GDP growth." I think you have a slightly different take than some of my other guests, so I wanted to start by asking you how you see the current state of AI.

    3. DP

      Mm. I'm in a similar position as you, where I've also interviewed a lot of people who are breathlessly anticipating, um, what's coming with AI, sometimes in a very optimistic way, in the case of the AI researchers. In other cases, they're worried that, like, the world's gonna end in two years. And I think what's changed my mind around how soon we're gonna get to these super transformative outcomes is just trying to use these AIs to help me with very simple, like, script-kiddie kind of tasks for my own podcast. And so I have a lot of friends who think, "Look, the reason the Fortune 500 isn't using AI all across their stack right now is because the management is too stodgy. They're, like, just, like, not being creative enough about how to get, um, o3 into their workflows." And look, I'm thinking a lot about how to use AI in my, like, podcast post-production setup. I've tried for a hundred hours to get it to be useful for me, and it hasn't been that useful. And I think that that's because it's just genuinely hard to get human-like labor out of these models, fundamentally because these models can't learn on the job the way a human can. So if you think about a human employee, probably for the first three to six months, they're not even useful, especially when it comes to knowledge work. The reason they become more useful over time is not mainly their raw intellect, although obviously raw intellect matters, but rather their ability to build up context, to learn from their failures in a very, um, very rich way, and to, um, interrogate them. And with the models currently, you just get, like, whatever they can do in a session. Uh, you talk to them for thirty minutes, and then they totally lose awareness or understanding of how your business works, what your preferences are, et cetera.
And a lot of tasks just require you to, like... you do a five-out-of-ten job at something, then you, like, talk to your boss, you, like, go out to the, uh, consumer, and then, like, you learn what went wrong. You, like, ask yourself what didn't, uh, go well, and you just, like, keep iterating on that. And they just can't do this on-the-job kind of training, which I think, like, is what makes humans valuable.

    4. JA

      Yeah, like, there's a certain set, particularly within, like, language tasks, and maybe we can get over to coding, which is sort of, like, a whole different beast. But-

    5. DP

      Sure

    6. JA

      ... within language tasks, it seems like there is a limit still to, like, how good it can be. And so if, for example, even if you're trying to either pull out the most interesting segment from a podcast or caption it and make clips that are postable, you know, those clips need to be perfect, and, like, a human still, you know, you're gonna trust a person-

    7. DP

      Yeah

    8. JA

      ... more to do that. Picking what's actually interesting might be kind of hard.

    9. DP

      But, but even there, so, um, offline, we were having a conversation about how to, how to write a tweet for something for your podcast, right? And then we were discussing, oh, well, one strategy might be, like, write it for a group chat. Maybe you can add that to, uh... Like, you would think this is exactly what LLMs are for, right? You- th- it's like, write the tweet.

    10. JA

      Yeah.

    11. DP

      This is, like, language in, language out, here's a system prompt. But, like, why is it... I, I assume you are, aren't getting that much use out of just, like, having the AI write your tweets for you.

    12. JA

      No.

    13. DP

      And why is that? By the way, I'm focusing on this not because writing tweets is, like, the most important thing in the economy. [chuckles]

    14. JA

      [chuckles] Yeah.

    15. DP

      This is just, like-

    16. JA

      Twenty-five, yeah. [chuckles]

    17. DP

      This is just, like, this is the first thing they should be able to do, right? Why are we not delegating this to them yet? If you posted something and you, like, notice it doesn't do well, you have this, um... You can, like, think about what went wrong. You also have, like, this experience of, uh, what your users want or what your followers want that these AIs can't pick up easily.

    18. JA

      Well, it's actually a reasonably high... So, like, think about you posting something on Twitter or Substack or whatever. Like, that bar for you is actually gonna be pretty high.

    19. DP

      Yeah.

    20. JA

      There's a lower bar of thing where, you know, language models are really useful, like customer support tickets, for example. You know, it works really well there 'cause you don't need the language to be perfect. You don't need insane nuance.

    21. DP

      Sure.

    22. JA

      And it's actually okay for a certain class of thing if it's right ninety-seven percent of the time, and if it knows how to, you know, say, "Hey, I'm wrong here." So I do think that what we're describing here still does drive a lot of GDP growth in certain areas at least because, you know, there's, you don't need the high bar everywhere.

    23. DP

      Yeah. I mean, I'd, I'd be curious to see what the actual numbers are on customer service employment. I think, uh, th- from what I understand, they're not, like, down that much.

    24. JA

      Yeah.

    25. DP

      So it's interesting, even in those areas, you're s- not seeing these transformative impacts.

    26. JA

      Yeah, I mean, the, the, the, the venture view against that would be that it's, like, the beginning of this massively exponential curve, and these things are, like, you know, in the first half of the first inning, but all the curves are like this.

    27. DP

      Yes.

    28. JA

      And so three years from now, it's gonna... It'll show up.

    29. DP

      I agree with that. Maybe not in just three years, but, um, I agree with that over the course of a decade, but that's just because I think this is such a big bottleneck to getting these models to be valuable that, like, people will want to solve it. Like, right now, OpenAI or Anthropic's revenue is on the order of ten billion, uh, ARR, and that's a lot, obviously. But, like, McDonald's and Kohl's make more money, and those companies aren't AGI, right? So, like, if you have real AGI, it's, like, trillions of dollars a year; that's what, like, humans around the world earn in, uh, wages.

    30. JA

      Yeah.

  3. 6:07–7:17

    Confidence in AI researchers

    1. JA

      I guess maybe then to the broader thing about, like, being, you know, AGI-pilled in general. Like, you obviously have spent a lot of time talking to-

    2. DP

      Yeah

    3. JA

      ... researchers. You're very close to that. Do you still feel as confident as ever in that, even if your timelines are different than, you know, maybe what you've gotten from some of your guests?

    4. DP

      There is one thing re- really interesting to observe, which is that, like, they have cracked reasoning. It just so happens that reasoning ended up being much easier than something we take for granted, which is just that day in, day out, you're gonna be picking up information in your workplace. But, you know, like, you go, go back to Aristotle, and w- his big take was like: "Look, well, the thing that makes humans special-

    5. JA

      Yeah

    6. DP

      ... uh, from other animals is that we can reason, and the other animals can't." And it, it's sort of funny that, like, these models just aren't that useful yet. They can't do almost anything, except for the one thing they can do is reason. Given the fact that these sort of, like, ambiguous capabilities do come online, um, that makes me think that, like, continual learning might also be something that, like, in the 10 years... It's also useful to remember, deep learning is, like, not that old.

    7. JA

      Yeah.

    8. DP

      It's, like, 13, uh, 14 years old, or at least, like, sorry, AlexNet is, like, that old, uh, when we started training these models. Yeah, uh, like, it's very possible to me that in a decade or two we

  4. 7:17–11:23

    Future utility of superintelligence

    1. DP

      find a solution to this problem.

    2. JA

      It's very interesting to hear you continue using language that's like, "You know, these models aren't that useful yet," which part of me very much agrees with, and part of me, it's, you know, it's like putting language to an experience that I have when I, you know, use them myself. The other side of it would be, you know, they are, you know, an amazing, you know, replacement for search in a lot of cases. They do things like code generation in, like, a very effective way. It seems like, you know, doctors are able to use them to, like, you know, handle a lot of, like, scribe work. So, like, there are these things that are, I would say, clearly working.

    3. DP

      Yes.

    4. JA

      Um, do you see it that way, too, where you're like, some things are highly useful?

    5. DP

      Yeah. No, 100%. I'm, I'm putting it more in the context of the real potential for AGI, just, like, a genuine replacement for human labour. I expect that to cause, like, um, a 10X increase in the level of growth, and so if it's on the scale of the internet, I'm like, "Oh, wow, this falls so much short-

    6. JA

      Uh-huh

    7. DP

      ... of what AGI could be," so compared to that, this is clearly not that useful yet. And, uh, a more tangible reason to expect this kind of change: well, one is just that the amount of labour supply dramatically increases. So I think people often, especially in tech, focus on how it will make a specific industry more productive, these, like, narrow productivity improvements. Whereas, no: imagine, like, a trillion people in the world who are each specialising and each, um, discovering new knowledge, or, um, we just get all these gains from comparative advantage as a result. But another is that because these minds are digital, they have advantages, even if they're, say, the same amount of intelligence-

    8. JA

      Mm-hmm

    9. DP

      ... specifically advantages in collaboration, that humans just can't have because of the way our minds work. Um, one, one example of this is, okay, suppose this problem is solved, where you can, like, actually learn on the job. Now, like, a human can, like, on the job, learn from, uh, learn from their work, and then so over the course of 20 years, they become a master of their craft. They're incredibly valuable. You're one such person, right? You've, like, picked up all this context in the tech industry, from running companies, from investing in companies. If we get models that have this human-like capability, not only could they learn on a single job, copies of the model are deployed all through the economy. They can amalgamate the learnings from basically doing every single job in the economy at the same time, and at that point, even if you don't have further algorithmic innovations, you would still have something that's functionally becoming a superintelligence. You've- but you have this, like, broadly deployed intelligence explosion.

    10. JA

      Yes.

    11. DP

      That's just one of the many ways in which the fact that they are digital just gives them-

    12. JA

      Like emergent intelligence, like ants or something.

    13. DP

      Yeah, exactly.

    14. JA

      Yeah. You're kind of describing a world where, like, the way that the impact happens is through a trillion new, you know, white-collar workers. There's another version of it where it never actually does exactly that, but does this other thing, which is just, like, a higher level of intelligence than anything we've ever encountered, and it, like, creates new paths and, like, identifies new ways to do things that humans could still then do. Sometimes the way I think about this is like the 400 IQ AI.

    15. DP

      Mm.

    16. JA

      Even if it can't do all the things that a person can do, it could help or fully identify new drugs, help us get to space more effectively, like, all sorts of innovations like that.

    17. DP

      Mm. I think a good way to think about this is maybe China. So, uh, okay, why has China been so successful in not only catching up in science and technology but, in fact, in many ways surpassing America in a lot of key domains? And obviously, China is full of lots of brilliant people, and so that does lend credence to your argument. I think the intelligence is a big part of it, but I think, more fundamentally, just, like, once you've hit a benchmark of intelligence, the scale is what makes China so successful, right? Just within manufacturing, there are 100 million people working in manufacturing in China who have built up all this process knowledge-

    18. JA

      Mm-hmm

    19. DP

      ... in whatever sub-domain is relevant to, uh, um, whatever is being built. So, uh, that scale, I think, is, like... China is graduating, I think, like, tens of millions of STEM graduates every single year; it's not that any one of them is super brilliant, it's just that each of them can specialise-

    20. JA

      Right

    21. DP

      ... in whatever radar technology that, uh, BYD needs or whatever, um, production

  5. 11:23–15:41

    Impact of scaling digital minds

    1. DP

      technology is needed.

    2. JA

      I'm thinking about this out loud, but it, it opens up the question of: Would more impact happen from a trillion more super connected, super collaborative, human-level intelligence people, or from one just, like, demigod-level intelligence who could, like, figure stuff out and tell us all what to do?

    3. DP

      Do we have evidence in Silicon Valley history that it's more of the latter? It seems to me that-

    4. JA

      Well, it depends.

    5. DP

      Yeah.

    6. JA

      You know, there's the great man theory thing-

    7. DP

      Right

    8. JA

      ... where it's like there are these special people who, you know, that's how the big leaps happen, whether it's, you know, like a Steve Jobs or Elon or whatever, where, like, there is some just outlier person who directs the resources-

    9. DP

      Right

    10. JA

      ... and that one person can pull greatness out.

    11. DP

      Yes. I haven't seen them up close as you have, but I agree that they've had a huge impact, people like Elon and Steve Jobs. It seems to me their impact has been more so a product of, like, "You will do this, otherwise I will throw a tantrum," um, which is good, right? Like, you should throw a tantrum. "And I'm gonna sleep in the office for years on end," and just, like, get people in the right place at the right time. But it's less so, like, only Elon can come up with how the fins on the SpaceX rocket should be designed, and therefore his, like, uber intelligence allows him to, like, design across five different hardware verticals.

    12. JA

      It's definitely something that's more to do with the leading of people and the clarity of vision, I would say, than with, like, um, engineering genius or something like that.

    13. DP

      Yes.

    14. JA

      Um-

    15. DP

      Yeah. But, I mean, this is actually another way to illustrate why digital minds are such an upgrade. Elon has obviously been super successful across so many different, uh, areas of technology. Um, but of course, he's just, like, one person. To the extent that you think there's something unique about him or about the small teams he had assembled, like early SpaceX or early Tesla, if they were digital, you could just replicate them. Like, the early SpaceX team: replicate them 1,000 times, throw them at 1,000 different hardware verticals-

    16. JA

      Yep

    17. DP

      ... and see what happens. Because you can't scale that with humans.

    18. JA

      The other thing I'm thinking about now, as we're talking, is could a digital intelligence be, like, a leader of, you know, 10,000 people, you know?

    19. DP

      Mm.

    20. JA

      And I do think that coordinating large groups seems to be incredibly important to getting big-

    21. DP

      Yes

    22. JA

      ... societal things done. So now I'm wondering, could you have a digital, a, you know, a digital mind at the top, or does it need to be a human?

    23. DP

      Yeah, I think it'd have a much easier time if it was digital. This is another one of the key advantages-

    24. JA

      Wow!

    25. DP

      ... of being digital. Um, because right now, Elon has the same 10 to the 15 FLOPS in his head that every single other human has. Now, this could be negative as well, right? Like, Xi Jinping also only has 10 to the 15 FLOPS in his head that every other human has. The ratio of compute or deliberation happening at the top versus through the hierarchy is just so lopsided, and that requires a lot of delegation. Now, that's good in a sort of, like, free society sense. You don't want the president to have that much control over your life. But if you, um-- within these specific domains where we want... Like, companies have this outer loop where if they fail, they can just go down, unlike a country. So there, it might make sense to have more of a m- have the company be the product of a single, coherent vision.

    26. JA

      Mm-hmm.

    27. DP

      Um, this is the founder mode idea, right? But obviously, that's limited by the fact that if a company grows to a certain size, it's just hard for a single person to monitor everything. If it was an AI, and especially if you could, like, inference-scale mega Elon, uh, the thing that's, like, running on a huge data centre that's dedicated to just him, mega Elon can, uh, read every pull request, every, every comms-

    28. JA

      Right

    29. DP

      ... input, output into the company. He can, like, micromanage every single employee, down to, like, the technician at the dealership.

    30. JA

      Yeah.

  6. 15:41–17:17

    Driven by increases in compute

    1. JA

      So maybe back on AGI. All the time that you spent with researchers and companies, and you really inspecting this, and you're very truth-seeking, you still do believe that AGI is gonna just, like, wake up? Like, you, you think it's all gonna happen still, or has anything, as you've learned over the last year or so, changed your mind, and you're like, "Well, it's a really cool idea, but it's not for sure it's gonna happen" to you?

    2. DP

      Yeah, I, I think the biggest consideration there is the progress for the last 10 years in AI-- I mean, more than 10 years, actually, but even going back to the '60s, the progress in AI, period, has been driven by increases in compute. And especially in the deep learning era, it's been, like, stupendous increases in compute dedicated to frontier systems. If you just look at public, uh, announcements of how big the compute runs are, it looks like the trend since, I think, 2012, 2016, has been, um, four X per year.

    3. JA

      Mm.

    4. DP

      So that, over four years, is 160X, the biggest system, uh, trained versus the one before, in terms of the compute used. That physically cannot continue past this decade, from, uh, how much energy is used to, like, what fraction of advanced chips-

    5. JA

      Mm

    6. DP

      ... at TSMC you need to procure, to even, like, the raw fraction of GDP. So then the progress would just have to come from algorithmic progress, or, I mean, it would just have to be that. And because this key input into AI progress would stop after the next five years, there is this dynamic where the yearly probability of AGI is, like, quite high now. Um, not in the sense that, like, it will happen, but there's, like, a decent chance every year until 2030. And then it just sort of craters-

    7. JA

      Mm-hmm

    8. DP

      ... 'cause then you're just like, "Okay, we, we just gotta think hard about what, what's missing. We can't just throw more compute at the problem."

    9. JA

      Yep,

  7. 17:17–21:03

    Is AI making us smarter?

    1. JA

      yep, that makes sense. Do you, do you feel right now like AI is making us smarter, or, uh, is it, like, increasing brain rot? And do you think, like, over time, that that goes in a particular direction?

    2. DP

      Did you see the, uh, METR uplift study-

    3. JA

      No

    4. DP

      ... from the other day?

    5. JA

      Mm-mm.

    6. DP

      Oh, it was super, it was super interesting. Um, METR is this organisation that does evals on basically how AI is progressing. They had a very interesting result the other day. So they had open-source, um, developers who work in repositories that have tens of thousands of stars. Uh, they did a randomised controlled trial with these people, where they would issue them a random pull request that was, uh, that was open in these repositories.

    7. JA

      Mm.

    8. DP

      And they'd work on it, in one case just by themselves, in another case with the help of, like, Cursor and Claude 3.7. And then they measured, in the case where they were working with the AI, one, how much do you think you were sped up?... um, and then two, how much were they actually sped up? And the developers thought that they were twenty percent more productive as a result of AI.

    9. JA

      Mm-hmm.

    10. DP

      They were actually nineteen percent less productive as a result of AI.

    11. JA

      Wow!

    12. DP

      It was really interesting to read the threads of-

    13. JA

      Wow

    14. DP

      ... the developers who participated, trying to explain it. And people who have looked at this, even ones who are more bullish on AI, think the experimental design was extremely robust.

    15. JA

      Even like senior engineers misread themselves like that?

    16. DP

      Especially the senior engineers.

    17. JA

      Wow!

    18. DP

      The senior engineers had, from what I remember, the biggest decreases in productivity.

    19. JA

      Huh.

    20. DP

      Um, so these are people who are experiencing this trajectory. They've been working on it for decades. Um, and there's a couple explanations: One is just that I think in a lot of domains, um, there is this common failure mode in intellectual work, where you default to procrastinating by doing a thing which seems productive, but is like, uh, is not moving the ball forward that much. So the classic example of this is, uh, in college, instead of like rereading the textbook, you should just do the practice problems. And maybe using these AI tools and then, like, going on social media for thirty minutes while you're waiting for the completion to complete is another example of this. Why did this happen? So at least for now, to go back to your original question, I don't-- it's, like, not obvious to me that it's making us smarter.

    21. JA

      It's interesting, um, the version for everybody is I think a large number of people are having, uh, ChatGPT sort of like give them guidance in life at this point. You know, may- I don't mean that in some like huge philosophical way, although maybe in some cases, but even just, like, day-to-day people are like: "Here's my whole setup. This is everything about me. Like, what should I do?" And that is a big influence, and so it's sort of, um... we kind of, to the extent that these models have quietly gripped a lot of people, either through their decisions, through their engineering work, through whatever else, it's, like, pretty important that these get really good.

    22. DP

      Mm-hmm. But have you found it useful, that kind of stuff? Just like... I mean, it is sort of like, "Okay, plan out a cute date for me," or something.

    23. JA

      I use it some. Um, I think I'm-- I think I use it less than a lot of my friends. I think more I'm commenting that I think broadly, a lot of people do use it for that.

    24. DP

      Yes.

    25. JA

      Even if you or I happen to be in the group that uses it a little bit less, I think a lot do.

    26. DP

      Right. Yeah.

    27. JA

      You know, and even probably taking personal advice and things like that.

    28. DP

      Yeah.

    29. JA

      Which is probably good in a lot of cases. Like, I think it is very smart and people know how to use it and things like that.

    30. DP

      Right.

  8. 21:03–23:54

    AI’s impact on biology

    1. DP

      people in other domains have skills-

    2. JA

      Do you feel, actually, on biology in particular, have you spent time with, uh, Patrick at Arc by chance?

    3. DP

      A little bit, yeah.

    4. JA

      Um, I feel like the bullishness that a lot of people in biology have for AI's ability to do, like, drug discovery and things like that-

    5. DP

      Yeah

    6. JA

      ... seems very promising to me.

    7. DP

      Yeah.

    8. JA

      I don't know what you found as you've, like, spent time in biology.

    9. DP

      One interesting question I had for these people is: in biology, we can either employ models which think in thought space, so, just like humans, they can come up with hypotheses and so forth, or models which think in protein space or, uh, DNA space or capsid space. You know, like, humans just can't... we can't, like, think, "G sounds really good here, then let's do T next." Um, and so I'm curious which one they think is a more promising or more useful complement to the current progress in biology. Is it having better models that can think in, you know, like, the AlphaFold-type stuff, or is it just, like, have o3 come up with hypotheses and just, like, write them out in English? At least George seems to think it was the bio space, like, thinking in proteins or DNA or so forth. Because, while we have millions of life science PhDs who can, like, come up with ideas, being able to prune through them in simulation-

    10. JA

      Yeah, exactly, like a digital cell kind of thing.

    11. DP

      Yeah, exactly.

    12. JA

      Yeah.

    13. DP

      Yeah. It, it is like the more useful complement there.

    14. JA

      Yeah, I mean, that seems like that would be like a very unequivocally positive output for humans if we can sort of, you know, just wildly change biology and, you know, pharmacology and things like that.

    15. DP

      Yeah. I think in the long run, I sort of worry about the fact that, um, we, like, we know ways in which things can just go horribly wrong. Like, we have the equivalent of nuclear weapons, but in different domains.

    16. JA

      Hmm.

    17. DP

      Um, so, mirror life in biology: um, apparently, if you have, um, life with the opposite chirality, there's just no defense. Like, plausibly, it could render many life forms on Earth unviable. And so George Church was one of the people who wrote this letter saying, like: "Look, this thing exists. Let's not work on it." [chuckles] Um, but, like, I don't know, over the course of a hundred years, what, what's the equilibrium here?

    18. JA

      Yeah.

    19. DP

      In physics, I interviewed this physicist, um, one of the things he's worked on is thinking about this, uh, thing called vacuum decay, and the TLDR is basically, like, it might be plausible to just, like, literally destroy the universe.

    20. JA

      Well, what's the, what's the idea there?

    21. DP

      Look, I'm, I'm a podcaster, so you're asking a- [chuckles]

    22. JA

      Okay, that's fine.

    23. DP

      You're asking a content-

    24. JA

      Yeah, yeah, yeah

    25. DP

      ... creator about physics.

    26. JA

      Yeah.

    27. DP

      So I'll do my best.

    28. JA

      Okay.

    29. DP

      Um, apparently, in quantum field theory, we're in a sort of, what's called a metastable state. It's like having a huge valley and then a little bit of a hill, and we're sitting in this, like, little rump here. It's possible, with a huge enough amount of energy, to knock us out of it, and what would happen is that this, like, bubble would expand at the speed of light, which would just be, like, total destruction. It sounds like some wild sci-fi thing, but-

    30. JA

      No, but it actually takes me to the next thing I wanted to ask you about, which is, you have this wide range of guests and interests, so

  9. 23:5426:18

    Interests outside of AI

    1. JA

      maybe first, outside of AI, what domains are you most interested in right now? 'Cause I feel like I've seen you talk about, you know, politics and Russia, and math and science, and longevity. Like, what are you interested in most right now?

    2. DP

      Um, I'm interested in what the year 2050 looks like. Obviously, you need to understand AI in order to understand what happens in 2050. Um, but throughout history... there's never been a case where a single technology explains why, say, the Industrial Revolution happened, right? It wasn't just that we made better textile machines. You have improvements in sector after sector, which are enabled by key innovations in specific sectors. So, yeah, it's important to learn about what's happening in bio and robotics, et cetera, so I wanna get into those fields. Also, I've just been interested in the fact that we are finally getting to a pace of change that we actually have seen before in history, but not for a long time. Um, I most recently interviewed this biographer of Stalin, Stephen Kotkin, and I think Stalin is born in the 1870s. If you just think about his life from the 1870s onwards: railways, airplanes, steamships, radio, telegraph, light bulbs, combustion. Um, I mean, World War I is a crazy example, where when you start off the war... The Wright brothers had flown, but there were, like, on the order of hundreds of planes in the world, and there were no tanks. Like, tanks were not a thing.

    3. JA

      Yeah.

    4. DP

      And World War I ends with, like, it's a tank war, it's a plane war-

    5. JA

      In not that many years.

    6. DP

      Yes. Like, y- you go from, like, almost no trucks to tens of thousands of trucks-

    7. JA

      Wow

    8. DP

      ... over the course of four years, um-

    9. JA

      And planes?

    10. DP

      Yes. So I think the, even planes were at, like, extremely low level before the war, and then-

    11. JA

      And then in military use during, obviously.

    12. DP

      Yes.

    13. JA

      Yeah.

    14. DP

      So I mean, uh, the reason Germany thought it was gonna win is because it had this railway network, and there was this plan for how you could fight a two-front war and knock out both France and Russia at the same time, just by being amazing at railway logistics. And I think von Moltke, or whoever the leader of the German command was, said at the end of the war, like, "We lost because of trucks," right? Like, "We didn't anticipate that there was another way to ship combatants to the front." Yeah, we're just gonna see, like, this level of change across so many different sectors.

  10. 26:1831:10

    Chronology of his interests

    1. JA

      There was... Somebody posted on Twitter, Bucko posted something that I thought was an interesting point about you, whether or not it's true, um, but I'm curious how you react to it, which was: It seems like you went through this evolution where you were learning a ton about AI, and then it seems like you believed that AGI was coming, and then your interest started expanding out into all these other things, like, you know, geopolitics, biology, all these other areas. Which was sort of like saying, "You know, the technology is what creates the moment for change, but then this backdrop of the world is what really influences it."

    2. DP

      Yeah.

    3. JA

      I'm guessing that, like, the chronology of your interests, uh, were a little bit different than the framing, but I'm curious just, like, how you think about that.

    4. DP

      Given what I do, I'm actually quite pessimistic about, like, being able to learn from other fields. I just know people who, um, will, like, read some philosopher in the 19th century, and they think, like, "Oh, this, like, explains how Silicon Valley works," or like, "This explains AI." And I'm like, "No, I think you just, like, have to read the papers about AI." I just think it's very hard to generalise.

    5. JA

      You're saying to understand the technology, you have to underst- you have to read the papers?

    6. DP

      Yeah.

    7. JA

      Yeah.

    8. DP

      I think just people have this idea that, like, "I'll come up with my grand theory of history."

    9. JA

      Yeah, you're like, "It's not philosophy, it's science."

    10. DP

      Yes. But... just like in any domain, it's just very hard to have this, like, um... I know a couple of people who can do this, and I find it really impressive, but what I noticed about them is they're just, like, not hand-wavy at all. So, for example, if you're trying to model how AI will impact economic growth, one way is just to read the sort of first-hand accounts of people going through it in the 1500s, like, "Oh, let's read the biography of the Medici," and so forth. And then there's other people who are just like, "Okay, let's look at the growth rates going back 10,000 years. What is the long-run secular trend? What actually explains what changed?" Well, there's endogenous growth theory, where the key change is population growth: more people come up with more ideas. Okay, well, AI is more people. They'll come up with more ideas. There'll be more specialisation. So there's this very different mode of learning from other fields-

    11. JA

      Yep

    12. DP

      ... which is empirical and, uh, I mean, empirical is maybe the wrong word, but just, like, very falsifiable and grounded, versus, "I'm just gonna read... I'm gonna go to the library and just, like, read a bunch of random books."

    13. JA

      Totally. So how do your interests connect to each other then? Like, are you following any particular threads, or, you know, does one connect to another in a certain way? Like, what's, what would you say is driving what creates various interests over time?

    14. DP

      Honestly, it's just a super, uh, bespoke, whatever I happen to be interested in that week, if I'm reading an interesting book.

    15. JA

      How are you spending your time? Like, are you reading a lot? Like, are you- is it- do you learn mostly through reading, through talking to people? Is- are those- are there other methods?

    16. DP

      Reading.

    17. JA

      Reading?

    18. DP

      Um, I, I think there's a couple of people you learn a lot from talking to. In general, I've just sort of been disappointed about, like, I mean, given the fact that I'm a podcaster-

    19. JA

      That's interesting, because you're talking to, like, some of the smartest people, though.

    20. DP

      It's also really interesting, especially- so in some domains, people can be super, like, generative outside their domains. I've sort of been disappointed about the fact that, look, you might hope that you could talk to some historian about World War I or about the history of oil or something, and then they'd have insights about, like, how this applies to AI, but really, those connections will likely come from you and not from them. When I was interviewing Daniel Yergin, who's the author of The Prize, it's this book about-

    21. JA

      Yeah

    22. DP

      ... you know, like the 200-year history of oil. One thing I thought was really interesting is that Drake drills the first oil well in Pennsylvania, I think in the 1850s. Um, and then the Model T car, I think it was, like, 1905 or something, is when you finally have cars with internal combustion engines that people are using for transport all around the world, and this is a sort of industrial use case for oil. Before that, most of oil was just wasted; it was only the kerosene component that got used. Um, so all of Rockefeller, all of that history that you sort of think of as Gilded Age oil-baron stuff, that's happening when a small fraction of oil is being used just for lighting, before the electric light bulb was invented. And in fact, when the light bulb was invented... I might be getting my dates wrong on the Model T and the light bulb, but it's around that era. When the light bulb was invented, people were like: "Oh, Standard Oil is gonna go bust, because what's the other use case for oil?" And so I do find it interesting that it took more than 50 years to go from, "We have discovered limitless energy in the Earth," to, "Here's a way to use billions of gallons of this stuff." And I think it has maybe interesting implications for AI, where, look, it is sort of shocking how cheap AI is. I think that's why I always find it confusing that people are, like, optimising on cost.

    23. JA

      Mm-hmm.

    24. DP

      Like, do I want like 2 cents per million tokens-

    25. JA

      Yeah

    26. DP

      ... or 0.2 cents per million tokens? So we have this, like, commodity that we could potentially use at an industrial scale, and we don't know how to. Part of it is just technological: we don't know how to get those tokens to be more valuable. And part of it is just, like... Yeah, what do we do? What's the internal combustion engine equivalent for AI?

  11. 31:1033:43

    His approach to learning

    1. JA

      So you feel like most of your learning comes from what you read, rather than, you know, through your conversations with people?

    2. DP

      Yeah. I'm very lucky that there's maybe six to twelve people who I've known for five years. Almost all of them I've had on the podcast, but who I'm in just extremely regular touch with. I've genuinely learned a lot of what I know from this handful, this, like, group chat. In one sense, it makes me feel really- I know that this is sort of weird that I've, like, known them for five years-

    3. JA

      Yeah

    4. DP

      ... and now they're also, like, super successful.

    5. JA

      Mm.

    6. DP

      But, like, we, we were all college students at some point.

    7. JA

      Yeah, there's a lot to be said for that, you know, it's that five closest friends thing.

    8. DP

      Yeah.

    9. JA

      It's a big deal.

    10. DP

      Right. How, well, how do you feel about this? Do you, do you learn more from talking to people or-

    11. JA

      I think I learn more from talking to people. I also think that, like, um, there's different sorts of things that you can be seeking. You know, like seeking the truth versus seeking a good decision are very related, but not exactly the same thing. There's, uh... For example, if you're trying to learn from people about how do you spot great talent, and what can you pick up from somebody, you're not gonna get to the truth. You're just gonna get to techniques and things that have been useful for other people that you try to apply to yourself. So for that kind of thing, I wouldn't know how to read about it anyway.

    12. DP

      But on object-level stuff, so if you wanna learn about what's happening in robotics, is it... I just, I don't know. I've, like, been sort of underwhelmed by how, uh-

    13. JA

      Yeah, I mean, the schools of thought to me would be either you can try to go learn it for real yourself-

    14. DP

      Right

    15. JA

      ... or if that hill is too high to climb, which, you know, for me, getting into robotics and, like, the white papers way, I'm like, I would be, you know, kidding myself-

    16. DP

      Right

    17. JA

      ... to think I could catch up and then get to the edge of anything-

    18. DP

      Yeah

    19. JA

      ... and on any timescale that mattered. And so for me, there's the other move of: Is there a way, if you're gonna do it, to shortcut the decision somehow to somebody? And so who can you most trust-

    20. DP

      Sure

    21. JA

      ... to give you good information or something like that?

    22. DP

      Yeah. I think we're also in a lucky position where we have enough public output that we can sort of reach out to people, and they'll say yes. This is a tougher position to be in if you're, like, 19 and thinking, "I wanna learn about biology."

    23. JA

      Yeah.

    24. DP

      I feel very lucky because a lot of people do great work in many different domains. Mine just happens to be public-facing by default, so I get, like, more ability to reach out to people than, um, than I would by default.

    25. JA

      Than someone doing great work in private.

    26. DP

      Exactly, which does create this flywheel where, uh, if I do make good content, smart people will be willing to talk to me.

    27. JA

      Yeah.

    28. DP

      That helps me make better content. I think that's actually, like, more relevant to why the podcast grows than, um, just, like, audience, audience tells people-

    29. JA

      Some natural flywheel, you're saying?

    30. DP

      Yeah.

  12. 33:4340:44

    New thinking on human evolution

    1. JA

      Yeah. Before sort of moving on to a new topic, I wanted to ask you about something related to this. You've mentioned a bunch of really interesting ideas there around, you know, oil and history, and Stalin, and biology, and I'm curious if there are any other ideas recently that have just really stuck out to you, that you can't stop thinking about, that have really gripped you?

    2. DP

      Hmm.

    3. JA

      Just 'cause I love hearing about these.

    4. DP

      The one that's been on my mind for a long time is, um, I interviewed this geneticist of ancient DNA, David Reich, and what his lab and his research have revealed is that, first of all, we just didn't realise how much we didn't know about human evolution. Just like, the story you learned in high school, all of it is at least somewhat false about how, when, where, who-

    5. JA

      What do you mean?

    6. DP

      Like, did it happen in Africa?

    7. JA

      Did it?

    8. DP

      A big chunk of it didn't. Like, there was a group that went out 400,000 years ago, and then they mixed back in with the group that left out of Africa 70,000 years ago. A lot of the evolution that led to this branch of humanity maybe just didn't even happen in Africa. When did it happen? We, like-

    9. JA

      Wow

    10. DP

      ... we're, like, learning more stuff about it, and then the key thing we're learning is: How did it happen? Um, and it seems like we just see this pattern again and again in history, which is very disturbing but, like, super recurring, which is that some small group will figure something out, and it's not clear from the genetic record what it is, right? Like, 70,000 years ago, there's this group of 1,000 to 10,000 people in the Near East, so, like, where the Middle East and North Africa are right now. They figure something out, and they wipe out every single other species of humans across all of Eurasia.

    11. JA

      Whoa!

    12. DP

      Like, there were half a dozen different species of humans. There were the Hobbits, obviously the Neanderthals-

    13. JA

      The Hobbits?

    14. DP

      Uh, I forget what their, like, real biological name is-

    15. JA

      Okay

    16. DP

      ... but, like-

    17. JA

      Yeah, yeah, yeah

    18. DP

      ... they're, I think they're- [chuckles] - they're called the Hobbits.

    19. JA

      That can't be it. Yeah, yeah. [laughing]

    20. DP

      [laughing]

    21. JA

      That's good. [chuckles]

    22. DP

      The Denisovans, um, they're all wiped out by this one group. Like, it starts out with 1,000 to 10,000 people, who expand all through Eurasia. Ten thousand years ago, Anatolian farmers, also from the modern-day Middle East, expand out and kill off, like, 90% of the hunter-gatherers from Europe through Asia. This also happens again, by the way, with the group that goes through the land bridge to America. They also keep doing this. There were, like, multiple waves, and one of the waves killed off the remaining ones.

    23. JA

      Wow!

    24. DP

      The only people who survived, by the way, interestingly, that we have genetic evidence of, is this group in the Amazon, where because the Amazon is so dense to get through, like, the genocide wasn't completed, and so there was more of an intermixing. Then 5,000 years ago, the Yamnaya, which is this group of, like, steppe nomads, they sweep through all of... Eurasia again. And we're talking about, like, 90% death rates for the native population.

    25. JA

      Hmm.

    26. DP

      And we're not just talking multiple continents- like, all of Europe. The people who built Stonehenge are killed off by these people, um-

    27. JA

      This is insane. Are we- And they're pr- pretty sure this is right?

    28. DP

      Yes.

    29. JA

      Wow!

    30. DP

      By the way, the way you learn why it's a genocide, or why it was, like, violent, is you look at the fraction of maternal versus paternal DNA that comes from the native population versus the invading population.

  13. 40:4445:52

    Learning and the media

    1. JA

      Okay, this kind of flows into the next thing I wanted to ask you about, which is your broad perceptions of the way learning is happening. You know, you've obviously been sort of learning in public. You do a lot of self-directed education and things like that. But you've also got one foot sort of very tied to people in academia and at the top of research fields. And I'm curious about your opinion on this transition that's happening, where a lot of learning, and the way people think stuff should get consumed, is self-directed and not part of the big institutions.

    2. DP

      Hmm. The standards people have for, like, "Is this thing true? Do I really believe it?" have just really degraded, especially in sort of podcast land, um, to criticize my own tribe. People will just, like, fucking say shit. Whatever you say about academia, there is this idea of, like: Okay, does this make sense? Like, have you actually made, like-

    3. JA

      Yeah

    4. DP

      ... a clear argument? Like, uh, do you even have a clear, like, end statement, or is it just, like, the thing that, like, people are saying? On the other hand, look, I mean, is it, like, net good for the world? If you read history, and you read about the worst things that ever happened, the Cultural Revolution in China, the Great Terror in, uh, the Soviet Union, um, y- you can complain that Twitter has low average IQ, uh, that, like, the conversation is very dumbed down. But you just don't need that many IQ points to realize the Cultural Revolution is bad. You just need some mechanism where you could have gone on and been like: Why is Mao having us kill all the sparrows? Like, isn't that actually really bad for, uh, you know, like, pest control? And just, like, making fun of this deification. And I think that actually has worked, right?

    5. JA

      Mm-hmm.

    6. DP

      Like, woke was a thing for a second, and then I think social media contributed to that being less of a thing. I think also making fun of Trump is, like, a thing that people do, and it has worked. And so on net, I just think getting rid of the worst excesses is much more important for making history go well than making sure that we can have these giga-brain-

    7. JA

      Yeah

    8. DP

      ... uh, genius-level takes all the time.

    9. JA

      Yeah.

    10. DP

      And I think social media does, like, a reasonably good job-

    11. JA

      Yeah, I mean-

    12. DP

      ... of helping us correct the worst excesses.

    13. JA

      On the sort of truth point, I think part of the issue is that, like, the legacy kind of media corporations... in my view, have lost quite a lot of trust from people-

    14. DP

      Mm.

    15. JA

      -like, myself included, in many cases, where they seem, you know, like they've got agendas, and it seems like they're, you know, for-profit and maximizing eyeballs.

    16. DP

      Yeah.

    17. JA

      And so I'm like, you know, citizen journalism on Twitter, I'm still not gonna trust that completely, but I also don't trust, you know, like, the institutions completely.

    18. DP

      Yeah.

    19. JA

      So I don't know that one's, like, that much better than the other at this point.

    20. DP

      I, I disagree with this.

    21. JA

      Tell me why.

    22. DP

      My, um, attempt to do this thing has actually given me more respect for the media, in a couple of ways. One, I think they genuinely are better at holding power to account than sort of independent creators. Um, like, talking to somebody, an extremely powerful politician or business leader, and then asking tough questions is a thing that the media will do, and often it won't happen if that person can just go on the podcast of their choice. And it's just harder than it looks-

    23. JA

      Yep

    24. DP

      ... and they're willing to uphold these standards when they do these interviews. Now, I mean, I agree that they can often be sanctimonious when they do this.

    25. JA

      Is that to do with the person or the institution? Like, as an example, like, you know, Tucker Carlson goes Fox News to indie. Same guy-

    26. DP

      Yeah

    27. JA

      ... theoretically. Like, does that change his truthiness?

    28. DP

      I mean, I think this is another example of, like, his show and many others. Um, this is not coming from a place where I'm making an object-level political point. I'm a libertarian; I'm, like, sort of close enough in embedding space to these people. Um, but the standards of discourse in these new places are just abysmal. If you had a conversation, you could just say one of a thousand things, and if you just paused on one of them, like, "What exactly do you mean here? Why do you think this?", it's obviously just, like, "Oh, I heard a thing in a group chat," or whatever. Whatever you say about The New York Times, they just, like, genuinely have fact-checkers who will go through content. Um, I know there are many cases where they failed by their own standards, especially in tech journalism, and I, like, don't like their bias and these kinds of things, but just, the standards are, like, an order of magnitude different.

    29. JA

      I mean, my view is we actually probably ... At first with AI, it might have looked like that was gonna be, like, a big problem for The New York Times, and maybe in some ways it is, but I would actually, as time's gone on, I think we probably need these institutions more than ever in a certain way, which is that, you know, like, AI also now brings, you know, a whole new layer of FUD to what's true, with deepfakes and random content generation by bots everywhere. And so you kind of at some point do go back to needing somebody to, like, really hold the standards of, like, truth as much as possible.

    30. DP

      Mm-hmm.

  14. 45:5248:53

    Podcasting success

    1. JA

      We don't have a lot of time left, and I wanted to get to the last topic. So up until you, all of my guests have been, you know, VCs or founders, and I've been sort of using this as an excuse to kind of publicly learn from them. One of the things I wanted to learn from you is about the way you've done podcasting, 'cause you've done it as well as anybody I've seen. You were, like, the one person I reached out to when I was getting started for advice. You gave me really good advice, which basically all centered back to: just be authentic, follow your interests, don't post stuff you're embarrassed of. That was basically the one North Star that I had. But I'm curious, because you've had so much success with it, if you can reflect at all on what's made it work, or, you know, why it has played out the way that it has so far?

    2. DP

      It's sort of very hard to say from the inside. I feel extremely lucky that my job is: I get to sit down this morning and decide what I want to learn about over the next few weeks. I'll interview the person who's the best in the world at that. I get to pepper them with questions for a few hours, and then I get to repeat that week after week. I try to ask the questions that I genuinely want the answers to, including the questions I want the answers to after having done two weeks of prepping that field, and hopefully having learned about it over the previous years. And so much content is very much like, "Give us the intro chapter of your book again. Explain this very basic thing in your field." I think people just really appreciate the feeling of being a fly on the wall. Like, one of the reasons it's valuable to be, say, in San Francisco is that you get to go to dinners or events where you will miss a ton of context. People there will know a bunch of things, and you won't know what they're talking about, but it sort of raises the bar, and immersion learning works.

    3. JA

      Mm-hmm.

    4. DP

      I try to provide that kind of environment in whatever field I'm trying to learn about, and I think people appreciate, like, not being talked down to.

    5. JA

      Yeah.

    6. DP

      Having a sense that, like, the host is actually interested in the questions they're asking, as if they were having a private dinner. I think the dynamic to replicate is a private dinner party. You wouldn't just be deferential at a dinner party; you'd hassle them if you disagree with them about something. But there'd be a fun vibe, and you wouldn't say, "Can you explain this concept for everybody else here?" You wouldn't have that dynamic.

    7. JA

      Somebody that I spoke to recently, um, who worked closely with Steve Jobs, who I'm actually gonna have on the podcast at some point soon, said something that I really liked, which was that one of the things that made Steve Jobs special was that he was just really good at the fundamentals of operating day-to-day: the way you talk to people, the way you give feedback, the way you ask questions. Um, before we started, I asked you about what you've learned about conversations, because, you know, I see that as, like, a big fundamental thing that everybody does, that you're very practiced at. But you made a point, which is that actually what it is for you is preparation-

    8. DP

      Yeah

    9. JA

      ... and that that's the centerpiece, and, like, that's your fundamental. I still think that's highly applicable to everybody, 'cause we're all, you know, going to interviews as, you know, either somebody on the candidate side or, you know-

    10. DP

      Yeah

    11. JA

      ... the employer. We're all, like, you know, meeting people for, you know, all sorts of things. But everybody's preparing for stuff,

  15. 48:5352:13

    Best in class interview preparation

    1. JA

      so what is your preparation like? Like, what does that mean for you when you're saying, "I'm preparing really hard for this," and it's like, you know, the center of your YouTube?

    2. DP

      In some sense, it's the very obvious stuff... if you're interviewing a researcher in a field, read the key papers. Um, a couple years ago, back when I had just started getting into AI and I was about to interview Ilya, I'm like, "Okay, I'm gonna, like, program the transformer. This is how I'll learn about this," and then try to talk to any of the researchers I could. If I'm interviewing a scholar in a field- I interviewed this person who actually wrote a rebuttal to The Power Broker, which is this book about how Robert Moses changed New York City. That itself is, I think, a 1,500-page book. Um, I read that, and I read his rebuttal of that book, and I read review articles and different things about New York construction history. I try to do this for all my guests. But in some sense, it's, like, very obvious.

    3. JA

      Mm-hmm.

    4. DP

      Just, like, read the things that could potentially be relevant, and then write down questions, obviously. The thing I've changed over the last year is I've started using spaced repetition. Spaced repetition is this, like, tool where you basically write flashcards for yourself, and the software serves them to you every couple of months.
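      [The flashcard tools described here, Anki being the best known, typically schedule reviews with a variant of the SM-2 algorithm: each successful recall stretches the next review interval by an "ease" multiplier, while a lapse resets the card to daily review. A minimal Python sketch of that idea, with all names illustrative rather than any particular app's API:]

```python
from dataclasses import dataclass

@dataclass
class Card:
    """A single flashcard plus its scheduling state."""
    prompt: str
    interval: int = 1   # days until the next review
    ease: float = 2.5   # multiplier applied after each success
    reps: int = 0       # consecutive successful reviews

def review(card: Card, recalled: bool) -> int:
    """Update a card after a review; return the next interval in days."""
    if not recalled:
        # A lapse resets the card to daily review.
        card.reps = 0
        card.interval = 1
    else:
        card.reps += 1
        if card.reps == 1:
            card.interval = 1
        elif card.reps == 2:
            card.interval = 6
        else:
            # Each further success stretches the gap by the ease factor.
            card.interval = round(card.interval * card.ease)
    return card.interval

card = Card("Year Drake drilled the first oil well in Pennsylvania")
print([review(card, True) for _ in range(5)])  # → [1, 6, 15, 38, 95]
```

      [Because each success multiplies the gap, a handful of reviews carries a card from daily practice out to the "every couple of months" cadence mentioned above.]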

    5. JA

      Like, you have to be preparing for something well ahead of time?

    6. DP

      No, this is for consolidating knowledge across interviews.

    7. JA

      Mm.

    8. DP

      So if I do an interview, I'm actually gonna, like, retain what I've learned. Because a lot of the concepts connect, I mean, especially with AI. With AI, you're just, like, trying to predict what a future civilization of different kinds of beings will look like, um, and there's no domain of knowledge which is not relevant to this question, right? So obviously, technical AI stuff is relevant, but history, anthropology, um, even primatology-

    9. JA

      Yeah

    10. DP

      ... like, you know, like, what happened between primates and humans? Everything is relevant. It'll come up in the interview, um, and so just having it cached, uh, through tools like this is extremely valuable.

    11. JA

      That's cool. So it's like you're, like, retaining a curriculum of all your work-

    12. DP

      Yeah

    13. JA

      ... basically.

    14. DP

      Yeah. It's the kind of thing where, if you're reading a book for learning, I think either you shouldn't read it at all, or you should have a very intensive practice around it: "I'm gonna make the cards, I'm gonna write the re..." Like, whatever thing is the equivalent of doing practice problems for the domain you're trying to study. Because it's shocking to me how often I will make a flashcard for a topic where I'm like, "Okay, this is, like, so basic, I don't need to write this down, but I just have to do something right now," and then a week later, I realize I was on the verge of forgetting it. And you just think about, like, how many books have you read in your life? Hundreds, right? How much have you actually taken away from them, or from conversations, or whatever other medium? The lack of efficiency here is really striking, and so I've been thinking a lot about how to make this a process where, over the coming years, I can be like, "I'm really getting better over time. I'm not just doing the next thing."

    15. JA

      It's sort of the, uh, spiritual opposite of the idea of an AI that's always listening and remembering for you, so you don't have to remember any conversation you ever had-

    16. DP

      Mm

    17. JA

      ... and it's sort of just, like, some thing on your person that is constantly listening, and you can always go back to it, but everything's captured for you.

    18. DP

      Yeah.

    19. JA

      You're like, "It needs to get in my brain, so that I can learn the next thing."

    20. DP

      This is exactly full circle from where we started, because people will say, in response to the continual learning and on-the-job training stuff, "Oh, we'll just have an external memory system, like a document of things the model has learned." You know, ChatGPT already has this. But I think a lot of cognition is just memory. Like, it has to be on board, and it has to be cached the whole time.

    21. JA

      That's a great place to end. Thank you so much for doing this. I, uh, I hope you didn't mind being a guest, and keep doing your amazing work. I love, I love to watch it.

    22. DP

      It was super fun. Thanks for having me on. [upbeat music]

Episode duration: 52:13


Transcript of episode QVgKY4uDqPU
