a16z

From Vibe Coding to Vibe Researching: OpenAI’s Mark Chen and Jakub Pachocki

What comes after vibe coding? Maybe vibe researching. OpenAI’s Chief Scientist, Jakub Pachocki, and Chief Research Officer, Mark Chen, join a16z general partners Anjney Midha and Sarah Wang to go deep on GPT-5—how they fused fast replies with long-horizon reasoning, how they measure progress once benchmarks saturate, and why reinforcement learning keeps surprising skeptics. They explore agentic systems (and their stability tradeoffs), coding models that change how software gets made, and the bigger bet: an automated researcher that can generate new ideas with real economic impact. Plus: how they prioritize compute, hire “cave-dweller” talent, protect fundamental research inside a product company, and keep pace without chasing every shiny demo.

Timecodes:
0:00 Introduction
0:25 The Launch of GPT-5
2:28 Evaluating Progress: Evals & Milestones
5:07 Surprising Capabilities of GPT-5
7:10 The Future of Automated Research
8:59 Agency, Reasoning, and Model Planning
10:18 Extending Progress Beyond Verifiable Domains
12:11 The Role and Success of Reinforcement Learning
14:44 Reward Modeling and Best Practices
15:54 The Evolution of Coding with AI
21:39 What Makes a Great Researcher?
27:20 Building and Sustaining a Winning Research Culture
31:40 Balancing Product and Fundamental Research
38:36 Prioritization, Compute, and Resource Allocation
41:19 The Intersection of Academia and Frontier AI
46:56 Maintaining Speed and Learning at Scale
48:52 Trust and Collaboration at OpenAI

Resources:
Find Jakub on X: https://x.com/merettm
Find Mark on X: https://x.com/markchen90
Find Sarah on X: https://x.com/sarahdingwang
Find Anjney on X: https://x.com/AnjneyMidha

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Jakub Pachocki (guest) · Mark Chen (guest) · Anjney Midha (host) · Sarah Wang (host)
Sep 25, 2025 · 53m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:25

    Introduction

    1. JP

      The big thing that we are targeting is producing an automated researcher, so automating the discovery of new ideas. The next set of evals and milestones that we're looking at will involve actual movement on things that are economically relevant.

    2. MC

      And I was talking to some, some high schoolers, and they were saying, "Oh, you know, actually, the default way to code is vibe coding." I, I do think, you know, the future hopefully will be vibe researching.

  2. 0:25–2:28

    The Launch of GPT-5

    1. AM

      Thanks for coming, Jakub and Mark. Jakub, you're the Chief Scientist at OpenAI. Mark, you are the Chief Research Officer at OpenAI, and you guys have the, both the, uh, the privilege and the stress of running probably one-

    2. SW

      [laughs]

    3. AM

      ... of the most high-profile research teams in AI. And so we're just really stoked, um, to talk with you about a whole bunch of things we've been curious about, including GPT-5, which was, you know, one of the most exciting updates to come out of OpenAI in recent times, and then stepping back, how you build a research team that can do not just GPT-5, but Codex and ChatGPT, and an API, uh, business, and can weave all of the many different bets you guys have across modalities, across product form factors, um, into one coherent research culture and story.

    4. MC

      Mm-hmm.

    5. AM

      And so to kick things off, why don't we start with GPT-5? Just tell us a little bit about the GPT-5 launch from your perspective. How did it go?

    6. MC

      So I think GPT-5 was really our attempt to bring reasoning into the mainstream. And, um, prior to GPT-5, right, we have two different series of models. You had, uh, the GPT kind of two, three, four series, which were kind of these instant response models, and then we had an O series, which, uh, essentially thought for a very long time and then gave you the best answer that it could give. So tactically, uh, we don't want our users to be puzzled by, you know, which mode should I use, and it, it involves a lot of research in kind of identifying what the right amount of thinking, uh, for any particular prompt looks like, and, uh, taking that pain away from the user. So we think the future is about reasoning, more and more about reasoning, more and more about agents, and, uh, we think GPT-5 is a step towards delivering reasoning and more agentic behavior by default.

    7. JP

      There is also a number of improvements across the board in this model relative to o3, um, and our previous models. But our primary, our primary, um, thesis for, for this launch was indeed bringing the reasoning out to more people.

  3. 2:28–5:07

    Evaluating Progress: Evals & Milestones

    1. AM

      Mm.

    2. SW

      Can you say more about how you guys think about evals? I noticed even in that launch video, there were a number of evals where you're inching up from, you know, 98 to 99%, and that's kind of how you know you've saturated the eval. What approach do you guys take to measuring progress, and, and how do you think about it?

    3. JP

      One thing is that indeed for, like, these evals that we've been using for the last few years, they're indeed pretty close to saturated. And so yeah, like, uh, for a lot of them, like, you know, inching from, like, 96 to 98% is not necessarily, uh, the most important thing in the world. I think another thing that's maybe even more important but a little bit subtler, when we were in this, like, GPT-2, GPT-3, GPT-4 era, um, you know, there was kind of one recipe. You just, like, pre-train a model on a lot of, uh, data, and you kind of, like, use these, um, evals as just kind of a, a yardstick, um, of, um, how this generalizes to, like, different tasks. Um, now we have this, uh, different ways of training, in particular, uh, reinforcement learning on, like, serious reasoning, where we can take a domain and we can really train a model to, like, become an expert in this domain to reason very hard about it, which lets us, um, you know, target particular, uh, kinds of, of, of, of tasks, uh, which will mean that, like, we can get, like, extremely good performance on some evals, but it doesn't indicate as great generalization to, to other things. Uh, I think so the way we think about it in this one, we definitely think, like, uh, we are in a little bit of a, a deficit, like, uh, of, of, of, of great evaluations. And I think the big things that we look at are actual marks of the model being able to discover new things. I think for, for me, the most exciting trend and, like, actual sign of progress this year has been our models' performance in, uh, math and programming competitions, although I think, like, they are also becoming saturated in a sense.

    4. SW

      [laughs]

    5. JP

      Um, and the next set of evals and milestones that we're looking at will involve actual, um, discovery and, and actual, um, movement on, on, on, on things that are, that are economically relevant.

    6. SW

      Mm, totally. You guys already got number two in the AtCoder competition, so there's only, only number one left. [laughs]

    7. MC

      Yeah. Yeah, and I mean, I think it is important to note that these evals, like, um, you know, IOI, AtCoder, IMO, um, are actually real world markers for success in future research.

    8. SW

      Mm.

    9. MC

      I think a lot of, you know, the best researchers in the world have gone through these competitions and have gotten very good results. Um, and, and yeah, I think we are kind of preparing for this frontier where we're trying to get our models to discover new things.

    10. SW

      Yeah. Very exciting.

  4. 5:07–7:10

    Surprising Capabilities of GPT-5

    1. AM

      Which capability from GPT-5, before the release, surprised you the most when you were working through the eval bench or using it internally? Were there any moments where you felt like this was starting to get good enough to release because it was useful in your daily usage?

    2. MC

      I think one big thing for me was, um, just how much it moved the frontier in very hard sciences. Um, you know, we would try the models with some of our friends who are, you know, uh, professional physicists or professional mathematicians, and you already saw kind of some instances of, of, of this on Twitter where, you know, you can take, uh, a, a problem and have it discover maybe not, like, very complicated new mathematics but, you know, um, some nontrivial new mathematics.

    3. SW

      Mm.

    4. MC

      And, uh, you know, we, we see physicists, mathematicians kind of, uh, repeating this experience over and over where they're trying GPT-5 Pro and saying, "Wow, this is something that the la- you know, previous version of the models couldn't do." And it, it is a little bit of a light bulb moment for them. It's like, uh, able to automate maybe like what could take, uh, one of their students months of, of time.

    5. JP

      Well, G- GPT-5 is a, is a definite improvement on o3. For, for me, o3 was definitely like that moment where the reasoning models became like actually very useful on a daily basis, I think especially for, um, you know, working through a, a math, uh, formula or, or, or a derivation. Like they, like it actually got to a level where it is like fairly trustworthy and, and I can actually use it as a, as a tool, uh, for, for my work. Um, and yeah, I think, I think, uh, yeah, it is very exciting to get to that moment, um, but I expect that, um, well, now as we're seeing, um, you know, these models like, like actually able to automate, well, yes, like, like we're saying, solving contest problems over, over longer time horizons, I, I, I expect that that is... Well, that's, that has, that, that was quite small compared to what's coming over the next year.

  5. 7:10–8:59

    The Future of Automated Research

    1. AM

      What is coming in the next one to five years? It would be just at whatever level you're, you're comfortable sharing, what, what does the research roadmap look like?

    2. JP

      So the big thing that we are targeting with our research is producing, um, an automated researcher, so auto-automating the discovery of new ideas. Um, and you know, of course, like a particular thing we think about a lot is automating our own, own work, automating ML research, uh, but that can get a little bit self-referential, so we're also thinking about automating, um, progress in, in, in other sciences. And I think like one good way to measure progress there is looking at like what is the time horizon on which these models actually can, um, reason and make progress. And so now as we kind of like get to a level of near mastery of this, of this, um, high school competitions, let's say, I, I, I would say like we get, we get to like maybe on, on the order of one to five hours of, of, of reasoning. Um, and, and so we are focused on extending that horizon both in terms of like the model's, um, capability to plan over very long horizons and actually able to retain m- ability to retain memory.

    3. MC

      And back to the evals question, that's why I think evals of the form of how long does this model autonomously operate for are of particular interest to us.

    4. AM

      Hmm.

    5. SW

      And actually maybe on that topic, there's been this huge move toward agency and model development, but I think at least the state that it's in currently, users have sort of observed-

    6. MC

      Mm-hmm

    7. SW

      ... this trade-off between too many tools or planning hops can result in quality regressions, uh, versus, um, something that maybe has a little bit less agency. The, the quality is at least observed today to be a bit higher. How, how do you guys think about the trade-off between stability and depth?

  6. 8:59–10:18

    Agency, Reasoning, and Model Planning

    1. SW

      The more, um, steps that the model is undertaking, maybe the less likely the 10th step is to be accurate versus-

    2. MC

      Right

    3. SW

      ... you ask it to do one thing, it can do it very, very well. Um, and to have it keep doing that one thing better and better but more complex things, there's sort of that trade-off. Um, but of course, to get to full autonomy, you are taking multiple steps. You're using multiple tools.

    4. JP

      Uh, I, I, I think actually like, well, the, well, the ability to maintain depth is, a, a lot of it is being consistent over long horizons.

    5. SW

      Hmm. Yeah.

    6. JP

      Um, so I, I think they are very related problems. Um, and in fact I think like with the reasoning models, we have seen the models like greatly, um, extend the, the, the, the length over which they're able to reason, uh, and, and work, um, reliably without, without going off track. Yeah. I, I think this is, uh, this is going to remain a big-

    7. SW

      Yeah

    8. JP

      ... area of focus for us.

    9. MC

      Yeah, and I think reasoning is core to this ability to operate over a long horizon because, you know, you imagine kind of yourself solving a math problem, right? You try an approach, it doesn't work, and, you know, you have to think about, you know, what, what's the next approach I'm gonna take?

    10. SW

      Right.

    11. MC

      Um, what are the mistakes in the first approach? And then you try another thing, and, you know, the, the world gives you some hard feedback, right? And then you keep trying different approaches, and the ability to do that over a long period of time is reasoning and gives agents that robustness.

  7. 10:18–12:11

    Extending Progress Beyond Verifiable Domains

    1. SW

      We talked a lot about math and science. Um, I, I was curious to get your take on do you think some of the progress that we've made can actually extend, um, similarly to domains that are less verifiable? There's sort of less of an explicit right or wrong.

    2. JP

      Oh, yeah. This is, uh, this is, uh, a question I, I really like. Um, I think if you actually truly want to extend to research, uh, and, you know, finding, discovering ideas that, that meaningfully advance technology on the, on, you know, the scale of like months and years, like I think the, these questions like stop being so different, right?

    3. SW

      Hmm.

    4. JP

      Like it is one thing to solve like a very well-posed, uh, constraint problem on the scale of an hour, right? And there's like kind of a finite amount of ideas you need to look through, and that might feel extremely different from solving something very open-ended. Um, but, you know, even if you want to solve like a very well-defined problem that is on a much longer scale, right? You, like, you know, prove this m- m- Millennium Prize problem, uh, well, that suddenly requires you to think about, okay, like what are the fields of mathematics or other sciences that might possibly be relevant? You know, are there inspiration from physics that I must take? Like, what is kind of the entire, uh, program that I want to develop around this? And, and now these become very open-ended questions, and it's actually hard to, you know, for, for, for our own research, right? Like, if all we cared about is, you know, reduce the, uh, modeling loss on a given data set, right?

    5. SW

      Mm-hmm.

    6. JP

      Like, like measuring the progress on that, like, uh, you know, like, like are we kind of actually asking the right questions in research, like, like, actually becomes like a, a fairly open-ended affair.

    7. MC

      Yeah, and I think it also makes sense to think about what the limits of, you know, uh, open-ended means. You know, um, I think a while back Sam tweeted about some of the improvements that we were making in having our models write more creatively and, you know, we do consider the extremes here as well.

    8. SW

      Right. Right.

  8. 12:11–14:44

    The Role and Success of Reinforcement Learning

    1. AM

      Let's talk about RL. Because it seems like since o1 came out-

    2. JP

      Mm-hmm

    3. AM

      ... RL has been the gift that keeps giving.

    4. SW

      [chuckles]

    5. AM

      You know, every, every couple months, OpenAI puts out a release, and everyone goes, "Oh, that's great, but this RL thing is going to plateau, and we're gonna saturate the evals, the models won't generalize, or there's gonna be mode collapse 'cause of too much synthetic data." For whatever num- everybody's got a laundry list of reasons to believe that the gains in performance from RL are going to tap out, and, and somehow they just don't. You guys just keep coming out and putting out continuous improvements. Why is RL working so well? And what, if anything, has surprised you about how well it works?

    6. JP

      RL is a very versatile method, right? And there are a lot of ideas you can explore, um, once you have an RL system working. A long time at OpenAI, we started from this before language models, right? Like, we were thinking about, like, oh, okay, like, RL is this, like, extremely powerful thing, of course, like, on top of deep learning, which is this, like, incredible general learning method. Um, but the thing that we struggled with for a very long time is, like, what is the environment? Like, how do we actually anchor these models to the real world? Or, like, should we, you know, simulate, uh, you know, some, some, some, some i- island where they all learn to collaborate a- and compete? Um, and, and then, you know, of course came the, the, the, the language modeling breakthrough, right? And we saw that, oh, well, yeah, if we, if we, if we scale deep learning on modeling natural language, we can create models with this, like, incredibly nuanced understanding of human language. And so since then we've been, we've been, you know, seeking how to combine these paradigms and how to get RL to work on natural language. And once you do, right, like, then you kind of have the... well, you have the ability to, um, to, to, well, to, to, to actually like, like, like, uh, execute on, on, on these different ideas and objectives in this, like, extremely, um, robust, rich environment given by pre-training. Uh, and so, yeah, so I think, uh, it's been a, it's been a, it's been a real, um, um... yeah, I think it's been, uh, uh, perhaps the most exciting period, uh, in our research over the last few years where, where we've really like, uh, yeah, w- we've found so many new directions and promising ideas, uh, that, that, that all seem to, to, to be working out and, and, and, and, and, and we're trying to, uh, yeah, u- u- understand how to compare.

  9. 14:44–15:54

    Reward Modeling and Best Practices

    1. AM

      One of the hardest things about RL for folks who are not practitioners of RL is the idea of crafting the right reward model. And so especially if you're a business or an enterprise who wants to harness all this amazing progress you guys are putting out but doesn't even know where to start, how... w- what do the next few years look like for a company like that? What is the right mindset for somebody who's trying to make sense of RL to craft the right reward model? Is there anything you've learned about the best practices or a, a, an approach of thinking, the, of using this latest sort of, um, family of reasoning techniques? What, what is the right way I should think about even approaching reward modeling as a biologist or a physicist?

    2. JP

      I expect this will evolve quite rapidly. I expect it will become simpler, right?

    3. AM

      Hmm.

    4. JP

      Like, I think, I think, you know, maybe like two years ago, we would've been talking about, like, what is the right way to craft my fine-tuning data set? And-

    5. SW

      Hmm

    6. JP

      ... I, I don't think we are, like, at the end of that evolution yet, and I think we will be inching towards more and more human-like learning, uh, which, you know, RL is still not quite. So I think, I think maybe the most important part of the mindset is to, like, not assume that, like, what is now will be it forever.

    7. AM

      Yeah.

  10. 15:54–21:39

    The Evolution of Coding with AI

    1. SW

      Um, so I wanna bring the conversation back to coding. We would be remiss not to say congrats on GPT-5 Codex, uh, which just dropped today. Um, can you guys say a little bit more about what's different about it? How it's trained differently? Um, maybe why you're excited about it?

    2. MC

      Yeah. So I think, um, one of the big focuses of the Codex team is to just take the raw intelligence that we have from our reasoning models and make it very useful for real-world coding. So, um, a lot of the work they've done is kinda consistent with this. Um, they are working on kind of having the model be able to handle more difficult environments. Um, we know that real-world coding is very messy, um, so they're trying to handle all of the intricacies there. Um, there's a lot of coding that has to do with, you know, style, with, um, just, like, kind of softer things like how, how proactive the model is, how, how lazy it is, and just being able to define, um, in some sense, like a, a spec for how, uh, a coding model should behave. Um, they do a lot of, you know, very strong work there. And as, as you've seen, it's like, um, they're, they're also working on a lot better presets. You know, uh, coders, they have some kind of notion of this is how long I'm waiting, I, I'm willing to wait for a particular solution. Um, I think we've done a lot of work to dial in on, you know, for easy problems being a lot, you know, lower latency. For harder problems, actually, the, the right thing-

    3. SW

      Hmm

    4. MC

      ... is to be even higher latency. Um-

    5. SW

      Interesting

    6. MC

      ... get you the really best solution. Um, and just being able to find that preset, um, is, is very sweet.

    7. SW

      What's the sweet spot for if you were to say, like, easier problems versus harder?

    8. MC

      What we've found is the, the latest, the, the previous generation of the Codex models, they, they were spending too little time solving the hardest problems and-

    9. SW

      Hmm

    10. MC

      ... too much time solving the easy, easy problems.

    11. SW

      Got it. Okay.

    12. MC

      And I think, um, that, that is actually just, um, probably out of the box, uh, what, what you might get out of o3.

    13. SW

      Maybe just on the, the topic of coding, since you guys are both competitive coders in prior lives, um, I know you've been at OpenAI for almost a decade now, but I was struck by, uh, the story of Lee Sedol, the Go player-

    14. MC

      Mm-hmm

    15. SW

      ... who kind of famously quit Go after he lost to AlphaGo, um, multiple times. Uh, and I think in a recent interview, you guys were both saying that now the coding models are better than your capabilities, uh, and that gets you excited. Um, but say more about that, and, um, how much would you say you code now? Well, if you're hands-on-keyboard, you, you can talk about OpenAI generally, but how much code is written by AI now?

    16. JP

      In terms of- Coding models being better? I, I mean, I, I think, yeah, I think it is extremely exciting to see this progress. I think, like, the programming competitions have a nice kind of encapsulated test of, like, ability to, um, come up with some new ideas, um, in, in, in, in, you know, in this, like, boxed, uh, environment and timeframe. Um, I do think, like, you know, if you look at things like, uh, well, I guess the IMO problem six-

    17. SW

      Mm

    18. JP

      ... or, or maybe, um, some very hardest, uh, programming competitions problems, like, I think there's still a little bit of headway to go for the models, but I wouldn't expect that to last very long. I do go a little bit. Uh, historically, I've been like-

    19. SW

      He's being humble. [laughing]

    20. JP

      [laughing] Hi- hi- historically, I've actually been, like, extremely reluctant to use any sort of-

    21. SW

      Hmm

    22. JP

      ... tools. I, I-

    23. SW

      Oh, interesting

    24. JP

      ... I just used Vim pretty much. Uh-

    25. SW

      Oh, yeah. [laughing] Okay.

    26. JP

      Uh-

    27. SW

      Old school. [laughing]

    28. JP

      Yeah. Um, yeah, eventually, I think, like, like especially with this, with this, um, um, latest coding tools, um, like GPT-5, I f- I really kind of felt like, okay, like, this is, this is no longer the way. Like, like you can do a, you know, 30 file refactor, like, pretty much perfectly in, like, 15 minutes. Like, you kind of have to use it. Um, yeah, and so I've been, I've been kind of like, um, learning this new way of coding, which definitely feels a little bit different. I, um, I think it is, like, a little bit of an un- uncanny valley still right now, where, like, like, you kind of have to use it because it is just, like, accelerating so many things, but it's still, like, you know, a little bit, like, uh, n- not quite as good as a, as a, a, a, as a, as a coworker. Um, I... So, you know, I, I, I think, like, our, our priority is getting out of that uncanny valley.

    29. SW

      Yeah.

    30. JP

      But, uh, yeah, it's definitely, uh, an, an interesting time.

  11. 21:39–27:20

    What Makes a Great Researcher?

    1. MC

      Yeah. [laughing]

    2. SW

      [laughing]

    3. AM

      Well, I, I- that- I have a question about that.

    4. SW

      [laughing]

    5. AM

      Which is, what makes a great researcher, right? When you say vibe researching, there's, um, a big part of vibe coding is just having good taste and wanting to build something useful and interesting for the world. And I, I think what's so awesome about tools like Codex is if you've got a good intuition for what people want, it helps you articulate that and then, and then basically actualize a prototype very fast. With, uh, with research, what's the, what's the analog? What, what makes a great researcher?

    6. JP

      Persistence, uh, is a, is a very key trait, right? Like, I think, like, what, w- what is different about research when you're actually trying to... I, I think the special thing about research, right, is you're trying to create something or, or learn something that is just not known, right? Like, it's not known to work. Like, you don't know whether it will work, and so always trying something that will most likely fail. And I think getting to a place where you are, like, in a mindset of, like, being ready to fail and being ready to learn from these failures and, you know, so... And, you know, and of course with that comes creating kind of clear hypothesis and being extremely honest with yourself about how you're doing on them, right? I think a trap many people fall into is going out of their way to, like, to, to prove that it works-

    7. AM

      Right

    8. JP

      ... right? Which is quite different from, you know, like, I think, like, believing in your idea and sticking to it is, like, extremely important, right? And you want to persist, persist that, but you have to be honest with yourself about when it's working and when it's not, uh, so that you can learn and adjust.

    9. AM

      Hmm.

    10. MC

      Yeah, I think there are just very few shortcuts for experience.

    11. AM

      Mm.

    12. MC

      Um, I, I think through experience you kind of learn, you know, what's the right horizon to be thinking of a problem, right? You can't pick something that's too hard or it's not satisfying to do something that's too easy. Um, and I think a lot of research is managing your own emotions over a long period of time too. You know, there's just gonna be a lot of things you try and they're not gonna work.

    13. AM

      Hmm.

    14. MC

      And-

    15. SW

      Hmm

    16. MC

      ... sometimes you, you need to know when to persevere through that or sometimes when to kind of switch to a different problem. Um, and I think interestingness is something, you know, you try to fit through reading good papers, talking to, to your colleagues, and, um, and you kind of maybe distill their experience into your own process.

    17. AM

      When I was in grad school, um, you know, there's a big part, uh... I, I was, I'm, I'm a failed machine learning researcher. I was in grad school for, for bioinformatics, but a big part of my research advisor's thrust was about picking the right problems to work on, such that you could then sustain and persist through the hard times. And you said something interesting, which was there's a difference between having conviction in an idea and then being maximally truth-seeking about when it's not working, and though both those things might... are sometimes in tension, 'cause you kind of go native on an, on a topic or a problem sometimes that you have deep conviction in. Have you found, is there any set of heuristics you've found are useful at the taste step, at the problem-picking step, that help you arrive at the right set of problems where that conviction and truth-seeking is not as much in zero-sum tension as other kinds of problems?

    18. JP

      Yeah. To, to be clear, I don't think conviction and truth-seeking are really in a zero-sum tension. I think, like, you can be, like, you can be convinced at, or, you know, you can have a lot of belief in idea and, and you can be, uh, you know, very persistent in it while it's not working. I think it's just important that you're kind of honest with yourself, like-

    19. AM

      Right

    20. JP

      ... like how much progress you're making, and you're in a mindset where you're able to learn from, uh, the failures along the way. I think it's important to look for problems that you really care about and you really believe are important, right? And so, um, I think one, one thing I've observed in, in, in, in, in many, um, researchers that inspired me h- has been really going after the hard problems, like looking at the questions that are, you know, kind of like, you know, widely known but, like, not really kind of considered tractable, and just asking like, you know, "Why are they not tractable?" Or like, you know, what, like, what, w- w- what about this approach? Like, why does this approach fail, right? You're, you're, you're always, like, thinking about what is really the barrier for the next step. If you're going after problems that, like, you really truly believe are important, right, then, then y- that, that, that makes it so, so much easier to find the motivation to persist with them over years.

    21. AM

      And i- in the development of, like, during the re- training phase of GPT-5, for example, are there any, were there any moments where there were, there was a hard problem, the or- original, initial attempts that were being made to crack that problem weren't working, and yet you found w- somebody persisted through that? Um, and what was it about those sto- any of those stories that comes to mind that worked well?

    22. JP

      Um-

    23. AM

      That you wish other people and other researchers did more of?

    24. JP

      I think on the path there, along the sequence of models, both the pre-trained models and the reasoning models, one very common theme is bugs. Uh-

    25. AM

      Mm

    26. JP

      ... and both in the literal sense: silly bugs that can stay in your software for months and quietly invalidate all your experiments in a way that you don't know about. Identifying them can be a very meaningful breakthrough for your research program. But also bugs in the sense that you have a particular way of thinking about something, and that way is a little bit skewed, which causes you to make the wrong assumptions; identifying those wrong assumptions means rethinking from scratch. Both for getting the first reasoning models working and for getting the larger pre-trained models working, we've had multiple issues like that that we've had to work through.
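      A hypothetical illustration of the first kind of bug described here (a sketch, not an OpenAI example): a data pipeline that shuffles inputs and targets independently never crashes, and training loss still improves, yet every experiment run on top of it is quietly wrong.

```python
import random

# Toy data for y = 2x. A correct trainer should recover w close to 2.
xs = [i / 10 for i in range(1, 21)]
ys = [2 * x for x in xs]

def train(xs, ys, shuffle_bug=False, steps=500, lr=0.1, seed=0):
    """One-parameter linear regression (y ~ w*x) via gradient descent on MSE."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x_batch, y_batch = list(xs), list(ys)
        if shuffle_bug:
            # The silent bug: shuffling inputs and targets *independently*
            # destroys their pairing. Nothing crashes, loss still decreases,
            # and the run looks healthy -- but the fitted model is wrong.
            rng.shuffle(x_batch)
            rng.shuffle(y_batch)
        n = len(x_batch)
        # dMSE/dw = (2/n) * sum((w*x - y) * x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(x_batch, y_batch)) / n
        w -= lr * grad
    return w

w_good = train(xs, ys)                    # converges to w close to 2.0
w_bad = train(xs, ys, shuffle_bug=True)   # "trains", but lands well away from 2.0
```

      The point of the sketch is that the buggy run produces a finite, plausible-looking weight rather than an error, which is exactly why such bugs can sit in a codebase for months.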

    27. SW

      As leaders of the research org, how do you think

  12. 27:20–31:40

    Building and Sustaining a Winning Research Culture

    1. SW

      about what it takes to keep the best talent on your team and, on the flip side, creating a very resilient org that doesn't crumble if a key person leaves?

    2. MC

      The biggest things that OpenAI has going for it, in terms of keeping the best people motivated and excited, is that we are in the business of doing fundamental research. We aren't the type of company that looks around and says, "Oh, what model did Company X build?" or "What model did Company Y build?" We have a fairly clear and crisp definition of what it is we're out to build. We like innovating at the frontier; we really don't like copying. And I think people are inspired by that mission: you are really in the business of discovering new things about the deep learning stack, and we're building something very exciting together. Beyond that, a lot of it is creating a very good culture, so we want a good pipeline for training people up to become very good researchers. I think we have historically hired the best and most innovative talent, so we have a very deep bench as well. And most of our leaders are very inspired by the mission, and that's what's kept them here. When I look at my direct reports, they haven't been affected by the talent wars.

    3. SW

      I was chatting with a researcher recently, and he was talking about wanting to find the cave dwellers. Um-

    4. MC

      Mm

    5. SW

      ... and these are often the people who are not posting on social media about their work. For whatever reason, they may not even be publishing; they're in the background doing the work. I don't know if you would agree with this concept, but how do you hire researchers, and are there any non-obvious ways you look for talent, or non-obvious attributes you look for?

    6. JP

      So I think one thing we look for is having solved hard problems in any field. A lot of our most successful researchers started their journey with deep learning at OpenAI, having worked in other fields like physics or-

    7. SW

      Mm

    8. JP

      ... computer science, theoretical computer science, or finance in the past. Strong technical fundamentals, coupled with the intent to work on very ambitious problems and actually stick with them. We don't purely look for who did the most visible work, or who is the most visible on social media, or-

    9. SW

      Yeah.

    10. AM

      As you were talking, I was thinking back to when I was a founder running my own company and we would recruit great engineering talent. Many of the attributes you described were ones that were on my mind then. And Elon recently tweeted that he thinks this whole researcher-versus-engineer distinction is silly. Is he just being semantically nitpicky, or do you think these two things are more similar than they look?

    11. MC

      Yeah. I mean, I do think researchers don't just fit one shape. We have certain researchers who are very productive at OpenAI who are just so good at idea generation, and they don't necessarily need to show great impact through implementing all of their ideas. There's so much alpha they generate just in coming up with "Let's try this," or "Maybe we're thinking about that differently." And there are other researchers who are just very, very efficient at taking one idea and rigorously exploring the space of experiments around it. So researchers come in very different forms. Maybe that first type wouldn't necessarily map into the same bucket as a great engineer. But we do try to have a fairly diverse set of research tastes and styles.

    12. JP

      Mm.

    13. MC

      Yeah.

    14. JP

      Mm. And say a little bit about what it takes to create a frontier, sort of winning culture-

    15. MC

      Hmm

    16. JP

      ... that can attract all kinds of shapes-

    17. MC

      Mm-hmm

    18. JP

      ... of researchers, and then actually grow them, help them thrive, and make them win together at scale. What do you think are the

  13. 31:40–38:36

    Balancing Product and Fundamental Research

    1. JP

      most critical ingredients of a winning culture?

    2. MC

      So I think actually the most important thing is just to make sure you protect fundamental research. You can get into this world, with so many different companies these days, where you're just thinking about how to compete on a chat product or some other product surface. You need to make sure that you leave space, recognize the research for what it is, and give researchers the space to do it; you can't have them being pulled in all of these different product directions. So I think that's one thing that we pay attention to within our culture.

    3. JP

      Especially now that there's so much spotlight on OpenAI, so much spotlight on AI in general, and on the competition between different labs, it would be easy to fall into a mindset of "Oh, we're racing to beat this latest release." There are definitely areas where people start looking over their shoulder and thinking about what everyone else is doing. I see it as a large part of our job to make sure that people have the comfort and space to think about what things are actually going to look like in a year or two: what are the actually big research questions that we want to answer, and how do we get models that vastly outperform what we see currently, rather than just-

    4. MC

      Mm

    5. JP

      ... iteratively improving in the current paradigm?

    6. SW

      Just to pull on that thread more on protecting fundamental research, um, you guys are obviously one of the best research organizations in the world, but you're also one of the best product companies in the world.

    7. MC

      Yeah.

    8. SW

      How do you balance that focus between the two, especially now that you've brought on some of the best product execs in the world as well? While protecting fundamental research, how do you also continue to move forward the great products you have out?

    9. MC

      Yeah. I mean, I think it's about kind of delineating a set of researchers who really do care about product and who really want-

    10. SW

      Mm. Yeah

    11. MC

      ... to be accountable to the success of the product. And they should, of course, very closely coordinate with the research work at large. But people understanding their mandates and what they are rewarded for, that's a very important thing.

    12. JP

      One thing that I think is also helpful is that our product team and broader company leadership are bought into this vision of where we are going with research. Nobody is assuming that the product we have now is the product we'll have forever, and that we'll just wait for new versions from research; we are able to think jointly about what the future looks like.

    AM

      One of the things you guys have done is let such a diversity of ideas and bets flourish inside of OpenAI that you then have to figure out some way, as research leaders, to make it all make coherent sense as one roadmap. You've got people over here investigating the future of diffusion models and visual media, and over there folks investigating the future of reasoning when it comes to code. How do you paint a coherent picture of all that? How does it all come together, when there might be, at least naively, some tension between giving researchers the independence to do fundamental research and then making it all fit into one coherent research program?

    JP

      Our stated goal for our research program has been getting to an automated researcher, and it has been for a couple of years now. We've been building most of our projects with this goal in mind. That still leaves a lot of room for bottom-up idea generation and fundamental research in various domains, but we are always thinking about how these ideas come together eventually. We believe, for example, that reasoning models go much further, and we have a lot of explorations on things that are not directly reasoning models; we are thinking a lot about how they eventually combine, and about what this kind of innovation will look like once you have something out there thinking for months about a very hard problem. So I think this clarity about our long-term objectives is important. But that doesn't mean we are prescriptive about all the little pieces; we definitely view this as a question of exploration and learning about these technologies.

    13. MC

      Right. Yeah, I think you want to be opinionated and prescriptive at a very coarse level, but a lot of ideas can bubble up at the finer level.

    14. AM

      And have there been any moments where the-

    15. AM

      Those things have been in tension at all recently? Well, one provocative example: recently this new image model came out from Google, Nano Banana. It showed extraordinary value: lots of everyday people can unlock a lot of creativity when these models are good-

    16. JP

      Yeah

    17. AM

      ... at understanding editing prompts. And I could see how that would create some tension for a research program that may not be prioritizing that as directly. If somebody talented on your team came and said, "Guys, this thing is so clearly valuable in the world out there, we should be spending more effort and energy on this," how do you reason about that question?

    18. JP

      I think that's definitely a question that we've been thinking about for quite a while at OpenAI. I mean, if you look at GPT-3: once we saw where language models were going, we definitely had a lot of discussions about how there are clearly going to be so many magical things you can do with AI. You will have these extremely smart models that are out there pushing the frontiers of science, but you will also have this incredible media generation and these incredibly transformative entertainment applications. So how to prioritize among all these directions has definitely been something we've been thinking about for quite a while.

    19. MC

      Yeah, absolutely. And the real answer is that we don't discourage someone from being really excited by that. If we're consistent in the prioritization and our product strategy, then it will naturally fall into place.

    20. JP

      Right.

    21. MC

      And so for us, we do encourage a lot of people to be excited about building, say, agentic products, whatever kind of products they're excited by. But I think it's important for us to also have a separate group of people you protect, whose goal is to create the algorithmic advances.

  14. 38:36–41:19

    Prioritization, Compute, and Resource Allocation

    1. SW

      How does that translate, just to build on Anj's question, into a concrete framework around resourcing?

    2. MC

      Mm.

    3. SW

      Like, do you think, okay, X percent of compute resources will go to longer-term, very important, but maybe a bit more pie-in-the-sky exploration? Versus, obviously, current product inference, versus this thing in the middle that's achievable in the short to medium term?

    4. MC

      Yeah. Um, so I think that's a big part of both of our jobs.

    5. SW

      Yeah, totally.

    6. MC

      You know? Just this portfolio-management question of how much compute you give to which project. Historically we've put a little bit more on the core algorithmic advances versus the product research. But it's something that you have to feel out over time; it's dynamic. Month to month there can be different needs, so I think it's important to stay fairly flexible on that.

    7. SW

      And if you had 10% more resources, would you put it toward compute, or data curation, or people? Where would you stick that at the margin? [laughs]

    8. MC

      [laughs] Good question. Um, honestly, yeah, I think, um, compute's a-

    9. JP

      Compute today. [laughs]

    10. SW

      [laughs]

    11. MC

      Yeah. [laughs] A fairly reasonable answer here.

    12. SW

      No, big time, yeah.

    13. JP

      Safe answer.

    14. MC

      Yeah, yeah. I mean, honestly, to your question of prioritization: in a vacuum, any of these things you would love to go and excel and win at. I think the danger is you end up second place at everything and-

    15. SW

      Mm

    16. MC

      ... not clearly leading at anything. So I think prioritization is important, and you need to make sure there are some things you're clear-eyed on: this is the thing that we need to win.

    17. JP

      Yeah.

    18. SW

      Yeah.

    19. AM

      But I think it makes sense to talk about it for just a little bit more, because compute sets so much of... compute is destiny, in a way, at a research organization like OpenAI. A couple of years ago, it became very fashionable to say, "Oh, we're not gonna be compute-constrained anytime soon, because there are a bunch of CMs people are discovering, and we're gonna get more efficient, and all the algorithms-

    20. MC

      Mm-hmm

    21. AM

      ... are gonna get better, and then eventually we'll really just be in a data-constrained regime." And it seems like a couple of years have come and gone, and we're still in a very compute-constrained environment.

    22. JP

      Mm-hmm.

    23. AM

      Does that change anytime soon, you think? Or-

    24. JP

      I mean, I think we've seen for long enough how much we can do with compute. Yeah, I haven't really bought that much into the we'll-be-data-constrained-

    25. AM

      Right. [laughs]

    26. JP

      ... claim, and I don't expect that to change.

    27. MC

      Yeah, anyone who says that should just step into my job for a week. [laughs]

    28. SW

      [laughs]

    29. MC

      There's, there's no one who's like, "Ah, you know, I have all the compute that I need."

    30. AM

      Right.

  15. 41:19–46:56

    The Intersection of Academia and Frontier AI

    1. AM

      The job of advancing fundamental research has historically been largely a mandate that universities have had. Partly for the compute reasons you just described, that hasn't been the case for frontier AI. You guys have done such an incredible job channeling the arc of frontier AI progress to help the sciences out.

    2. JP

      Mm-hmm.

    3. AM

      And I'm wondering: when those worlds collide, the fundamental world of university research today and the world of frontier AI, what comes out?

    4. MC

      So I guess I personally started as a resident at OpenAI; it's a program that we had for people in different fields to come in, learn quickly about AI, and become productive as researchers. I think there are a lot of really powerful elements in that program. And the idea is just: could we accelerate something that looks like a PhD in-

    5. AM

      Right

    6. MC

      ... in as, as little time as possible?

    7. SW

      Mm-hmm.

    8. MC

      And I think a lot of that just looks like implementing a lot of very core results. Through doing that you're gonna make mistakes. You're gonna go, "Oh, wow," and build intuition for: if I set this wrong, that's gonna blow up my network in this way. You just need a lot of that hands-on experience. Over time, there have been curricula developed at probably all of these large labs, in optimization and architecture and RL, and there's probably no better way than to implement a lot of those things, read about them, and think critically about them. Yeah.
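      The "set this wrong and blow up my network" intuition can be felt even in a toy setting. A hypothetical sketch (one-parameter regression, not a curriculum from any lab): the same gradient-descent loop converges with a small learning rate and visibly explodes with a large one.

```python
# Fit y = 2x with a single weight via gradient descent on MSE,
# varying only the learning rate.
xs = [i / 10 for i in range(1, 21)]
ys = [2 * x for x in xs]

def fit(lr, steps=100):
    w = 0.0
    for _ in range(steps):
        # dMSE/dw = (2/n) * sum((w*x - y) * x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
        if abs(w) > 1e12:  # stop once the run has clearly exploded
            break
    return w

w_small = fit(lr=0.1)   # settles near the true weight, 2.0
w_large = fit(lr=1.0)   # each update overshoots; w oscillates and blows up
```

      Seeing the large-learning-rate run diverge, and then working out why the update overshoots, is the kind of hands-on lesson being described here.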

    9. JP

      Yeah, I think maybe one other nice thing that you get to experience in academia is persistence: you have a few years, and you're trying to solve a hard problem, and you've never dealt with such a hard problem before. Currently the pace of progress is very fast, and maybe ideas tend to work out a little bit more often than they did in the past, because deep learning just wants to learn. [laughs]

    10. SW

      [laughs]

    11. JP

      And getting your hands on a more challenging problem for a little bit, maybe being part of a team attacking an ambitious challenge, and getting that feeling of what it's like to be stuck and what it's like to finally be making progress, I think is also something that's very useful to learn.

    12. SW

      How does external perception and reception of a particular product launch impact how you prioritize something? Where perception and usage are married, obviously there's probably a clear directive there. But in a case where maybe they're divorced a bit, does that impact how you think about the roadmap or where you emphasize resources?

    13. JP

      So we generally have some pretty strong convictions about the future, and we don't tie them that closely to the short-term reception of our products. Of course we learn based on what is going on: we read other papers and look at what other labs are working on. But generally we act from a place of fairly strong belief in what we're building. That's for our long-term research program, of course. When it comes to product, the cycle of iteration is much, much faster.

    14. SW

      Mm-hmm. Yep.

    15. MC

      Yeah. I think with every launch, we're aiming for something that's wildly successful on the product side. And from a fundamental research perspective, we're trying to create models with all of the core capabilities needed to build a very rich set of experiences and products. There are gonna be people who have some vision of one particular thing they could build, and we'll launch it, and for everything we launch, we really hope it's wildly successful.

    16. SW

      Yeah.

    17. MC

      And we get that feedback, and if it's not, we'll shape our product strategy a little bit. But yeah, we are definitely also in the business of launching very useful, wildly successful products.

    18. SW

      Yeah.

    19. AM

      It feels like, because of the sort of completely unbridled pace of progress we've just spent a lot of time talking about, a lot is gonna change over the next two years. It gets really hard to predict ten months out, I imagine, let alone ten years out. And so my question is: through all the change that the frontier of AI is going to bring, what are some priors that you actually think should stay constant? Is there anything? Well, one clearly is that we don't have enough compute. [laughs]

    20. SW

      [laughs]

    21. AM

      Is there anything else that you think doesn't change, that you think would be strong, reasonably held priors as constants?

    22. JP

      I think, more broadly than compute, there are physical constraints: energy, but also, at some point not too far off, robotics will become a major focus. So I think thinking about the physical constraints is going to remain important. But yeah, on the intelligence front, I would not make too many assumptions.

  16. 46:56–48:52

    Maintaining Speed and Learning at Scale

    1. SW

      Very few startups can get to the scale that you have, both from an employee perspective and in revenue, and maintain the breakneck speed you probably had seven or eight years ago when you both joined. What's the secret sauce to doing that? And how do you continue to maintain this pressure to ship as quickly as possible, even though you're kind of on top now?

    2. MC

      I think one of the clearest markers that we have really good research culture, at least in my mind, is, you know, I've worked at different companies before, and there is a real thing, which is a learning plateau, right? You go to a company-

    3. SW

      Right

    4. MC

      ... you, you learn a lot for-

    5. SW

      Totally

    6. MC

      ... the first one or two years, and then you find: I know how to be fairly efficient in this framework, and my learning kinda stops. And I've really never felt that at OpenAI.

    7. SW

      Mm.

    8. MC

      Just like-

    9. SW

      Yeah

    10. MC

      ... like that experience you described of all these really cool results bubbling up, um-

    11. SW

      Yeah

    12. MC

      ... you're just learning so much week over week, and it's a full-time job to stay on top of all of it. That's just been very fulfilling. So yeah, I think that's a very accurate description. We just wanna generate a lot of really high-quality research, and it's almost a good thing if you're generating enough that you're barely able to keep on top of it.

    13. SW

      Yeah, exactly.

    14. MC

      Yeah.

    15. JP

      I think the development of the technology is definitely a driving force here. Maybe we would become comfortable after a few years working in a given paradigm, but we are always on the cusp of the, you know-

    16. SW

      Mm. Yeah

    17. JP

      ... new thing, trying to reconfigure our thinking around the new constraints and new possibilities that we're gonna be faced with. Um-

    18. JP

      And so I think that kind of creates this feeling of constant change, and the mindset of always learning the new thing.

  17. 48:52–52:54

    Trust and Collaboration at OpenAI

    1. AM

      Well, you know, one thing that came up in our research, about things at OpenAI that have not changed through all the change, is the trust that the two of you have in each other. I think there was a profile of you guys recently in MIT Technology Review, and that was also one of the highlight themes: your chemistry, your trust in each other, your rapport is something a lot of people at OpenAI have come to treat as a constant. So what's the backstory? How did you guys build that trust? How did that happen? [laughs]

    2. MC

      [laughs]

    3. AM

      [laughs] It's like asking... have you ever seen When Harry Met Sally? [laughs] I feel like you're on the couch, and now you gotta talk about it. Yeah, exactly.

    4. MC

      Well, I do think we started working together a little bit more closely when we had the first seeds of working on reasoning. At the time, that wasn't a very popular research direction to work on, and I think both of us saw glimmers of hope there. And-

    5. AM

      Yeah

    6. MC

      ... we were kind of pushing in this direction, figuring out how to make RL work, and over time growing a very small effort into an increasingly larger one. I think that's where I really got to work with Jakub in depth. He's just really a phenomenal researcher; on any of these rank lists, he should be number one. Just his ability to take any very difficult technical challenge, almost personally think about it for two weeks, and crush it. It's incredible that he has the wide range that he does in terms of understanding, as well as that kind of depth, where he can go and personally solve a lot of these technical challenges.

    7. AM

      [laughs] Now you get to say some nice stuff about him, Jakub. [laughs]

    8. JP

      [laughs] Thanks, Mark. Uh-

    9. AM

      [laughs]

    10. JP

      Yeah. I think the first big thing that we did together was when we started seeing, okay, we think this algorithm is going to work. So I was thinking, how do we direct people at this? And we were talking with Mark: "We should establish a team that's actually going to make this work." And then Mark went and actually did it: he got a group of people working on very different things, got them all together, and created a team with incredible chemistry out of this whole disparate group. That was such an impressive thing to me. I'm really grateful and inspired to get to work with Mark and experience that: this incredible capacity to understand, engage with, and think about the technical matter of the research itself, coupled with this great ability to lead and inspire teams and to create an organizational structure that, in this whole mess of chaotic directions, is actually coherent and able to gel together. Very, very inspiring.

    11. AM

      That's awesome. Well, on that note... a great note to end on, yeah. [laughs] Listen, some of the greatest discoveries in science, especially in physics, have often come from a pair of collaborators, often across universities and across fields, and it seems like you guys have now added to that tradition. So we're just super grateful that you made the time to chat today. Thanks for coming by.

    12. MC

      Thanks.

    13. AM

      Thanks for being with us.

    14. JP

      Thank you both. [upbeat music]

Episode duration: 53:03


Transcript of episode KSgPNVmZ8jQ
