OpenAI x Broadcom — The OpenAI Podcast Ep. 8

OpenAI and Broadcom are teaming up to design our own chips—bringing lessons from building frontier models straight into the hardware. In partnership with Broadcom and alongside our other partners, we’re creating the next generation of AI infrastructure to meet the world’s growing demand. In this episode, OpenAI’s Sam Altman and Greg Brockman sit down with Broadcom’s Hock Tan and Charlie Kawwas to announce a new partnership focused on custom AI chips and systems that could redefine what’s possible in computing.

Chapters
00:00 Announcing the partnership
03:06 The scale of AI infrastructure
06:03 Collaboration and innovation in chip design
08:49 Historical context and future vision
12:10 Role of compute in AI development
15:01 Optimizing for specific workloads
18:02 Journey towards AGI
21:00 Future of AI and compute capacity
23:50 Wrap-up and future projects

Andrew Mayne (host) · Sam Altman (guest) · Hock Tan (guest) · Greg Brockman (guest) · Charlie Kawwas (guest)
Oct 13, 2025 · 28m · Watch on YouTube ↗

TRANSCRIPT

  1. 0:00–3:06

    Announcing the partnership

    1. AM

      Hello, I'm Andrew Mayne, and welcome to the OpenAI Podcast. Today, we're excited to be breaking some news involving Broadcom and OpenAI. Joining me from OpenAI is Sam Altman and Greg Brockman, and from Broadcom, Hock Tan and Charlie Kawwas.

    2. SA

      [upbeat music] A lot of ways that you would look at the AI infrastructure build-out right now, you would say it's the biggest joint industrial project in human history.

    3. HT

      We're defining civilization's next generation operating system.

    4. GB

      Like, that is a drop in the bucket compared to where we need to go. That's a big drop, but [laughing] ...

    5. AM

      So what are we talking about today? What brought you all together?

    6. SA

      So today we're announcing a partnership between Broadcom and OpenAI. We've been working together for about the last eighteen months, designing a new custom chip. Uh, more recently, we've also started working on a whole custom system. These things have gotten so complex, you need the whole thing. And we will be starting in late next year, deploying ten gigawatts of these racks, of these systems, and our chip, which is a gigantic amount of computing infrastructure, to serve the needs of the world to use advanced intelligence.

    7. AM

      So this is going to entail both compute and chip design and scaling out?

    8. SA

      This is, uh, this is a full system. So we worked-- we closely collaborated for a while on designing a chip that is specific for our workloads. When it became clear to us just how much capacity, inference capacity the world was going to need, we began to think about: Could we do a chip that was meant, uh, just for that kind of a, a very specific workload? Broadcom is the best partner in the world for that, obviously. And then, to our great surprise, this was not the way we started, um, but as we realized that we were going to really need the whole system together to support this as these- as it's gotten more and more complex, it turns out Broadcom is also incredible at helping design systems. Um, so we are working together on that entire package, and this will be, uh-- this will help us even further incre- uh, increase the amount of capacity we can offer for our, our services.

    9. AM

      So, Hock, how did this come about? You know, when did this start? When did you guys first talk about working together on this?

    10. HT

      Well, other than the fact that Sam and Greg are great people to work with, it's a natural fit because OpenAI has been doing, continues to do the most advanced models, frontier models in generative AI out there. And but as part of it, you need, you need- you continue to need compute capacity, the best, latest compute capacity as you progress in a roadmap towards a better and better frontier model and towards super intelligence. And compute is key part, and that comes with on semiconductors, and as Sam indicated, more than semiconductors. And we are, even though I say it myself, probably the best semiconductor company out there. And more than that is AI is a very, very exciting opportunity for us in terms of we, we are-- my engineers are pushing the innovation envelope

  2. 3:06–6:03

    The scale of AI infrastructure

    1. HT

      and newer and newer generations of tech- of, uh, semiconductor technology. So for us, com, uh, com- collaborating with the best generative AI company out there is a natural fit.

    2. AM

      And this isn't just chips, it's going out to scale, like ten gigawatts, and I have trouble kind of even understanding that. What does that even mean when you're talking about ten gigawatts?

    3. SA

      First of all, you said it's not just chips, that Hock touched on this, too, but the vertical integration point is, is really important. We are able to think from, like, etching the transistors all the way up to the token that comes out when you ask ChatGPT a question and design the whole system, all of the stuff about the chip, the, the way we design these racks, the networking between them, how the algorithms that we're using fit the inference chip itself, a lot of other stuff, all the way to the end product. And one of the many reasons I'm so excited about that is by being able to optimize across that entire stack, we can get huge efficiency gains, um, and that will lead to much better performance, faster models, cheaper models, all of that. As you get that better performance and cheaper and smarter models, one thing that we have consistently seen is people just want to use way more. So we used to think like, "Oh, we'll optimize things by ten x, and we'll solve all of our problems," but, you optimize by ten x, and there's twenty x more demand. Uh, so ten gigawatts, ten incremental gigawatts, this is all on top of what we're already doing with other partners and, you know, all the other data centers and, and silicon partnerships we've done. Um, ten gigawatts is a gigantic amount of capacity, and yet, if we do as good of a job as we hope, um, even though it's vastly more than the world has today, we expect that very high-quality intelligence delivered very fast on a very low price, the world will absorb it super fast and just find incredible new things to use it for. So what is the hope with this? The hope is that the kinds of things people are doing now with this compute, um, you know, writing code, doing more and automating more and more of enterprises, generating videos in Sora, whatever it is, they will be able to do that, uh, much more of it and with much smarter models.

    4. AM

      It's amazing. Uh, so Greg and Charlie, when you think about historically when people have tried to develop, you know, chips or hardware to suit whatever was the current modem for using computing at that point, what examples have you looked upon historically to figure out how to plan forward? What's been inspiring you when you think about this?

    5. GB

      Well, I'd say the number one thing, honestly, is working with good partners. Um, I think it's, like, very clear that we, as a company, are not able to do everything ourselves, and getting into actually building our own chips for our own specific workloads, uh, was not something we could do from a total standstill without working with Hock and Charlie and Broadcom. Uh, so it's just been really incredible to lean on their expertise, um, together with our understanding of the workload. And it's been actually very interesting to see-... the places where OpenAI is able to do things very differently

  3. 6:03–8:49

    Collaboration and innovation in chip design

    1. GB

      from the rest of the industry or the way that things would historically be done. Uh, for example, we've been able to apply our own models to designing this chip, uh, which has been really cool. We've been able to pull in the schedule, we've been able to get massive area reductions, right? You take components that humans have already optimized, and, uh, just pour compute into it, and the model comes out with its own optimizations. And it's very interesting. We're at the point now where I don't think any of the optimizations we have are ones that human designers couldn't have come up with. Like, usually, our experts take a look at it later and say, "Yeah, like, this was on my list," but it was like twenty things that it would have taken them another month to get to. Um, and that's actually really, really interesting, that we were coming up on, on a deadline working with Charlie's team, and we were running optimizations. We had a choice of: do we actually take a look at what those optimizations were, or do we just keep going until the deadline and then take a look after? We decided, of course, you've got to just keep going. And so we've really been building up this expertise in-house to understand this domain, and that's something we actually think can help lift up the whole industry. But I think that we are heading to a world where, uh, AI intelligence is able to help humanity make new breakthroughs that just would not be possible otherwise, and we're going to need just as much compute as possible to power that. Um, like, one example of something very concrete is that, you know, we are in a world now where ChatGPT is changing from something that you talk to interactively, to something that can go do work for you behind the scenes. If you've used features like Pulse, you wake up every morning, it has, uh, some really interesting, uh, things that are related to, to what you're interested in. 
It's very personalized, and our intent is to turn ChatGPT into something that helps you achieve your goals. The thing is, we can only release this to the pro tier because that's the amount of compute that we have available, and ideally, everyone would have an agent that's running for them twenty-four/seven behind the scenes, helping them achieve their goals. And so ideally, everyone has their own accelerator, has their own compute power that's just running constantly, and that means, you know, there's ten billion humans. We are nowhere near being able to build ten billion chips, and so there's a long way to go before we are able to saturate just, not just the demand, but what humanity really deserves.

    2. AM

      So, Charlie, being very deeply technical and being with a company that's been at a number of forefronts of, you know, some of these revolutions, what's it been like working with a company like OpenAI and working with Greg on this?

    3. CK

      So for us, it's been absolutely exciting and refreshing because the beauty of the work we do together is focus on a certain workload. We started actually first looking at the IP, uh, and AI accelerator, which is what we call the XPU.

    4. AM

      Mm.

    5. CK

      And then we realized very quickly that we now can actually go to the workload all the way down to the transistor, and as Greg was just explaining, how we can both work together to go customize that platform for your workload,

  4. 8:49–12:10

    Historical context and future vision

    1. CK

      resulting in the best platform in the world. Then we realized, as Sam was saying earlier on, it's not just that XPU or accelerator, actually, it's the networking that needs to go to scale it up, scale it out, and scale it across. And so suddenly we started seeing that we actually can drive next level of standardization and openness that not just only benefits us, I think it actually will benefit the entire ecosystem, and it gets Gen AI to an AGI much faster. So very excited about the technical capabilities of the teams we have, but also the vision, and I think the, uh, speed at which we've been moving.

    2. AM

      The-- I'm still kind of wrapping my head around the scale of it, because it's just from both trying to design something like a chip and to help, you know, figure out how you're going to get the maximum efficiency on this, to just the size of it, the infrastructure, what's involved in this. This is a global effort, and what comparisons have you been able to draw for this to other examples in history?

    3. GB

      I, I always think the historical analogies are tough, but this is as, as far as I know, I, I don't like, know what fraction, you know, building the Great Wall was of global GDP at the time. But a lot of ways that you would look at the AI infrastructure build-out right now, you would say it's the biggest joint industrial project in human history. And this is like-- this requires a lot of companies, a lot of countries, a lot of industries to come together, and a lot of stuff has to happen at the same time, and we've all got to kind of like, invest together. But at this point, given everything we see coming on the research front, given all of the value we see being created on the business front, I think the whole industry has decided this is like a, a very good bet to take. But it is huge. You go to one of these, even one-gigawatt data centers, and you look at the scale of what's happening there, and it's like a s- you know, tiny city. Like it's, this is... It's a big, complex thing. So, uh, it is just like incredible scale. To, to the point of this being a massive collaborative project, I feel like whenever I call, call Charlie, he's in a different part of the world, trying to secure capacity, trying to find a way to help us build what we're trying to do together.

    4. CK

      E- e- exactly. Actually, one of the coolest thing actually I was thinking about is what we're doing together in this wonderful partnership, we're defining civilization's next generation operating system, and we're doing it, as you're saying, at the transistor level, building new fabs, building new, um, manufacturing sites, all the way to building these racks and ultimately the data centers. You're talking about ten gigawatts of data centers.

    5. AM

      Yeah, I think it's an important thing to keep track of, is often people get fixated just on the chips themselves, and it's kind of like thinking the National Highway Project was about selling asphalt or railroads are about steel. In reality, it's the things become possible on top of that. And you've probably thought a lot about that, like what happens when-

    6. HT

      Well, I think this is like railroad, internet. That's one. I think this is becoming, over time, critical infrastructure or critical utility, and more than just critical utility for, say, h- ten thousand enterprises. This is critical utility over time, right, Sam, for eight billion people globally. That's I think, the big... It's like the industrial revolution of a different sort coming forth.

  5. 12:10–15:01

    Role of compute in AI development

    1. HT

      But it doesn't- i- it cannot be done with just one pa- party, or we like to think, get done with two, but more than it, it needs-... a lot of partnerships, it needs in, uh, collaboration across an ecosystem. And also because of that, it's important to create, much as we say about developing chips for specific workloads, applications, and LLM, it also requires somewhat standards that are open-

    2. AM

      Mm-hmm

    3. HT

      - more transparent for all to use, because you need to build up a whole infrastructure at the end of the day for as, uh, to become a critical utility for six billion people in the world. And we're very excited, frankly, which is why we think we make great partners.

    4. AM

      Yeah.

    5. HT

      Because I think we share the same conviction.

    6. AM

      Mm-hmm.

    7. HT

      And m- m- more than that, it is about scaling computing to create breakthroughs in super intelligence and models. It's building the foundation of that.

    8. AM

      You guys have a lot on your plate. Why design chips now?

    9. GB

      Well, you know, this project, we've probably been working on it for eighteen months now, and it's moved incredibly quickly. Um, we've hired some really amazing people. Um, and that I think what we've found is that we have a deep understanding of the workload, and we work with a number of parties across the ecosystem, and that there's a number of chips out there that I think are, are really incredible. Um, and that there's a niche for, for each one. And so we've really been looking for like, for specific workloads we feel are underserved, how can we build something that will be able to accelerate what's possible? And so I think that, um, that, that ability to say that we are able to do the full vertical integration for something we see coming, but it's hard for us to work through other partners, like that's a very clear use case for this kind of project.

    10. HT

      Yeah. Actually, more than that, and Greg, you put it very well. Really, why you want to do your ch- your chip is computing is a big part of what's driving this journey towards, uh, super intelligence, toward creating better and better frontier model. It's really a lot of it down to computing, and not just any computing, computing that is effective, high performance, and efficient, given especially on power. And what Greg is saying is exactly what we learned and saw here. For instance, if you want to train, you, you design chips that are b- that are much stronger in capac- uh, in computing capacity, measured TFLOPS, as well as network. Because it's not just one chip that makes it happen, it's a cluster, as Charlie put it. But if you want to do inference, you put in more memory and memory access

  6. 15:01–18:02

    Optimizing for specific workloads

    1. HT

      relative to compute. So you are actually over time, creating chips optimized for particular workloads, uh, applications, as we go, as we go along. And that, at the end of the day, is what will create the most effective, um, models, is a platform that you want to create end to end.

    2. GB

      And also, one piece of historical context is that when we started OpenAI, we didn't really have that much of a focus on compute. We felt that the path to AGI is really about ideas. It's really about tryouts and stuff. Eventually, we'll put the right conceptual pieces in place, and then AGI. And about two years in, in 2017, the thing that we found was that we were getting the best results out of scale. It wasn't something we set out to prove. It was something we really discovered empirically because of everything else that didn't work nearly as well. And, you know, the first results were scaling up our reinforcement learning in the context of the video game Dota 2.

    3. AM

      Mm-hmm.

    4. GB

      Did you guys pay attention to the Dota 2 project back in the day?

    5. AM

      Yes. [chuckles]

    6. GB

      It was a super cool project, and we really saw you scale up by 2X, and suddenly your agent is 2X better. It's like, okay, we have to push this to the limit. And at that point, we started paying attention to the whole ecosystem, right? There were all sorts of chip startups with novel approaches that were very different from GPUs, and we started giving them a ton of feedback, saying: Here's where we think things are going. It needs to be models of this shape. And honestly, a lot of them just didn't listen to us, right? And so it's, like, very frustrating to be in this position where you say: We see the direction the future should be going, but we have no ability to really influence it, besides sort of, you know, just like sort of trying to, to influence other people's roadmaps. And so by being able to take some of this in-house, we feel like we are able to actually realize that vision. Um, and again, in a way that like we hope that we can show a direction and other people will fill in, because the amount of compute required to bring our vision of AGI to the world, ten gigawatts is not enough. Like, that is a drop in the bucket compared to where we need to go.

    7. AM

      That's a big drop, but [laughing]

    8. GB

      [laughing]

    9. HT

      [laughing]

    10. AM

      The bucket's really big.

    11. GB

      Yeah.

    12. AM

      Uh, what becomes possible with this when you're building your own chips for inference and for training? You know, where can you take this?

    13. SA

      To zoom out a little bit, if you, if you simplify what we do in this whole process to, you know, melt sand, run energy through it, and get intelligence out the other end, you're not literally melting sand. [laughing]

    14. AM

      [laughing]

    15. SA

      Like, it's a nice visual.

    16. AM

      That's a good one. [laughing] That's a good one.

    17. GB

      That's all we have to do.

    18. AM

      I like that. [chuckles]

    19. SA

      The, what, what we want is the most intelligence we can get out of each unit of energy.

    20. AM

      Mm-hmm.

    21. SA

      And because that will become the gate at some point. And, and I hope what this whole process will show us, which is, you know, from the model we design to the chip, to the rack, we will be able to wring out so much more intelligence per watt. And then everybody that's using these models in all of these incredible ways will do so much with it. Um, that's what I hope for.

  7. 18:02–21:00

    Journey towards AGI

    1. HT

      And you control your own destiny.

    2. GB

      Yeah.

    3. HT

      If you do your own chips, you control your destiny.

    4. AM

      Yeah, it's, it's interesting to think about how the things that we're doing today are pretty amazing and remarkable, but we're using stuff that wasn't necessarily designed specifically for the way we're doing it.

    5. SA

      Oh, I, I... I mean, the GPUs of today are incredible, incredible things. Uh, I'm-... very grateful, and we will continue to really need a lot of those. The, the flexibility and the ability to let us do fast research is, is amazing. But you are right that, that, that as we get more and more confident in what the shape of the future is going to look like, a very optimized system to the workload will let us wring more out per watt. That's great.

    6. CK

      And it's a long journey that takes decades. So if, if you go back to Hock's example, take railroads, it took about a century-

    7. SA

      Mm-hmm.

    8. CK

      -to roll it out as a critical infrastructure. If you take the internet, it took about thirty years. This is not going to take five years. It's going to take a long time. So I think as we collectively, especially with this partnership, continue to wring and figure out ways to wring out more tokens out of it, we'll discover that, oh, for this training or research, maybe a GPU is great, or maybe, you know what? We can take whatever we're doing with Greg. It's actually a platform that allows you, like a Lego block, to take in things in and out, and now suddenly we can get another XPU or an accelerator for next gen that's targeted at a training or an inference or a research.

    9. GB

      Yeah, and to, to the point that Sam said of GPUs have really come an incredible way. In twenty seventeen, when we started looking at all these other accelerators, uh, it was actually very non-obvious about wha- what the landscape would look like in five, ten years. And I think it's really a testament to the, you know, to, to companies like Nvidia, AMD, um, for how much the GPU has just moved forward and continued to be the dominant, uh, accelerator. Um, but at the same time, there's a massive design space out there, right? And I think that what we see is workloads that are not served through, uh, existing platforms, and that's where that full vertical integration is something unique.

    10. AM

      It's interesting, too, because the idea that you'd want to put inferences close to the user is something kind of relatively new. You know, we understood training, but then you think about, like, the number of people every day using these products and how much they need compute to do fun things or serious things. And when you start thinking about kind of like the scale of it, like we talked before, I keep coming back to it's a very big thing, um, where, you know, where does it keep going? Is it just a thing that we're going to continuously find new things to use compute for?

    11. SA

      The first cluster OpenAI had, uh, the first one that I can remember the energy size for, was two megawatts. Um-

    12. AM

      Adorable. [laughing]

    13. GB

      Yeah. We got things done with those two.

  8. 21:00–23:50

    Future of AI and compute capacity

    1. SA

      I don't remember when we got to twenty. I remember when we got to two hundred. Uh, you know, and we will finish this year a little bit over two gigawatts, and these recent partnerships will take us close to thirty. Um, and the world has done far more than I thought they were going to do. Turns out you can, like, serve, you know, ten percent of the world's population with ChatGPT on... And do the research, and do Sora, and do our API, and a few other things on two gigawatts. But think about how much more the world would like to do than they get to do right now. If we had thirty gigawatts today, with today's quality of models, I think you would still saturate that relatively quickly, uh, in terms of what people would do, especially with the lower cost we'll be able to do with this. But the thing we have learned again and again is, m- let's say we can push GPT-6 to feel like, you know, thirty IQ points past GPT-5, something, something big. The um-- a- and that it can work on problems not for a few hours, but for a few days, weeks, months, whatever. The amount... And while we do that, we bring the cost per token down. The amount of economic value and sort of surplus demand that happens each time we've been able to do that goes up a crazy amount. So you can see, to pick a, I think, well-known example at this point, when ChatGPT could write a little bit of code, people actually used it for that. They would, like, very painfully paste in their code and wait, and they would say, "Do this for me," and paste it back in, and whatever. The models, you know, couldn't do much, but they could do a few things. The models got better, the UI- the UX got better, and now we have Codex. Codex is growing unbelievably fast and can now do, like, a few hours of work at a higher level of kind of capability. And when that's possible, the am- the demand increase is crazy. 
Maybe the next version of Codex can do, like, a few days of work at kind of one of the best engineer you know, level, or maybe that takes a few more versions. Whatever, it'll get there. Think how much demand there will just be for that, and then do it for every knowledge work industry.

    2. GB

      And one way I like to think of it is that intelligence is the fundamental driver of economic growth, of increasing the standard of living for everyone. And what we're doing with AI is actually bringing more intelligence and amplifying the intelligence of everyone. And so as these models get better, I think everyone's going to become more productive. The output of what is possible is going to just be totally different from what, what exists today.

    3. AM

      It's interesting, too, that going from a point when with GPT-3, which was pretty cost, you know, it was expensive comparatively, to where you're at, you know, a level of a GPT-5, and the fact that you can provide that freely to people. And is that a motivating factor for you, the fact that every time you create these new efficiencies, that it just benefits so many more people?

    4. GB

      Yes.

    5. CK

      Absolutely. [laughing] Absolutely.

  9. 23:50–28:48

    Wrap-up and future projects

    1. AM

      Yeah.

    2. CK

      Absolutely. And, you know, and from our side on hardware, compute capacity, where the r- where to some, to some extent, the rubber hits the road on this, it's really incumbent on us to keep optimizing, pushing the en- uh, the envelope on leading-edge technology. Well, and there's still room to go, and there's room to grow even on where we are as we go from two nanometers go- going forward, less smaller than two nanometers, as we start doing all kinds of different technology. It is really great, exciting times, especially for the hardware s- and the semiconductor industry.

    3. SA

      What Broadcom has done here is really, like, quite incredible. Um, it-

    4. SA

      ... it used to be extremely difficult for a company like ours to think about making a competitive chip. In fact, so hard, we just wouldn't have done it, and I think a lot of other companies wouldn't have done it as well. And all of these sort of this customized chip and system to a workload just wouldn't be a thing in the world. But the fact that they have pushed so hard and so well on making it so that they can-- a company can partner with them, and they can do a miracle of technology chip quickly and at scale. Unfortunately, they do it for all of our competitors, too, [laughing] but, uh, hopefully our chip will be the best.

    5. HT

      Yes, of course. [laughing]

    6. SA

      Thank you. Uh, it's really quite incredible.

    7. GB

      Yeah. And, and I think also not just what they can do for us today, but looking at the upcoming roadmap, it's just so exciting, the kinds of technologies that they're going to be able to bring to bear for us to be able to utilize.

    8. HT

      Well, it's just-

    9. GB

      Oh, go ahead

    10. HT

      ... the excitement-

    11. GB

      Yeah

    12. HT

      -of, I mean, enabling joint and collaboratively models, ChatGPT five, six, seven, on and on, and each of them will require a different chip, a better chip, a more ex-, uh, a more developed chip, advanced chip, that we haven't even begun to figure out how to get there. But we will.

    13. GB

      And actually, the GPTs are definitely gonna be an increasing part of that.

    14. HT

      Yes.

    15. GB

      It will be very interesting.

    16. CK

      We're, we're actually looking forward to that because my software engineers now already use that from a software point of view, and it's delivering efficiencies of dozens of engineers.

    17. GB

      Really?

    18. CK

      Yes.

    19. GB

      Great.

    20. CK

      On the hardware side, we're not there yet, but, but, you know, the good news-

    21. HT

      We'll get there very soon.

    22. CK

      Right.

    23. GB

      We should talk.

    24. CK

      Yes.

    25. GB

      We've got some things we should-

    26. CK

      We should absolutely leverage this, but I was gonna say, with respect to compute, so when we started building these XPUs, you can maximum build a certain number of compute in eight hundred square millimeter. That's it. Now, today, we're actually working together to ship multiple of these in a two-dimensional space. The next thing we're talking about is stacking these into the same chip, so now we're actually going in the, you know, Y dimension, uh, or Z dimension, if you want to think three-dimensional. And then the, the last step we're actually also talking about is now we're gonna bring optics into this, which is actually what we just announced, which is a hundred terabits of, uh, switching with optics integrated all into the same chip. So these are sort of the technologies that will take compute, the size of the cluster, the, the total performance and wattage of the cluster to a whole new level, that I think it will keep doubling at least every six to twelve months.

    27. AM

      What kind of timeframe are we talking about? When are we gonna first start to see what's coming out of this relationship?

    28. GB

      End of, end of next year, and then we'll deploy very rapidly over the next three years.

    29. HT

      Absolutely.

    30. CK

      Yeah. Greg and I are talking about this at least once a week.

Episode duration: 28:49
