OpenAI Codex and the future of coding with AI — the OpenAI Podcast Ep. 6
EVERY SPOKEN WORD
55 min read · 11,078 words
- 0:00 – 1:15
Intro
- AMAndrew Mayne
Hello, I'm Andrew Mayne, and this is the OpenAI Podcast. In this episode, we're going to speak with OpenAI co-founder and president Greg Brockman, and Codex engineering lead Thibault Sottiaux, and we're going to talk about agentic coding, GPT-5 Codex, and where things might be heading in 2030.
- GBGreg Brockman
Just bet that the greater intelligence will pan out in the, in the long run.
- TSThibault Sottiaux
Uh, and it's just really optimized for, you know, what people are using, uh, GPT-5 within Codex for.
- GBGreg Brockman
How do you make sure that that AI is producing things that are actually correct?
- AMAndrew Mayne
We're here to talk about Codex, which first, I've been using it, um, since actually, since I worked here, the first version of this, and then now you guys have the new version of this. I was playing with it all weekend long, and I've been very, very impressed by this, and it's amazing how far this technology has come in a few years. I would love to find out the early story. Like, where did the idea of even using a language model to do code come from?
- GBGreg Brockman
Well, I mean, I remember back in the GPT-3 days-
- AMAndrew Mayne
Mm
- GBGreg Brockman
... seeing the very first signs of life, of take a docstring and a Python definition of, of a function-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... name, and then watching the model complete the code. And as soon as you saw that, you knew
- 1:15 – 4:00
The first sparks of AI coding with GPT-3
- GBGreg Brockman
this is going to work, this is going to be big. And I remember at some point we were talking about these aspirational goals of imagine if you could have a language model that would write a thousand lines of coherent code. [chuckles] Right? That was, like, a big goal for us. And the thing that's kind of wild is that that goal has come and passed, and I think that we don't think twice about it, right? I think that while you're developing this technology, you really just see the holes, the flaws, the things that don't work. Um, but every so often it's good to, like, step back and realize that, like, actually things have just, like, come so far.
- TSThibault Sottiaux
It's incredible how used we get to things improving all the time, and how it has just become like a, a daily driver, and you just use it every day, and then you reflect back to like a month ago, this wasn't even possible. Um, and this just continues to happen. Uh, I think that's quite fascinating, like how quickly humans adapt to new things.
- GBGreg Brockman
Now, now, one of the struggles that we've always had is the question of whether to go deep on a domain.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? Because we're really here for the G, right, for AGI, general intelligence.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And so to first order, our instinct is just push on, making all the capabilities better at once. Coding's always been the exception to that, right? We, we really have a very different program that we use to focus on coding data, on code metrics, on trying to really understand how do our models perform on code. Um, and that, you know, we've started to do that in other domains too, but, but for programming and coding, that that's been like a very exceptional focus for us. And, you know, for GPT-4, we really produced a single model that was just a leap on all fronts. Um, but we actually had trained, you know, the Codex model-
- AMAndrew Mayne
Mm.
- GBGreg Brockman
... and I remember doing, like, a Python sort of focused model. Like, we were really, really trying to, to push the level of coding capability back in, you know, 2021 or so. And, uh, you know, I remember when we did the Codex demo, that was maybe the first demonstration of what we'd call vibe coding today, right?
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
I remember building this interface and having this realization that for just standard language model stuff, the interface, the harness is so simple, right? You're just completing a thing, and, you know, maybe there's a follow-up turn or something like that, but, but that's it. For coding, this text actually comes to life, right? You need to execute it, it needs to be hooked up to tools, all these things. And so you realize that the harness is almost like equally part of how you make this model usable as the intelligence. And so that is, that is something that I think we, we kind of knew from that moment. Um, and it's been interesting to see as we got to more capable models this year and really started to focus on not just making the raw capability, like how do you win at programming competitions, but how do you make it useful, right? Training in a diversity of environments, really connecting to how people are going to use it, and then really building the harness, which is something that Thibault and his team have, like, really pushed hard.
- AMAndrew Mayne
Could you unpack, like, a harness, what that means in sort of simple terms?
- TSThibault Sottiaux
Yes. So it's quite simple. You have the model,
- 4:00 – 7:20
Why coding became OpenAI’s deepest focus area
- TSThibault Sottiaux
and the model is just capable of input/output, and what we call the harness is, how do we integrate that with the rest of the infrastructure so that the model can actually act on its environment?
- AMAndrew Mayne
Mm-hmm.
- TSThibault Sottiaux
Uh, so it's the set of tools, it's the way that it's, uh, looping, so the agent loop, as, uh, we refer to it as the agent loop. And, um, it's-- in, in essence, it's fairly simple, but when you start to integrate these pieces together and really train it end to end, you start to see, like, pretty magical behavior, uh, and an ability of the model to really act and create things on your behalf and be a true collaborator. So think about it a little bit as, you know, the harness being your body-
- AMAndrew Mayne
Mm-hmm
- TSThibault Sottiaux
... and the model being your brain.
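The body-and-brain division of labor Thibault describes can be sketched as a minimal agent loop. This is an illustration of the concept, not Codex's actual implementation: the model here is a stub, and the tool names and message format are hypothetical.

```python
# Minimal sketch of a harness: an agent loop that routes a model's
# proposed actions to real tools and feeds the results back until
# the model declares the task finished. The "model" is a stub here;
# in a real harness it would be an LLM call.
import subprocess

def mock_model(transcript):
    """Stand-in for the LLM: pick the next action given the transcript so far."""
    if not any(step["role"] == "tool" for step in transcript):
        return {"type": "tool", "name": "shell", "args": ["echo", "hello from the sandbox"]}
    return {"type": "finish", "message": "Task complete."}

def run_tool(name, args):
    """The harness's tool layer: execute the action the model requested."""
    if name == "shell":
        result = subprocess.run(args, capture_output=True, text=True)
        return result.stdout.strip()
    raise ValueError(f"unknown tool: {name}")

def agent_loop(task, model=mock_model, max_steps=10):
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(transcript)
        if action["type"] == "finish":
            return action["message"], transcript
        output = run_tool(action["name"], action["args"])
        transcript.append({"role": "tool", "content": output})
    return "Step limit reached.", transcript

message, transcript = agent_loop("say hello")
```

Everything outside `mock_model` is "body": tool dispatch, the loop, and the growing transcript that gives the model context for its next decision.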
- AMAndrew Mayne
Okay. It is, it's interesting to see, like, yeah, how far it came from the GPT-3 days, where you literally had to write, like, you know, uh, commented code and say, like: "This function does this, it's Python. Put your hashtag in front of that," whatever. And it's just interesting to see how the models have now become just naturally, intuitively good at coding. And you mentioned, you know, trying to decide between a general-purpose model and how important code is. Was it just outside demand, people telling you they wanted these models to be better at code, or was this coming internally because you guys wanted to use this more?
- GBGreg Brockman
Like both.
- AMAndrew Mayne
Yeah.
- GBGreg Brockman
Absolutely both. And I remember, you know, in, I think 2022 is when we worked with GitHub to produce-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... GitHub Copilot. And the thing that was very interesting there was that for the first time, you really felt what is it like to have an AI in the middle of your coding workflow, and how can it accelerate you? And I remember that there were a lot of questions around the exact right interface. Do you want ghost text so it just does a completion?
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Do you want a little dropdown with a bunch of different possibilities? Um, but one thing that was very clear was latency was a product feature... and the, the constraint for something like an autocomplete is that fifteen hundred milliseconds, right? That's, like, the time that you have to produce a completion. Anything that's slower than that, it could be incredibly brilliant, no one wants to sit around waiting for it.
- AMAndrew Mayne
Hmm.
- GBGreg Brockman
And so the, the mandate that we had, the clear signal we had from, from users and from, you know, the product managers and all the, uh, uh, all, all the people thinking about the product side of it, is get the smartest model you can, subject to the latency constraint.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And then you have something like GPT-4, which is much, much smarter, but it's not gonna hit your latency budget. What do you do? Is it a useless model? Like, absolutely not. The thing you have to do is you change the harness, you change the interface. And I think that that's, like, a really important theme: you need to kind of co-evolve the interfaces and the way that you use the model around its affordances. And so super fast, smart models are gonna be great, but the incredibly smart but slower models, they're also worth it. And I think that we've always had a thesis that the, um, you know, that the returns on that intelligence are worth it.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And it's never obvious in the moment because you're just like: Well, it's just gonna be too slow. Why would anyone w- want to use it? But I think that our approach has very much been to say just that, that the greater intelligence will pan out in the, in the long run.
- AMAndrew Mayne
It was, it was hard for me to wrap my head around where that was all headed back when working on GitHub Copilot, because at that point, we're used to, like you said, the completion. Ask it to do a thing, it completes a thing. And I think I didn't really understand how much more value you would get out of building a harness, adding all these capabilities there. It just seemed like all you need is the model, but now you realize the tooling, everything else matters, can make such a big difference. And then you brought up the idea of modalities, and now we have,
- 7:20 – 11:45
What a “harness” is and why it matters for agents
- AMAndrew Mayne
um, CLI, Codex CLI, so I can go in the command line, I can do this. There's a plug-in for VS Code, so I can go use this there, and then also I can deploy stuff to the web and do that, and I don't think I fully kind of comprehend the value of that. And so, like, how is this something you're using? How are you kind of deploying these things yourself? Like, where are you finding the most, you know, utility out of it?
- TSThibault Sottiaux
I think just to go back a little bit, like the first signs that we saw is, like, we had a lot of developers at the company, outside of the company, like our users, use ChatGPT-
- AMAndrew Mayne
Mm
- TSThibault Sottiaux
... to help them debug, like, very complex problems. And one thing that we clearly saw, like, people are trying to get more and more context into ChatGPT, and you're trying to get bits of your code and stack traces and things, and then you paste that, and you present that to a very smart model to get some help. Uh, and interactions were starting to get more and more complex, up to some point where we realized, like, hey, maybe instead of the user driving this thing, maybe let the model actually drive the interaction and find its own context, and then find its way and be able to debug, you know, this hard problem by itself so that you can just sit back and, you know, watch the model do the work. So it's sort of like reversing that interaction, which, um, you know, led to, I think, thinking a lot more about the harness and giving the model the ability to act.
- GBGreg Brockman
And we, we iterated on form factors.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? I mean, I remember at the beginning of the year, we had a couple of different approaches. We had, you know, sort of the async agentic harness, but we also had the local experience and a couple different implementations of it. And, uh, yeah, d-
- TSThibault Sottiaux
We, we actually started to play a little bit with this idea of, like, running it in the terminal, um, and then we felt that was not AGI-pilled enough.
- AMAndrew Mayne
Mm-hmm.
- TSThibault Sottiaux
Uh, we needed the ability to run this at scale and remotely and just close the laptop and have, you know-
- AMAndrew Mayne
Mm
- TSThibault Sottiaux
... the agent just, like, continue to do its work, and then you can maybe follow it on your phone and interact with it there. That seemed, like, very cool, so we pushed on that, but we actually had a prototype of it fully working in a terminal, and, um, people were using that productively at OpenAI. We decided to not, uh, launch this as a product. It didn't feel, like, polished enough.
- GBGreg Brockman
Mm-hmm.
- TSThibault Sottiaux
It was called 10x-
- GBGreg Brockman
Mm
- TSThibault Sottiaux
... uh, because we felt like it was giving us this ten x productivity boost. But then we decided to, like, you know, just experiment with different form factors and, like, really go all in with the async form factor-
- GBGreg Brockman
Mm
- TSThibault Sottiaux
... initially, and now we've kind of gone back a little bit on that and re-evolved and said, like: Hey, actually, this agent, we can bring it back to your terminal. We can bring it in your IDE. But the thing that we're really trying to get right is, like, this entity, this collaborator that's working with you and then bringing that to you in, in the tools that you're already using as a developer.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Yeah, and there are other shots on goal as well, right? So we had a version where there was a remote daemon that would connect to, uh, to a, a local agent-
- TSThibault Sottiaux
Mm
- GBGreg Brockman
... and so you kind of could, could, uh, could get both at once. And I think that part of the, the evolution has been that there's almost this matrix of different ways you could try to deploy a tool, right? There's this, like, async, it has its own computer off in the cloud. There's the local, that it's running synchronously there. Um, you can blend between these. Um, that there's a question of, you know, there's been a question for us of how much do we focus on trying to build something that is externalizable, right? That is, like, useful in the diversity of environments that people have out there versus really focused on our own environment-
- TSThibault Sottiaux
Mm
- GBGreg Brockman
... and try to make it so that things work really well for our internal engineers. And one of the challenges has been we wanna kinda do all of this, right? We ultimately want tools that are useful to everyone, but if you can't even make it useful for yourself, how are you going to make it extremely useful for everyone else? And so part of the challenge for us has been really figuring out where do we focus, and, like, how do we achieve the, the sort of biggest bang for the buck in terms of, of, of our engineering efforts? And, you know, for me, one of the things that's been an overarching focus has been we know that coding and building very capable agents is one of the most important things that we can do this year. Um, at the beginning of the year, we set a company goal of an agentic software engineer by the end of the year.
- AMAndrew Mayne
Hmm.
- GBGreg Brockman
And figuring out exactly what that means and how to instantiate that, and how to bring together all the opportunity and all the- the kind of compute that we have to bear on this problem, like, that has been a great undertaking for many, many people at OpenAI.
- AMAndrew Mayne
... So you mentioned that you had the tool 10x, and that was an internal tool, and that seemed to be something at some point you said, "Oh, this is really useful to other people." It's got to be hard to decide when to do that and when not to, and how much to sort of prioritize that. You know, we've seen Claude Code has become extremely powerful, which I imagine is probably a similar story,
- 11:45 – 16:10
Lessons from GitHub Copilot and latency tradeoffs
- AMAndrew Mayne
was something that was used internally and then became something deployed. When you start to think about next steps of, you know, where do you decide to take it next? Where do you decide to put the emphasis? You know, you mentioned before, you know, I can now run things in the cloud, run these web-- you know, do these kind of agentic-like tasks where I walk away, and my problem is just it's such a new modality. It's really, really hard for me to think about and-- but sometimes these things have to sit around for a while, and people sort of discover them independently. And have you found that internally, that somebody says, "Oh, now I get it?"
- GBGreg Brockman
I'd say absolutely, right? And I think that, you know, my perspective is that we kind of know the shape of the future-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... right, of the long term. It is very clear that you're going to want an AI that has its own computer, that is able to just run, you know, delegate to a fleet of agents-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... and be able to, to solve multiple tasks in parallel. You should wake up in the morning, you're sipping your coffee, you know, answering questions for your agent, like providing some review, being like: "Oh, no, this wasn't quite what I meant." Um, this workflow clearly needs to happen, but the models aren't quite smart enough for this to be the way that you interact with them.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And so having an agent that is really there in your terminal, in your editor, to help you with the s- way that you do your, your work that looks very similar to the way you would have done it a year ago, that's also the present. And so I think that the, the way that we've seen it is almost we're blurring together. Here's what the future looks like-
- AMAndrew Mayne
Mm
- GBGreg Brockman
... but we also can't abandon the present. And thinking about how do you bring AI into code review, and how do you make it so that it appears proactively and does work for you that's useful? Um, and then you have a whole new challenge as well of if you have a lot more PRs, like, how do you actually sort through those to find the ones you actually want to merge? And so I think we've kind of seen all of this opportunity space, and we've seen people start to change how they develop within OpenAI, how they even structure their code bases.
- TSThibault Sottiaux
Yeah, I think there are two things to that effect that really combine, um, and mean, you know, this is where we're at today, is: one, infrastructure is hard, and we would love for, you know, all, all of everyone's code and, like, tasks and packages to be, like, perfectly containerizable, and so we can run them at scale. That's not the case. Like, uh, people have very thorough and complex setups that probably only run on their laptop, and we want to be able to leverage that and meet, you know, people where they are so that, you know, they don't have to configure things specifically for Codex. That gives you this very easy entry point, uh, into experiencing, you know, what a very powerful coding agent can do for you. And this also, at the same time, lets us experiment with, you know, what the right interface is. Uh, six months ago, we weren't playing with these kinds of tools, and this is all very new and evolving fast, and we have to continue to iterate here and innovate on, like, what the right interface and what the right, um, way to collaborate with these agents are. And we don't feel like we have really nailed that yet. That's going to continue to evolve, but bringing it, uh, to, like, a zero setup, extremely easy to use out of the box, you know, allows a lot more people to benefit from it and, like, play with it, and for us to get the feedback so that we can continue to innovate. That's very important.
- GBGreg Brockman
I remember at the beginning of the year, talking to one of our engineers, who I think is, is really fantastic, and he was saying that ChatGPT, we had this integration where it could automatically see the context in his terminal, and he's like, "It's transformative," because he doesn't have to, like, copy-paste errors. He just, like, can instantly be like: "Hey, like, you know, what's, what's the bug?" And it would just tell him, and it was great, right? And you realize that it was an integration that we built that was so transformative. It wasn't about a smarter model.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And I think that one thing that's very easy to get confused by is to really focus on only one of these dimensions and be like, which one matters? Because the answer is, they kind of both matter.
- AMAndrew Mayne
Mm.
- GBGreg Brockman
And the way I've always thought about this, I remember when we were originally releasing the API back in 2020, um, is there's two dimensions to what makes an AI desirable. There's intelligence, which you can think of as one axis, and then there's convenience, which you can think of as latency, you could think of as cost, you could think of as, um, the integrations available to it. And there's some acceptance region, right? Where it's like, if the model's incredibly smart, but it takes you, like, a month to run it or something, like, you still might, right? If what it's gonna output is such a valuable piece of code or, you know, cure for a certain disease or something like that, okay, fine. Like, it's worthwhile. If the model's not that intelligent, not that capable, then all you want to do is autocomplete,
- 16:10 – 22:00
Experimenting with terminals, IDEs, and async agents
- GBGreg Brockman
so it has to be incredibly convenient, zero cognitive tax for you to think about what it's suggesting, that kind of thing. And where we are is, of course, somewhere on the spectrum now. We now have smarter models that are, you know, reasonably less convenient than autocomplete, but still, like, more convenient than you have to sit around and wait for a month for, for, for the answer to appear. And so I think that a lot of our challenge is figuring out when do you invest in pulling that convenience to the left? When do you invest in pushing the, the intelligence up? And it's a massive design space, which is what makes it fun.
- AMAndrew Mayne
Yeah. I don't know if you remember, but I, I made an app that was featured on the launch back in 2020, AI Channels.
- GBGreg Brockman
Uh-huh, of course.
- AMAndrew Mayne
And it was... And that was-- yeah, the challenge was GPT-3 was very capable, but I had to write these, like, six-hundred-word prompts to get it to do stuff, and because it's six cents per thousand tokens and the latency, I'm like, "I don't think this is the world for this right now."
- GBGreg Brockman
Yes.
- AMAndrew Mayne
And then GPT-3.5 and GPT-4, and then all of a sudden, you see all those capabilities, and it was hard for me to say why, but then you see, all of a sudden, the things come together. And you mentioned, you know, the idea of just having, you know, the model be able to see the context inside of the, you know, where you're working. And I remember when I was copy-pasting using ChatGPT into my workspace, and it reminded me of going into a grocery store and refusing to get a cart and just carrying everything to the checkout. I'm like-
- GBGreg Brockman
Mm
- AMAndrew Mayne
... "This is terribly inefficient." Once you put things on wheels, it works really well.
- GBGreg Brockman
Mm-hmm.
- AMAndrew Mayne
And I think we're seeing all kinds of those unlocks now. Now, the problem I deal with is, when I sit down to work on something, is: do I go into CLI? Do I go use the VS Code plugin? Do I go into Cursor? Do I use some other tool? And how do you guys figure this out?
- TSThibault Sottiaux
... Right now, we're still at the experimentation phase-
- GBGreg Brockman
Yeah
- TSThibault Sottiaux
- where we're trying different ways for you to interact with the agent and bring it where you're already productive. So, for example, Codex is now in GitHub.
- GBGreg Brockman
Mm-hmm.
- TSThibault Sottiaux
You can @mention Codex, and it will do work for you. If you do @codex, fix this bug, or move the tests over here, it'll go and, like, run off and, like, do it with its own little laptop, you know, on our data centers, and you don't have to think about it. But if you're working with files in a folder, um, you know, then you have that decision that: Are you gonna do it in your IDE? Are you gonna do it in terminal? What we're seeing is users are developing, like, power users are developing very complex workflows with the terminal more.
- GBGreg Brockman
Mm-hmm.
- TSThibault Sottiaux
And then when you're actually working on a file or a project, you prefer to do it in the IDE. It's, it's a bit more of a polished interface. You can undo things. You can see the edits. You know, it's not like-
- GBGreg Brockman
Mm
- TSThibault Sottiaux
... it's just scrolling by you. And then the terminal is just an amazing also vibe coding tool, where, you know, if you don't really care that much about the, the code that's being produced, you know, you can just generate a little app. It's much more about that interaction. It elevates the interaction more instead of focusing on the code, so it's more focused on the outcome. And it just sort of, like, depends on what you want to do, but it's still very much in an experimentation phase right now, and we're trying different things out. Uh, and, you know, it's going to continue like that, I think.
- GBGreg Brockman
Yeah, I, I really agree with that, and I, I also think that a lot of our direction will be more integration across these things.
- TSThibault Sottiaux
Mm.
- GBGreg Brockman
Right? Because people are capable of using multiple tools, right? You already have your terminal, your browser, your, you know, GitHub web interface, your, you know, repo on your local machine. Um, each of these is something people have kind of learned when it's appropriate to reach for, for what tool, and I think that because we're in this experimentation phase, that these things can feel very disparate and very different, and like, you know, you have to kind of learn a new set of, you know, skills and the affordances of, of the relevant tool. Um, and as we're iterating, a lot of what's on us is to really think about how these fit together. And so you can start to see it, right, with the Codex IDE extension being able to run remote Codex tasks. And I think that ultimately our vision is that there should be an AI that has access to its own computer, its own clusters, but is also able to look over your shoulder, right? They can also come and help you locally, and these shouldn't be distinct things.
- TSThibault Sottiaux
Right. And it's like this one coding entity that is there to help you and collaborate with you. Like, when I collaborate with Greg, you know, I don't complain that sometimes you're on Slack-
- GBGreg Brockman
[chuckles]
- TSThibault Sottiaux
- sometimes I talk to you in person.
- GBGreg Brockman
Sometimes you complain. [laughing]
- TSThibault Sottiaux
[laughing] You know, sometimes we interact, like, through a GitHub, like, review. Like, this seems, like, very natural when you interact with other humans and collaborators, and this is also where, you know, how we're thinking about Codex as an agentic-like entity that is really meant to just supercharge you when you're trying to achieve things.
- GBGreg Brockman
Mm. So let's talk about some of the ways of using it, like agents.md. Do you want to explain that?
- TSThibault Sottiaux
Yeah. Agents.md is a set of instructions that you can give to Codex that lives alongside your code so that Codex has a little bit more context about how to best navigate-
- GBGreg Brockman
Mm
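The AGENTS.md file Thibault mentions is plain markdown that lives in the repository. The contents below are a hypothetical illustration of the kind of guidance such a file can carry, not an excerpt from a real project:

```markdown
# AGENTS.md (hypothetical example)

## Project layout
- `src/`: application code
- `tests/`: pytest suite

## Conventions for agents
- Run `make lint && make test` before proposing a change.
- Prefer small, focused diffs; do not reformat unrelated files.
- New modules need a docstring and at least one test.
```

Because it sits alongside the code, the same instructions travel with every checkout, so the agent picks up project-specific context without the user restating it each session.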
- 22:00 – 27:45
Internal tools like 10x and Codex code review
- GBGreg Brockman
for the tenth time, has it really benefited from the nine times that it went and solved a hard problem for you? And so I think that we have real research to do to think about: How do you have memory? How do you have an agent that really just goes and explores your codebase and really deeply understands it, um, and then is able to leverage that knowledge? And so this is one of the examples, and there are many, where we see great fruit on the horizon for further research progress.
- AMAndrew Mayne
Mm. It's a very competitive landscape now. There was a point where, you know, OpenAI kind of came out of nowhere for a lot of people, and all of a sudden, there was GPT-3, then there was GPT-4, and then, uh, I think Anthropic's building great models, and now Gemini, you know, from Google has gotten really good. How do you guys see the landscape? How do you see your placement there?
- GBGreg Brockman
I mean, I think that there's a lot of progress to be had. I, I, I, I focus a little less on the competition and a little more on the potential.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? Because, you know, we started OpenAI in 2015 thinking that AGI is going to be possible, maybe sooner than people think, and we just want to be a positive force in how it plays out, right?
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And really thinking about what that means, trying to connect that to practical execution, has been a lot of the task. And so as we started to figure out how to build capable models that are actually useful, right, that can actually help people, actually bringing that to people is this really critical thing. And you can look at choices that we've made along the way. For example, releasing ChatGPT and making ChatGPT's free tier available-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... widely, right? That's something that we do because of our mission, because we really think about we want AI to be available and accessible and benefit everyone. And so in my view, the most important thing is to continue on that exponential progress and really think about how to bring it to people in a positive and useful way. Um, so I really look at where we're at right now is that these models, like, there's the GPT-4 class of pre-trained models, there's reinforcement learning on top of it to make it just much more reliable and smart, right? It's like, you think about it: if you've just sort of read the internet, right? You've just observed a bunch of, you know, sort of human thought, and you're trying to write some code for the first time, you're probably gonna have a bad time of it.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
But if you've had the ability to actually try to solve some hard code problems, you have a Python interpreter, you have a, you know, uh, access to the kinds of tools that humans do, then you're going to be able to become much more robust and, and refined. Um, so we now have these pieces working together in concert, but we've got to keep pushing them-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
-to the next level. It's very clear that things like being able to refactor massive codebases, like no one's cracked that just yet. There's no fundamental reason we can't, and I think the moment you get that, um— and I think refactoring code is the, is one of the killer use cases for enterprise, right? It's, you know, if, if, if you could bring down the cost of code migrations by, you know, two x-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... I think you'll end up with ten x more of them happening. Think about the number of systems that are stuck in COBOL-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... um, and there's no COBOL programmers being trained, right?
- AMAndrew Mayne
Right.
- GBGreg Brockman
It's just like, it's, it's strictly, like, you know, building liability for, for the world to have this dependency. Like, the only way through is by building systems that can actually tackle that. So I just think it's a massive open space. The exponential continues, and we, we need to, to stay on that.
- AMAndrew Mayne
My favorite thing today that happened was there was a tweet from OpenAI, which was showing people how to use the CLI to switch from the completions API to the responses API- [laughing]
- GBGreg Brockman
[laughing]
- AMAndrew Mayne
-because it's-
- TSThibault Sottiaux
That's a great use. I expect to see more of that.
- AMAndrew Mayne
Yeah.
- TSThibault Sottiaux
You know, where you have special instructions given to Codex in order to go do, like, refactorings reliably, uh, and then you just set it off, and it does it for you. That's, like, a wonderful thing. Migrations are some [laughing] of the worst things. I mean, nobody wants to do migrations.
- AMAndrew Mayne
Right.
- TSThibault Sottiaux
Nobody wants to, like, change from, like, one library to the other, uh, and then make sure that everything still works. You know, if we can automate, like, most of that, that's going to be, like, a very beautiful contribution.
- AMAndrew Mayne
Yeah.
- GBGreg Brockman
I, I think there's a lot of other ground as well. Um, I think that security patching is a-
- AMAndrew Mayne
Mm-hmm
- 27:45 – 33:15
Why GPT-5 Codex can run for hours on complex tasks
- TSThibault Sottiaux
Um, and we released this internally first at OpenAI. Uh, it was quite successful, and, um, people were upset actually when it broke because they felt like they were losing that safety net, and it accelerated teams, and including the Codex team, tremendously. Um, the night before we released the IDE extension, uh, one of the top engineers on my team was, like, cranking out twenty-five PRs, and we were finding quite a few bugs automatically. Codex was finding quite a few bugs and, you know, we were able to put out an IDE extension that was almost bug-free the next day. Um, so the velocity there is incredible.
- GBGreg Brockman
It is very interesting that for the code review tool in particular, people were very nervous about having this enabled because I think our previous experience with every auto code review experiment that we've tried is that it's just noise.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? You just get an email from some bot, and you're like, "Ugh, another one of those things," and you ignore it. And I think we've had kind of the opposite finding from, from where we are now, and it really shows you when the capability is below threshold, it just feels like this thing is, like, totally net negative. I don't wanna hear about it. I don't wanna see it. Once you kind of crack above some threshold of utility, suddenly people want it, right? And get very upset if it gets taken away. And I think also our observation is if something kind of works in AI right now, one year from now, it'll be incredibly reliable, incredibly mission critical, and I think that that's where we're going with code review.
- TSThibault Sottiaux
Part of the interesting things there with code review as well is, like, bringing humans along and really have this be a collaborator, including in review. And one thing we talk a lot about is, like, how can we raise those findings so that you are actually excited to read, uh, this finding, and you might even learn something, uh, including, you know, when it's wrong. Like, you know, you can actually understand-
- GBGreg Brockman
Mm-hmm
- TSThibault Sottiaux
... its reasoning. Like, most of the time, like, actually, more than ninety percent of the time, it's right, uh, and you often learn something-
- GBGreg Brockman
Mm
- TSThibault Sottiaux
... uh, as the person who authored the code or someone who is helping review the code.
- GBGreg Brockman
Yeah, just, you know, circling back to what we were saying earlier about the rate of progress and sometimes stepping back and thinking about how things were earlier. Like, I remember for GPT-3 and for GPT-4, really focusing on the doubling-down problem-... like, do you remember if the AI would say something wrong, and you'd point out the mistake?
- AMAndrew Mayne
Oh, it would, yeah, argue with you.
- GBGreg Brockman
Oh, yeah.
- AMAndrew Mayne
Yeah [laughing] .
- GBGreg Brockman
And it would try to, like, convince you-
- AMAndrew Mayne
Yeah, yeah
- GBGreg Brockman
... that it was right. Like, we're so far past that-
- AMAndrew Mayne
Yeah
- GBGreg Brockman
... being the core problem. Like, I'm sure it happens in some-
- AMAndrew Mayne
Yeah
- GBGreg Brockman
... obscure edge cases, just like it does for humans, but it's, it's really amazing to see that we're at a level where even when it's not quite zeroed in on the right thing, it's highlighting stuff that matters. It has, like, pretty reasonable thoughts, and I, the-- yeah, I always walk away from these code reviews thinking, like, "Huh, okay, yeah, that, that's a good point. I should be thinking about that."
- AMAndrew Mayne
We've now just launched GPT-5, and as of the recording of this podcast, we now have GPT-5 Codex.
- TSThibault Sottiaux
Which we're tremendously excited about.
- GBGreg Brockman
Very excited.
- AMAndrew Mayne
Why should I be excited about this, gentlemen? Sell me on this.
- TSThibault Sottiaux
[laughing] So GPT-5 Codex is a version of GPT-5 that we have optimized for Codex, and we talked about the harness, and so it's optimized for the harness. We really consider it to be like one agent, where you couple the model very closely to the set of tools, and it's able to be even more reliable. One of the things that this model exhibits is an ability to go on for, like, much longer, uh, and to really have that grit that you need on, like, these complex refactoring tasks. But at the same time, for simple tasks, this actually comes way faster at you and is able to, uh, reply without much thinking. And so it's like this great collaborator where you can, you know, ask questions about your code, find where, you know, this bit of, uh, piece of code is that you need to change or, like, better understand, plan. But at the same time, once you let it go onto something, it will work for, like, a very, very long period of time. We've seen it, we've seen it work internally up to seven hours for, like, very complex refactorings. We haven't seen other models do that before. Uh, and we also have really worked tremendously on, like, code quality. Uh, and it's just really optimized for, you know, what people are using, uh, GPT-5 within Codex for.
- AMAndrew Mayne
So when you talk about working longer, and you say it worked up to seven hours, you're not just talking about it keeps putting things back into context, that it's actually making decisions, deciding what's important, and moving forward, or-
- TSThibault Sottiaux
Yes. So imagine like a really tricky refactoring. Um-
- AMAndrew Mayne
Right
- TSThibault Sottiaux
... we've all had to deal with those, uh, where, you know, you've decided that your codebase is unmaintainable. You need to make a couple of changes in order to move forward. Um, so you make a plan, and then you let the model go. Uh, you let Codex, GPT-5 Codex, go at it, and it will just, like, work its way through all of the issues, get the test to run, uh, get the test to pass, and just completely finish the refactoring. This is, like, one of the things that we've seen it do, like, for up to seven hours.
- AMAndrew Mayne
Wow!
- 33:15 – 38:50
The rise of refactoring and enterprise use cases
- GBGreg Brockman
and certainly some of the... There are some fun parts too, right? Like, you know, I think that really thinking about the architecture of things, it's a great partner, but I get to choose how I spend my time, right? And I get to think about how many of these agents do you want running on what task, how do I break down things? And so I view it as increasing the opportunity surface for programmers. And, y- you know, I'm a, I'm an Emacs user through and through. Uh, you know, I started using, uh, you know, VS Code and Cursor and, uh, Windsurf and these things, um, partly to, to just, just try things out, but partly because I like the diversity of, of different tools, but it's really hard to get me out of my terminal.
- AMAndrew Mayne
Wow.
- GBGreg Brockman
Um-
- AMAndrew Mayne
Wow.
- GBGreg Brockman
And so... But, you know, I, I have found that we're now above threshold, where I really find myself missing the like, I'm, like, doing some refactor. I'm like, "Why am I typing this thing?" Right? Like, you know, or it's like you're trying to remember exactly the syntax for a specific thing or, like, trying to, to, you know, sort of do these, these very mechanical things. I'm like, "I just want to, like, have an intern go do the thing," but I have that now in my terminal. And, uh, and I think it's really amazing that we're at the point that you have this core intelligence, um, and that you get to pick and choose when and how to use it.
- AMAndrew Mayne
Please add Whisper to the, uh, you know, the extension too, 'cause now I just love to talk to the model and tell it to do things.
- GBGreg Brockman
Yeah. Yeah, you should be able to video chat with your model. Like, I think we're heading towards-
- AMAndrew Mayne
Yeah
- GBGreg Brockman
... a real collaborator, a real coworker.
- AMAndrew Mayne
Well, yeah, let's talk about the future. Where do you see this headed? Where do you see the... What's exciting about the agentic future? How are we gonna be using these systems?
- TSThibault Sottiaux
We, we have strong conviction that the way that this is headed is large populations of agents, uh, somewhere in the cloud, uh, that we as humanity, as, you know, people, teams, organizations like supervise, uh, and steer-
- AMAndrew Mayne
Mm
- TSThibault Sottiaux
... in order to produce, like, great economic value. So if we're going, [chuckles] like, you know, a couple of years from now, this is what it's going to look like. It's millions of agents working in, you know, our and, like, companies' data centers in order to do useful work. Like, now the question is, like, how do we get there gradually? Uh, and how do we get to experiment on the right form factor and the right, uh, interaction patterns here? One of the things that is incredibly, uh, important to solve is the safety, security, alignment, uh, of all of this so that a- agents can perform useful work, but in a safe way, uh, and you get to always stay in control, uh, as, like, the operator-
- AMAndrew Mayne
Mm
- TSThibault Sottiaux
... as the human. Uh, and this is why, for Codex CLI, by default the agent operates in a sandbox, um, and isn't able to edit files, like, randomly on your computer. Uh, and we're going to be continuing to invest a lot in making, you know, basically the environment safe, invest in, like, understanding when humans need to steer, when humans need to approve certain actions, giving more and more permissions so that your agent has its own set of permissions that, you know, you allow it to use, and then maybe escalate permissions when you allow it to do, like, exceptionally, you know, more risky things. Um, and so figuring out this, this entire system and then making it multi-agent and steerable by individuals, teams, organizations, and then aligning, you know, that-... with the whole intent of organizations, this is where it's headed for me. Um, it's a bit nebulous, but it's also very exciting, I think.
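The escalation pattern described here — a default-deny sandbox whose standing permissions the human operator can extend action by action — can be sketched in a few lines. Everything below (the names `ALLOWED_BY_DEFAULT`, `gate`, and the action strings) is a hypothetical illustration of the pattern, not the Codex CLI's actual API:

```python
# Hypothetical sketch of a default-deny permission gate for agent actions.
# These names are illustrative only, not the Codex CLI's real interface.
ALLOWED_BY_DEFAULT = {"read_file", "run_tests"}  # the sandbox's standing permissions

def gate(action: str, escalated: set[str]) -> str:
    """Decide how the harness should treat a requested action."""
    if action in ALLOWED_BY_DEFAULT or action in escalated:
        return "allow"       # within standing or explicitly granted permissions
    return "ask_human"       # default-deny: pause and request approval

# Reading a file is sandbox-safe; deleting a branch needs a human, unless
# the operator has explicitly escalated that permission for this session.
print(gate("read_file", set()))                  # allow
print(gate("delete_branch", set()))              # ask_human
print(gate("delete_branch", {"delete_branch"}))  # allow
```

The point of the default-deny shape is exactly what's described above: the agent can do useful work freely inside the sandbox, and anything riskier pauses for the human, who stays in the driver's seat.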
- GBGreg Brockman
Yep. Yeah, I think, I think it's exactly right. I mean, I think at a, at a, you know, zoomed-in level, there's a bunch of technical problems that need to be solved. Like Thibault is kind of getting at scalable oversight, right? How do you, as a human, manage agents that are out there writing lots of code, right? You probably don't want to read every line of code. Probably right now, most people do not read all the code that comes out of these systems. But how do you-
- AMAndrew Mayne
Of course, I do. [laughing]
- TSThibault Sottiaux
[laughing]
- GBGreg Brockman
Exactly. But how do you maintain trust, right?
- AMAndrew Mayne
Yeah.
- GBGreg Brockman
How do you make sure that that AI is producing things that are actually correct? And I think that there are technical approaches, and we've been thinking about the- these kinds of things since probably twenty seventeen is the first time we published some strategies for how you can have humans or weaker AIs start to supervise even stronger AIs-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... and kind of bootstrap your way to, uh, to making sure that as they're doing very capable, important tasks, that we can maintain trust and oversight and really be in the driver's seat. Um, so that's a very important problem, and it really is exemplified in a very practical way through thinking about more and more capable coding agents. Um, but I think there's also other dimensions that are very easy to miss because I think at each level of AI capability, people kind of overfit to what they see and think, "Oh, this is AI. This is what AI is going to be." But the thing we haven't quite seen yet is AIs solving really hard, novel problems.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? Right now, you think of it as, okay, like I need to do my refactor. You at least have a shape of like what that thing would be, right? It's like it'll do a lot of the work for you, save a lot of time. But what about solving problems that are fundamentally unsolvable, um, through any other means? And I think of this not necessarily, you know, just in the coding domain, but think of it in medicine, right?
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
You know, producing new drugs. Think of it in material science, producing new materials that have novel properties. And I think that there's a lot of new sort of, uh, uh, capability coming down the pike that is going to unlock these kinds of applications. And so, you know, for me, one big milestone is the first time that you have an artifact produced by an AI that is extremely valuable and interesting unto itself, not because it was produced by an AI-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... not because it was cheaper to produce, but because it's simply, like, a breakthrough. It's simply something that is just novel and, um, that AI, you don't even necessarily need it to be autonomously created by the AI, but just in partnership with humans, and that the AI is a critical dependency. And so I think we're starting to see signs of life on this kind of thing. We're seeing it in,
- 38:50 – 45:00
The future of agentic software engineers
- GBGreg Brockman
in life sciences, where hu- humans ask, uh, you know, human experimenters ask o-three for five ideas of experimental, uh, you know, uh, protocols to run. They try out the five of them. Four of them don't work, but one of them does. And the kind of feedback that we've been getting, and this was back in the o-three days, is that the results are kind of at the level of what you'd expect from, like, a third- or fourth-year PhD student, which is crazy!
- AMAndrew Mayne
Yeah. Yeah.
- GBGreg Brockman
Crazy. And that was o-three, right? GPT-5 and GPT-5 Pro, we're seeing totally different results there. There, we're seeing research scientists saying: "Okay, yeah, this is doing real novel stuff." Um, and sometimes it's-- again, it's not just on its own solving-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... these grand theories, but it's together in partnership, being able to just stretch far beyond where, where a human unassisted could go. And that, to me, is like one of the critical things that we need to continue to push on and get right.
- AMAndrew Mayne
One of the challenges I have when talking to people kind of about the future, and I want to hear you guys talk about this, is that people tend to imagine the future as kind of the present, but with like shiny clothes and robots.
- GBGreg Brockman
Yeah.
- AMAndrew Mayne
And, and they think about like, well, then what happens when robots do all the code and all that? And you brought up a fact that like there are the things you like to do and the things you don't care to do. Where are we in twenty thirty? What does it look like? It was five years ago, GPT-3. Now, five years from now.
- TSThibault Sottiaux
Twenty thirty. [laughing] Such a-- we didn't, we didn't have these tools six months ago, uh, so it's hard to picture exactly what this is going to look like, uh, you know, five years from now. But one thing that is-
- AMAndrew Mayne
I'm gonna pop out of the bushes five years from now with this podcast and be like, "You said this." [laughing]
- GBGreg Brockman
Well, your agent will do it for you.
- AMAndrew Mayne
Yeah, the AI, [laughing] it's gonna.
- TSThibault Sottiaux
So one thing that's important is, like, un- the things that are... the pieces of code that are critical infrastructure and underpinning society, we need to, like, continue to understand and have the tools to understand. Um, and this is why also we, we were thinking about code review, is like-- and, and code review should help you, you know, understand that code and be this teammate that, you know, helps you de- dive into the code written by someone else, potentially with help from AI.
- GBGreg Brockman
And, and I would actually argue that we already have a problem of there's lots of code out there that is not necessarily secure.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? This happens all the time. I remember, like, Heartbleed back-
- AMAndrew Mayne
Mm
- GBGreg Brockman
... I guess it's almost twelve years ago or something. Um, critical vulnerability in a key piece of software used across the internet, and you realize that that's not singular, right? That there's lots of vulnerabilities out there that no one has found yet.
- AMAndrew Mayne
All these, all these packages and stuff from npm and all these PyPI packages that are just sitting there that people put exploits into.
- GBGreg Brockman
And the way that it's always worked is that there's a cat-and-mouse game between attackers getting more sophisticated, defenders getting better, and I think that with AI, you're like: Well, maybe it's going to... like, which side will have advantage the most? Um, maybe it'll just sort of accelerate this, this, this cat and mouse. But I think that there's some hope that actually you can unlock fundamental new capabilities through AI, for example, formal verification-
- AMAndrew Mayne
Mm
- GBGreg Brockman
... that are sort of an end game for defense.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And I think that that, to me, is very exciting, is thinking about not just how do you continue this like, you know, sort of, uh, never-ending rat race, but how do you actually end up with increased stability, increased understandability? And I think that there's other opportunities like that for us to really understand our systems in a way that right now, it's almost, you know, we're, we're sort of at the edge of, of human, human understanding of the, of the software- traditional software systems that have been built.
- TSThibault Sottiaux
One of the reasons we built Codex is to improve-... um, the infrastructure and the code out there in the world, not necessarily to increase the amount of code in the world. And so this is like a very important point where it's also like helping find bugs, helping refactor, helping find more elegant, more performant, uh, performant implementations that achieve the same thing or actually are more general, but not necessarily ending up with like a hundred million lines of code that, you know, you don't understand. One thing that I'm really excited about is like how Codex can help teams, individuals, you know, just write better code, be better software engineers, and end up with simpler systems that are actually doing more things for us.
- GBGreg Brockman
I think part of the twenty thirty outlook is we will be in a world of material abundance, right? I think that AI is going to make it much easier than you could almost imagine to create anything you want.
- TSThibault Sottiaux
Mm-hmm.
- GBGreg Brockman
Right? And that will probably be true in the physical world, in addition to the digital world, in ways that are hard to predict. But I think it'll be a world of absolute compute scarcity.
- TSThibault Sottiaux
Mm.
- GBGreg Brockman
And we've seen a little bit of what this is like within OpenAI, right? The, the, the way that different research projects fight over compute, or that the success of the research pro- program is determined by the compute allocation, is something that is I-- you know, it's, it's, uh, it's hard to overstate, right? And I think that we're going to be in a world where your ability to produce and create whatever you imagine will be limited, partly by your imagination, but partly by the compute power behind it. And so one thing we think about a lot is how do we increase the supply of compute in the world, right? We want to increase the intelligence, but also the availability of that intelligence. And fundamentally, it is a physical infrastructure problem, not just a software problem.
- 45:00 – 51:30
Safety, oversight, and aligning agents with human intent
- GBGreg Brockman
and that, that will just continue. You know, on the compute scarcity point, one thing that I find very sort of suggestive is thinking about... You know, right now people talk about building big cl-- you know, big fleets of a million GPUs, of millions of GPUs-
- AMAndrew Mayne
Mm.
- GBGreg Brockman
-that level of, of GPUs. But if we, we reach a point, which is probably not that far in the future, where you're gonna want agents running on your behalf constantly.
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
Right? Like, it's reasonable for every person to want a dedicated GPU just for them running their agent. And so now you're talking almost ten billion-
- AMAndrew Mayne
Yeah
- GBGreg Brockman
... GPUs that we need. We're orders of magnitude off of that. And so I think that part of our job is to figure out how to supply that compute, how to make it exist in the world, um, but how to make the most out of the, like, very limited compute that exists right now. And, uh, you know, that's an efficiency problem. It's also an increase-that-intelligence problem. Um, but yeah, I think it's, it's very clear that bringing this to fruition is going to be just like a lot of work and a lot of building.
- TSThibault Sottiaux
So one, one of the interesting things about agents and the relationship to GPUs and them acting is that, uh, it is very beneficial to have a GPU also close to you. Uh, because, you know, when it's acting and doing two hundred tool calls over the span of, like, a couple of minutes, it's always- it's doing this back and forth between the GPU and, like, your laptop and executing those tool calls, getting that context back, and then continuing to reflect. And so bringing GPUs, like, to people, you know, bringing GPUs close to people, uh, is, you know, a great contribution there as well, and, you know, really benefits because it reduces the latency tremendously of the entire interaction and the entire rollout.
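The latency argument here is simple arithmetic: with a few hundred sequential tool-call round trips, network RTT alone becomes a meaningful share of wall-clock time. A rough sketch — the 200-call count comes from the conversation, while the RTT values and the `network_wait_seconds` helper are illustrative assumptions:

```python
# Illustrative arithmetic: sequential tool-call round trips between a remote
# GPU and the local machine. The RTT figures below are assumptions.
TOOL_CALLS = 200  # roughly the count mentioned for a few-minute agent run

def network_wait_seconds(rtt_ms: float) -> float:
    """Wall-clock time spent purely on network round trips."""
    return TOOL_CALLS * rtt_ms / 1000

print(network_wait_seconds(150))  # 30.0 -- a far-away region: 30 s of pure RTT
print(network_wait_seconds(10))   # 2.0  -- a nearby GPU: 2 s
```

Since the calls are sequential, none of that wait overlaps, which is why moving the GPU closer shortens the entire rollout rather than just individual calls.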
- AMAndrew Mayne
Gentlemen, we get the question that comes up periodically is about the future, about labor, about all of this. Um, number one, learn to code, not learn to code?
- TSThibault Sottiaux
I think it's a wonderful time to learn to code.
- GBGreg Brockman
I think, I think, uh, yeah, I agree. Definitely learn to code, but learn to use AI.
- AMAndrew Mayne
Yeah.
- GBGreg Brockman
That, to me, is the most important thing.
- TSThibault Sottiaux
There's something tremendously enjoyable about using Codex to learn about a new programming language. Uh, a lot of people on my team were new to Rust, uh, and we decided to build a core, uh, the core harness in Rust. Um, and it's been, it's been really great seeing like how quickly they can pick up a new language, uh, just by using Codex, asking questions, exploring a codebase that they don't know, and still achieving great results. Uh, obviously, we also have very experienced Rust engineers to continue to mentor and, you know, make sure that we have a high bar. Um, but it's just a really fun time to learn to code.
- GBGreg Brockman
I remember the way that I learned to program was by W3Schools tutorials, PHP, JavaScript-
- AMAndrew Mayne
Mm-hmm
- GBGreg Brockman
... HTML, CSS. And I remember when I was building some of my first applications, and I was trying to figure out how to... I didn't even know the word for it, serialize data, right?
- AMAndrew Mayne
Mm-hmm.
- GBGreg Brockman
And I came up with some sort of, like, you know, structure that had some special, special sequence of characters that were serving as the delimiter. And what would happen if you actually had that sequence of characters in your data? Like, let's not talk about that.
- AMAndrew Mayne
[chuckles]
- GBGreg Brockman
So that's why I had to have a very special sequence. And this is the kind of thing where you're not gonna have a tutorial that will flag this kind of issue for you, but will Codex in its code review be like, "Hey, there's JSON serialization. Just use this library?" Absolutely. And so I think that the potential to accelerate, make it so much easier to code, so you don't have to sort of reinvent all these wheels, and that it can ask, ask the question for you or answer the question for you, that you don't even know that you needed to ask. Like, that to me, is why I think it's like a better time than ever to, to, to build.
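The failure mode in this story — a homemade delimiter format silently corrupting any data that happens to contain the delimiter — and the JSON fix are easy to demonstrate. A minimal Python sketch (the example values are made up):

```python
import json

# A record whose second value contains the "special" delimiter character.
records = ["alice", "bob|smith"]

# Naive homemade serialization: join fields with a delimiter.
naive = "|".join(records)
# Round-tripping splits the second record in two -- silent data corruption.
print(naive.split("|"))                 # ['alice', 'bob', 'smith']

# A JSON library escapes whatever the data contains, so the round trip is exact.
print(json.loads(json.dumps(records)))  # ['alice', 'bob|smith']
```

This is exactly the class of issue a tutorial won't flag but an automated reviewer can: the naive version works on every test input until real data contains the delimiter.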
- AMAndrew Mayne
I've learned a lot just by looking how it solves a problem. Found new libraries, found new methods and stuff. That's often... I like to sometimes give it like a crazy task. Like, how would you create your own language model with only a thousand lines of code, and what would you try to do? And sometimes it might fail, but then you go look at the direction it tried to do it, and you go, "Oh, I didn't even know that was a thing."
- TSThibault Sottiaux
... One of the things as well is that, you know, the people who are most successful coding with AI also have really studied, you know, fundamentals of software engineering-
- AMAndrew Mayne
Mm-hmm.
- TSThibault Sottiaux
-and put the right, uh, framework in place, right architecture, have thought about how to structure their codebase, and then are, you know, getting help from AI, but still, you know, following that general blueprint. Uh, and that, you know, really accelerates you and allows you to go, like, much further than, you know, you would be able to go if you actually didn't understand the code that's being written.
- AMAndrew Mayne
Since you've launched this, since you've made this available, GPT-5, since you've been able to deploy things with Codex, what have you seen as usage rates?
- TSThibault Sottiaux
Yeah, usage has been exploding. Uh, so we've seen more than ten x growth in usage across users, and the users that were using it already are using it much more as well. So we're seeing more sophisticated usage, um, and people are using it for longer periods of time as well. Uh, we have now included it in the Plus and Pro plans, uh, with generous limits, and that's contributed a lot-
- AMAndrew Mayne
Mm.
- TSThibault Sottiaux
-to being successful.
- GBGreg Brockman
Yeah, I think that the vibes, I think, also have really started to shift as people, I think, are starting to realize how you need to use GPT-5, right? I think it's a little bit of a different flavor. I think that we have our own spin on the right harnesses and tools and the ecosystem of how these things fit together, and I think that once it clicks for people, then they just go so fast.
Episode duration: 50:39
Transcript of episode OXOypK7_90c