
State of the AI Industry — the OpenAI Podcast Ep. 12

OpenAI CFO Sarah Friar and Khosla Ventures founder Vinod Khosla argue the greatest challenges in AI right now are keeping up with demand and making sure more people get the benefit. They unpack what's driving big investments in compute and why this moment is different from other technology cycles — with meaningful advances in health, agents, and robotics still ahead.

Chapters:
00:00:00 — What’s the AI story of 2026?
00:07:28 — AI in healthcare
00:12:01 — Scaling compute to match revenue
00:18:05 — Difference between now and dot-com bubble
00:27:41 — Ads in ChatGPT
00:30:05 — Will consumers have more than one AI subscription?
00:36:41 — Winning in enterprise
00:39:44 — How can startups succeed?
00:44:05 — Robotics and beyond

Andrew Mayne (host) · Sarah Friar (guest) · Vinod Khosla (guest)
Jan 19, 2026 · 49m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–7:28

    What’s the AI story of 2026?

    1. AM

      Hello, I'm Andrew Mayne, and this is the OpenAI Podcast. Today, our guests are Sarah Friar, CFO of OpenAI, and legendary investor, Vinod Khosla of Khosla Ventures. In this discussion, we're going to talk about the state of the AI ecosystem, whether or not we're in a bubble, and how startups and investors can succeed as AI progresses.

    2. SF

      Unlike something like Netflix, where they're running so many hours in the day, I think of it much more like infrastructure, like electricity.

    3. VK

      Demand is limited, not by anything other than availability of compute today. I think the conversation we need to have is: what will people do?

    4. AM

      2025 was about agents and vibe coding. Now it's 2026. What's the story of 2026?

    5. VK

      I think we matured in vibe coding in 2025. I don't think we've matured in agents. So agents, especially multi-agentic systems, will mature to the point of having real visible impact. Whether you're an enterprise and you have a multi-agent system doing full tasks, like running an ERP system for you, um, you know, doing all the reconciliation every day, accruals every day, tracking contracts every day. I think that's on the enterprise side. But today, uh, on the consumer side, you know, it's still a hassle to plan a trip. That's a multi-agentic thing that looks across a lot of different things, from your food preferences, to the restaurant reservation, to airline schedules, to your personal calendar. Uh, those will start to mature, I think, a year from now. Um, so I'm pretty excited about that. I think models in robotics and real-world models that go beyond, well beyond robotics, like general intuition, uh, will all start to happen in the next year. So I think those are areas to look for. There are the usual functions, like memory in LLMs, um, continual learning in LLMs, um, reduction of the impact of hallucinations. Those are all areas; I could go on. There's half a dozen areas in which AI doesn't do as well today that will start to be addressed.

    6. SF

      Yeah. And I think at its baseline, what Vinod is saying is '26 is the beginning of closing this capability gap. So what we know is we've handed people massive intelligence, right? We've handed them the keys to the Ferrari, but they are only learning how to take it out on the road for the first time. Um, we need to give consumers easier and easier ways to go from ChatGPT as just a chatbot, call and response. Most people use it today just to ask questions. But how do we take it towards being a true task worker that books that trip for them, or helps them get a second opinion on what they just heard from their doctor, or enables them to create a menu for their diabetic child, right? How do we help them really move from simple questions into actual outcomes that make my life better? And then on the enterprise side, it's that same continuum. How do we close the capability gap, right? One of the things we know from our AI-and-the-enterprise report that our chief economist put out at the end of last year is that for companies on the frontier versus even the median corporation, the median number of messages is about six X. That's six X the usage from a company that's already on the frontier, and we know that frontier isn't even pushed to its max. So for us, it's this focus of: how do we help consumers move along that continuum to true agentic task working? And then for enterprises, how do we create a much more sophisticated, vertically specialized outcome that allows them to go from maybe a very simple ChatGPT implementation the whole way to something that's transforming the most important part of their business? For a healthcare provider, it might be their drug discovery process. For a hospital, it might be the time to admit a patient and get that patient back into the community. For, um, a really large retailer, it might be just larger basket sizes, higher conversion rates, and much happier customers. So it's the basics of closing that capability gap.

    7. VK

      So I, I, I might add one other perspective. We've talked about the number of areas in which the technology will advance-

    8. SF

      Mm-hmm

    9. VK

      ... and capability will advance. I would venture to guess today, of the people using AI, whether it's personal or enterprise, some single-digit percentage are even using thirty percent of the capability of the AI.

    10. SF

      Yeah.

    11. VK

      So this m- percentage of people who are using thirty percent or fifty percent, let alone eighty percent of the AI's capabilities, will keep increasing. I think that's a ten-year journey before people learn to use AI.

    12. AM

      I've seen some people, kind of pundits, confuse adoption curves with capability curves.

    13. VK

      Yes.

    14. SF

      Yes.

    15. AM

      And that's, that's come up where you've seen people-

    16. VK

      So that's the point I'm making.

    17. SF

      And it's a force multiplier, because today we have over eight hundred million consumers using ChatGPT weekly, but, you know, that number should be in the billions.

    18. AM

      Yeah.

    19. SF

      And then what percentage of its capability are they using? It's like we've just turned electricity on in the home. We've wired up the home, and they've turned on the lights, but they have no idea that they could now heat their home, they could cook, they could curl their hair, right? There's so many things you now can do.

    20. AM

      An analogy I've used is that email didn't really get much better between 1990 and the year 2000. Neither did mobile, but usage went way up, and the problem wasn't like, "Well, we need better email, we need better mobile." It's that people just needed to learn all the things they could use it for.

    21. SF

      Right.... Yeah. And in a more sophisticated way, like mobile is always one that's interesting to me, because when mobile took off, people just took their desktop websites and turned them into mobile, and they were really hard to scroll, but I guess you at least had them in your pocket. But then you realized you had a GPS, so now you could have Uber, and now you could do things with location, or you had a camera at your fingertips. Okay, so now, yeah, I can take photographs of all my friends, but I can also snap, you know, a check and deposit it into my bank account, although we should fix the whole paper check thing, but that's a-

    22. AM

      Yeah

    23. SF

      -an aside.

    24. AM

      It still seems like a-

    25. SF

      It really changed. [chuckles]

    26. AM

      But I can just take a photo of this, and now I get money in my bank account.

    27. SF

      Yeah. But, you know, those, those-- that all existed in, in-

    28. AM

      Mm

    29. SF

      -the minute mobile was available to us, but just the, you know, the, the ability for human ingenuity to come to work on it. So I think you're right. I don't even know if we need more intelligence than we have today to vastly increase outcomes, but of course, the models are going to keep getting more intelligent as well.

    30. AM

      You mentioned health, and that's one of the really kind of high-stakes things we think about when it comes to just probably the most important thing. And it's kind of fascinating to think about the gist-- you know, a few years ago, we got ChatGPT, and we're using it for very simple applications, and now we're trusting it with HIPAA-compliant data. Do you look at that as sort of a marker of how fast or how well things have been accelerating? Are there other ones like that you think about to say, "Okay, now we know we're at some new level?"

  2. 7:28–12:01

    AI in healthcare

    1. VK

      Health is one of those areas I've long believed it'll revolutionize, uh, by making expertise a commodity in all areas of health. Uh, the problem with health is regulatory.

    2. AM

      Mm-hmm.

    3. VK

      So first, there are constraints on what AI can do. An AI can't legally write a prescription, even if it's better than human beings at writing a prescription. Um, that is not only the FDA; it actually goes beyond the FDA: the American Medical Association institutionally controls that function. So there will be incumbent resistance in a lot of areas. We can talk about it if you like. Uh, yeah, but diagnosing is still a constraint because, uh, the FDA controls that. There's no AI approved as a medical device yet. So that all-- fortunately, this administration is doing a very good job of moving quickly and taking the appropriate level of risk, so I'm pretty pleased to see what's happening there.

    4. SF

      On the health front, we see in our data, two hundred and thirty million people every week ask ChatGPT a health question.

    5. VK

      Yeah.

    6. SF

      Sixty-six percent of US physicians say they use ChatGPT in their daily work. I'll tell you at a personal level, my brother is an HDU doctor in the UK, so his job is, right, you hit the ER, they don't know how to triage you, so they send you to him. You kind of don't want to show up to him. He's expected to have-

    7. AM

      He's very good, though. [chuckles]

    8. SF

      He's very good at what he does-

    9. AM

      Okay

    10. SF

      ... but it means you're not in good shape. But he's expected to have an almost encyclopedic knowledge of every disease that ever existed. So I always give the example, he works in Aberdeen, in Scotland. If you showed up with malaria, he will not think of that. That is not in his pattern recognition, and yet that could have happened. I don't know, you went on vacation somewhere, you got bitten by a mosquito, boom, you're showing up in an ER room in Aberdeen. What ChatGPT can do or what the model can do is really act as a great augmentation to the doctor, which is why I think sixty-six percent of them are using it, and that number is only growing, right? You know, it's probably already, um, much higher. And so I think it's just a great example of where, with something like health, we're getting the benefit of our doctors being able to have always the latest research in front of them, always the latest known, um, interactions, say, between someone's drug regime and what they're living through and experiencing as individuals. But it also puts some independence back into consumers' hands. So now I get the opportunity to, um, ahead of time, do some research on what my symptoms might be saying, so I can have a much more educated conversation with my doctor. It allows me to maybe get a second opinion or know that I want to go ask for a second opinion. Um, it also-- well, we go very fast to, you know, these extreme places, but just even things like, "Hey, I've got twenty minutes a day to exercise. I know I'm suffering from type one diabetes. What, what could I do in twenty minutes?"

    11. VK

      Mm.

    12. SF

      Or, "My daughter has a, a, an interesting, um, issue with the food she eats." And so it used to be a super just frustrating thing to go to a restaurant even because we'd have to almost ask the server so many questions, and now we can photograph a menu. Chat suggests what are likely the best dishes for her to order, and then we can have a bit more of a, of a terser conversation, but a bit more productive on what's going to work. And it has just changed how we think about just eating. Takes it away from all about the food to why we're going out for dinner together. And so I think there are all these just examples of something like health. It's already happening, and it's going to keep getting better and better. And then to Vinod's point, I think regulatory environment is going to have to catch up.

    13. AM

      It's-- no matter what kind of system you're under, the rate at which the cost of medical care increases is exceeding GDP growth in every country. And it seems like we needed AI, and we needed it now, and, you know, it can be helpful. And as you pointed out, it's the first time the cost of medical intelligence has dropped year over year. But that comes with a lot of demand for compute, and we have a lot more questions, uh, you know, that we want to have answered. And certainly, people can see the need for more compute, but the scale and scope

  3. 12:01–18:05

    Scaling compute to match revenue

    1. AM

      at which OpenAI is investing in compute is incredibly huge. You know, we're talking numbers that are just really hard to fathom, um... How does OpenAI determine that need? You know, what are the metrics you're looking at to think, "Yes, we need to spend this much?"

    2. SF

      So first of all, we are trying to make sure we keep investing in compute to match the pace of our revenue, and we've seen a really strong correlation between in-period compute and in-period revenue. I'll give you an example: if you just go back to '23, '24, and '25, our compute was two hundred megawatts, six hundred megawatts, and we ended last year at two gigawatts. Against that, and it's really easy because the numbers match up, we exited '23 at two billion in ARR, so two hundred megawatts, two billion. We exited '24 at six billion, so six billion, six hundred megawatts, and we exited last year at a little over twenty billion. Twenty billion, two gigawatts. Actually, it's been accelerating. So even if you just look at the slope of the line, it says: more compute, more revenue. Now, there is definitely a timing mismatch, because I have to make decisions today about making sure we have compute in not even '26 or '27, but '28, '29, and '30. Because if I don't put in orders today and don't give the signal to create data centers, it won't be there, right? Today, we feel absolutely constrained on compute. There are many more products that we could launch, many more models that we would train, many more multimodality things we would explore if we had more compute today. So, for example, even in the last year, I think overall hardware investment globally has gone up by something like two hundred and twenty billion dollars. That's just how much actual spending has gone up. If you look at chips, chip forecasts have gone up similarly, by about three hundred and thirty-four billion dollars. So it's not just OpenAI. The signal from the whole environment is: AI is real. We are in a paradigm shift. We need to invest to give people the intelligence they need to do all the things we just talked about, for example. So back inside of OpenAI, we do spend a lot of time going very deep on what is our demand signal in consumer, in enterprise, in developers.
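Sarah's figures line up in a quick back-of-the-envelope check. This sketch uses the approximate year-end numbers quoted in the conversation, not official financials:

```python
# Back-of-the-envelope check of the compute-to-revenue figures quoted above.
compute_mw = {"2023": 200, "2024": 600, "2025": 2000}   # megawatts of compute
arr_usd_b  = {"2023": 2,   "2024": 6,   "2025": 20}     # exit ARR, $ billions

for year in compute_mw:
    ratio = arr_usd_b[year] * 1e9 / compute_mw[year]    # dollars of ARR per MW
    print(f"{year}: {compute_mw[year]:>4} MW -> "
          f"${arr_usd_b[year]}B ARR (~${ratio / 1e6:.0f}M per MW)")
```

Each year works out to roughly ten million dollars of ARR per megawatt; the ratio holds steady while both quantities accelerate in absolute terms, which is the "more compute, more revenue" slope she describes.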
We think about what's the mosaic first at the base. Like, on an infrastructure layer, how do we create max optionality? So we want to be multi-cloud, multi-chip, um, and that gives us, uh, an interesting layer at, at the infrastructure layer. One tick up at the product layer, we also want to become more multidimensional, so we used to just be one product, ChatGPT. Today, we are ChatGPT for consumer, with all of the blades inside it, healthcare and so on. ChatGPT for work, but we also have Sora as a new platform. Um, we have, uh, some of our transformational research projects. One tick up, we also then have a business model ecosystem that's becoming much more, um, multidimensional. Began with a single subscription because we'd launched ChatGPT, and we needed a way to pay for the compute. We now have multiple price points-

    3. AM

      First ChatGPT subscriber, by the way. That was-

    4. SF

      I love you for that.

    5. AM

      Yeah. [chuckles]

    6. SF

      Multiple subscriptions. We went to the enterprise and had SaaS-based pricing. We have credit-based pricing now for places where the high value is being, um, found, so people want to pay more to get more. Um, we're beginning to think about things like commerce and ads, and then, of course, longer term, I like models, like, for example, would we do, um, licensing models-

    7. AM

      Mm

    8. SF

      ... to really align-- Let's say in drug discovery, if we licensed our technology, you have a breakthrough, that drug takes off, and we get a licensed portion of all its sales, it's great alignment for us with our customer. So kind of if you think about those three tiers, I actually think of it like a Rubik's Cube.

    9. AM

      Okay.

    10. SF

      So we went from a single block, you know, one CSP, Microsoft, one chip, one product, one business model, to now a whole three-dimensional cube. And one of the things I love about a Rubik's Cube, I'm probably not getting the number exactly right, but I think it has forty-three quintillion different states it can be in. It always blew my mind when I was in university. [chuckles]
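Sarah's hedge turns out to be unnecessary: the figure is right. A quick sketch of the standard counting argument for a 3×3×3 cube:

```python
from math import factorial

# Reachable positions of a 3x3x3 Rubik's Cube:
#   8 corner pieces can be permuted (8!) with 7 freely chosen orientations (3^7),
#   12 edge pieces can be permuted (12!) with 11 freely chosen flips (2^11),
#   and a parity constraint makes only half of those raw states reachable.
states = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(f"{states:,}")  # 43,252,003,274,489,856,000 -- about 43 quintillion
```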

    11. AM

      Last time you counted, yeah. [chuckles]

    12. SF

      Um, so now just think about that cube spinning.

    13. AM

      Mm.

    14. SF

      So we pick a low latency chip going alongside something like coding. That's five X the pace that people expect. We can charge a high-end subscription for that. So it's almost like you line up the cube, and you get three colors on one side.

    15. AM

      Mm.

    16. SF

      Um, we could spin the cube again and say, low latency chip, um, faster image gen, more free users come in, but that creates more inventory for ultimately perhaps an ads platform. So you can start to see how the goal in the last twelve months has been creating more and more strategic options that allow me to keep paying for the compute we need to really achieve our mission: AGI for the benefit of humanity.

    17. VK

      So, you know, the way to simplify that-

    18. SF

      Mm-hmm

    19. VK

      ... demand is limited not by anything other than availability of compute today.

    20. SF

      Absolutely. Mm-hmm.

    21. VK

      Whether it's Sora or more broadly.

    22. SF

      Mm-hmm.

    23. VK

      And then there's price elasticity, where demand for compute is infinite.

    24. SF

      Mm.

    25. VK

      So, uh, I, I think that's the way to think about it.

    26. SF

      Yeah.

    27. VK

      It just-- We haven't even started to exercise the price elasticity lever. It's just we can't fulfill demand.

    28. SF

      Right.

    29. VK

      Uh, and it's limited by compute. So all the people talking about bubbles and things, I think, are on the wrong track. They have no sense of how large this change is and how much more demand elasticity there is in the need for API calls.

    30. AM

      As one of OpenAI's earliest

  4. 18:05–27:41

    Difference between now and dot-com bubble

    1. AM

      investors, you made a bet early on. You saw where this was headed. You saw the dot-com bubble, you watched what happened there, but you've also seen other things, the mobile revolution, you've seen this happen with other areas. And you mentioned the term broad, and is that sort of where your conviction comes from, just how many different areas it touches?

    2. VK

      Yeah. Look, when we invested, we had one simple metric. There were no projections to look at, no product plans to look at, no ChatGPT to look at. It was very simply the idea that if we develop anywhere near close to human intelligence, let alone supersede human intelligence, its impact is going to be huge. So the consequences of success were really going to be consequential, so why not try that? Uh, people-- there's also this funny notion of bubble. People equate bubble to stock prices, which has nothing to do with anything other than fear and greed among investors. So I always say bubbles should be measured by the number of API calls.

    3. AM

      Mm-hmm.

    4. VK

      Uh, or in the dot-com bubble, which people refer to, it should be amount of internet traffic-

    5. AM

      Mm-hmm

    6. VK

      - not by what happened to stock prices because somebody got overexcited or underexcited, and in one day they can go from loving Nvidia to hating Nvidia because it's overvalued. Uh, those gyrations aren't reality.

    7. AM

      Mm-hmm.

    8. VK

      The reality is the underlying number of API calls. If you look at internet traffic during the dot-com bubble, prices may have gone up violently and gone down violently. There's no bubble detectable in internet traffic. I would almost guarantee you won't see the bubble in the number of API calls, and if that's your fundamental metric of what's the real use of AI, the usefulness of AI, the demand for AI, you're not going to see a bubble in API calls. What Wall Street tends to do with it, I don't really care. I think it's mostly irrelevant. Great for press articles, because the press has to fill their column inches, but it's not reality. So prices of things aren't reality, or stock prices-

    9. AM

      Mm-hmm

    10. VK

      ... private company valuations. The reality is, what's the actual demand for AI, which is the number of API calls.

    11. SF

      Right. And I think if I hark back to that moment where you were looking at nineteen ninety-nine, the value people were getting from the internet at the time was-- it was so young, so nascent, that you couldn't really see how it was changing their lives. I do think that with AI, it's happened so fast-

    12. VK

      Yeah

    13. SF

      ... that change, it's very real. Like, as a CFO, forget about being the CFO of OpenAI, but as a CFO, what I see happening in my organization is truly taking tasks that previously I would have kept having to add more and more people doing fairly mundane things. Like, let's take something like revenue management. Um, so in, in a team that does revenue management, they ha-- one of the things they do every day is they have to download all the contracts that we signed the day before or through the week, and they have to read all of those contracts to make sure there's no terms sitting in it that are unexpected, that are effectively non-standard terms. Because a non-standard term means that there could be a revenue recognition change that has to happen, and that's a very big deal for a [chuckles] finance team. That's the number one thing usually your auditors come in to audit you on. The pace at which we are growing, right, the number of contracts every day is going up in multiples. So my only choice in a pre-OAI or pre-AI world would have been hire more people.

    14. AM

      Mm-hmm.

    15. SF

      And imagine what those people's jobs are like. You come to work every day and you read a contract, and then you read the next one, and the next one. It is so mundane and such drudgery, and it's not why people, you know, went to school and learned about the accounting field or thought about being a finance professional, but that's kind of the job we hand them as an entry-level job. Today, using our own tools here at OpenAI, I now have, overnight, all of those contracts are pulled out of a system. They are put into a tabular database, the Databricks database, in our case. Um, the a-- the agent or the intelligence is able to go through. It shows me exactly what is non-standard and why. It suggests what therefore the rev rec is, but it also suggests the insight, which is, you know, should this term even be here? Did the salesperson just give away something they shouldn't have? In which case, you know, I go and I coach them.

    16. AM

      Mm-hmm.

    17. SF

      Um, is it actually telling me something about my business that's starting to shift, in which case this non-standard term should actually become a standard term, and what I'm experiencing is a shift in my business model, which might actually be a good thing? Or perhaps I want to find a different way to help get the customer what they're looking for, the salesperson what they're looking for, but maintain my revenue recognition, my current business model, right? So now my more junior entry-level people are over on the right of that discussion, and they're kind of refinding the job they loved.

    18. AM

      Mm-hmm.

    19. SF

      That, to me, is why it's not a bubble, because the value is real and tangible. Like, it also means I probably can have a smaller team, I can have a much more high-performing team, a much higher morale on my team, better retention rates, right? All of these I can put into, like, numbers to say: My business is now healthier. And I think that's the piece when the press is trying to lead with the bubble conversation or whatever. They just miss that we are investing with demand, if anything, behind demand at the moment. A bubble, to me, suggests you're investing ahead of demand, and there's going to be a gap.

    20. VK

      ... and you look at productivity numbers, they're going up in the companies that are adopting AI, especially the newer set of tech-oriented companies. The numbers are just absolutely amazing.

    21. SF

      Yeah.

    22. VK

      So one of my favorites is a little company called Slash.

    23. SF

      Mm-hmm.

    24. VK

      About a hundred and fifty million in ARR. They have one person in accounting, only a controller, because they adopted an AI-oriented ERP system. Uh, they replaced NetSuite with it, and it's just amazing what they can do, and the CEO was apologizing to me that he might have to hire a second person.

    25. SF

      [chuckles]

    26. VK

      Uh, and they're moving really rapidly. I just saw a story: somebody replaced ten SDRs with one SDR and AI; essentially, the one remaining SDR supervises the AI.

    27. SF

      Yeah.

    28. AM

      It's-- I've been hearing these stories where, instead of hiring somebody in an area that doesn't create growth, they can now, when they hire, hire people that are creating a lot more growth for the company, and that's why you're seeing a lot of these tech companies just build so fast.

    29. VK

      You know that old phrase, "The future is here now, but it's not evenly distributed"?

    30. SF

      Mm-hmm. Yes.

  5. 27:41–30:05

    Ads in ChatGPT

    1. AM

      certainly, the argument can be made that with ads, you can increase the benefits to people, you can provide more services, more AI, you can help pay for the compute, and people get more out of those tiers with that. But that brings up the question of trust. When people first started asking AI questions, they worried about: what does ChatGPT do with my information? Once you have ads in play, people worry about that, because it's often just a big question of-

    2. SF

      Mm

    3. AM

      ... how does that affect the rest of the product and the org?

    4. SF

      Yeah. So I think you started in the right place, which is today, ninety-five percent of our users use our platform for free-

    5. AM

      Mm

    6. SF

      ... on the consumer side, and that's absolutely where our mission is, right? AI for the benefit of humanity, not the benefit of humanity who can pay, right?

    7. AM

      Mm-hmm.

    8. SF

      So access is very important. From an ads perspective, I think, number one, we have to just make sure everyone understands you're always gonna get the best answer the model can provide you, not the paid-for answer, and I think other platforms have fallen into that, where you're not sure: is this a sponsored link, or is this truly the best outcome? We have a North Star, which is that the model will always give you the best answer. I think the second thing to understand is that there can be a lot of utility in ads, so we wanna make sure people know when it is an ad that they're working with. But for example, if I do a search for a weekend getaway to, pick your favorite city, I don't know, San Diego, um, an ad for Airbnb might actually be very helpful, and you might even wanna have a discussion with the ad, or with the advertiser in that case, in a ChatGPT setting that's very rich, but you're clear that it's in an advertising setting. And I think this is where there has to be more innovation on what feels endemic to the platform, not just kind of the old world of sticking, you know, banner ads on things. Um, and I think the third and final thing for me is, again, there always has to be a tier where advertising doesn't exist.

    9. AM

      Mm-hmm.

    10. SF

      So we give the user some choice and some control. Um, but we're very mindful of your data. When we released Health, we were very clear, your data is off to one side. It's not being used to train on, and so on, and I think we just need to keep giving users that, kind of, that trust is everything for OpenAI, and that we're going to stand by those principles, even when it comes to things like ads.

    11. AM

      Mm-hmm. On the consumer side, is it gonna be a world where you're gonna have a lot

  6. 30:05–36:41

    Will consumers have more than one AI subscription?

    1. AM

      of subscriptions to different AI services?

    2. VK

      I, I think you'll have every model. Uh, most people will have more than one subscription. Media is a good example. Most people have more than one subscription-

    3. AM

      Mm

    4. VK

      ... in media, and so that's a good proxy for consumer behavior. Uh, different people will pick different choices, including free choices, which is ad-supported media, too. So even the same services you can get for pay or for free. Yeah, I, I think you'd see a wide range of diversity.

    5. SF

      How do you think about, though, the expense of going to a different platform? So I like ChatGPT memory. I'm finding it more and more helpful because as I ask about one thing, it remembers something we talked about maybe weeks ago, months ago. Pulse, which is today not widely distributed, but it's the morn-- it's the way I wake up in the morning now.

    6. VK

      It's amazing.

    7. SF

      So I can actually-- It's so amazing. And when you start connecting it to things like your calendar, so it's not just saying, "You know, you, Sarah, are very interested in AI data centers," which clearly it must think I'm the most boring person on Earth-

    8. VK

      [chuckles]

    9. SF

      -because this is what I see a lot of. But it also says, "Hey, on your calendar, you're going to be sitting down with Vinod today. You know, remember a couple of these things." Like, it's so helpful. But if I am multi-homing, I'm losing the benefit, which is not the same as if I subscribe to The Wall Street Journal, The Economist, and The New York Times. They're not really losing out if I go read in other places in the same way, or I'm not losing out.

    10. VK

      Yeah. So I, I do think memory is an important-

    11. SF

      Mm-hmm

    12. VK

      ... question. Whether there'll be one purveyor or more than one purveyor of the models-

    13. SF

      Mm-hmm

    14. VK

      ... on each model, there'll be multiple services-

    15. SF

      Mm

    16. VK

      -that may offer different trade-offs.

    17. SF

      Yeah.

    18. VK

      So even whether you're talking health or media-

    19. SF

      Mm-hmm

    20. VK

... even on the OpenAI models, there are multiple people providing services.

    21. SF

      Mm-hmm.

    22. VK

So that's what I was thinking of with multi-homing, but obviously, I don't think OpenAI will be one hundred percent of the market.

    23. SF

      Mm-hmm.

    24. VK

      I hope so.

    25. SF

      I was going to say, I hope so too, but [chuckles]

    26. VK

      I'm okay with that. But- [chuckles]

    27. AM

It's an interesting business model. I think it's hard for people to wrap their heads around, because Netflix is a great company, but there are only so many hours on the planet that people can watch Netflix, right? And mobile is great, but I'd only need so many minutes of mobile per week. With AI and intelligence, you can have more intelligence. I can buy more and get better answers, and I'm still trying to wrap my head around where that goes: the idea that you start at one free level, then you go to a small paid tier, and then as it becomes more useful, you keep increasing that. Where does it go?

    28. SF

      So I think unlike something like Netflix, where they're running so many hours in the day, I think of it much more like infrastructure, like electricity.

    29. AM

      Mm-hmm.

    30. SF

      How much electricity do you use in the day? I don't know. I walked into a room today, and there was a fan blowing.

  7. 36:4139:44

    Winning in enterprise

    1. AM

      and how is OpenAI going to-

    2. AM

      ... compete and win in that area?

    3. SF

So I think we're already winning in this area. [chuckles] What I see is, ninety percent of corporations are saying they either are using OpenAI or intend to use it over the next twelve months, right? I think the second is Microsoft, and Microsoft's using our technology. So this is where the consumer is a really potent part of the enterprise flywheel. As I said earlier, back in the day, when you first started bringing your iPhone to work and corporates didn't want you to do that, you just discovered you can't say no to the tidal wave that is consumer preference. So something I'm already using, that I've already got in my pocket, and I get to work, my expectation is work is at least as good, if not better. And that's what's helped drive our actual enterprise business: the fastest company ever to get to one million businesses on a platform, and we did that in about a year and a half. But where to from here? 'Cause clearly, we're just scratching the surface. So some of it is certainly meeting customers in terms of their vertical, so that we talk to them in their language, and we learn this art of enterprise selling, which is, "Let me not tell you all about my products, but let me understand your problem. What is your board forcing on you, Mr. or Mrs. CEO? What is the thing your customers most want that you can't deliver? Okay, let's start putting intelligence against that." We can then drop that down into some light vertical specialization, to quite heavy vertical specialization, things like RLing models that are very pertinent to a use case. Let's say in an energy company, it might be really understanding that particular oil well, or all the seismic data they have, to say: What's the recovery we're going to get out of this gas field? That is deep specialization.
And then I think it goes the whole way to some of these big transformational research projects that we have begun, where we're almost taking over someone's whole business and helping them rethink it in a smarter, faster, better way that ultimately drives their key business metrics. So it's a journey. I think most corporates have started with wall-to-wall ChatGPT. That's an easy starting point. They've done some coding, and in many cases, a lot of coding. When I talk to corporates now, CEOs are starting to say things like, "Sixty percent of all my production code was built by an agent." And I'm like: "You didn't even know what production code meant twelve months ago, but now you're saying that? That's good, 'cause it means you're tracking it." But on agents, it's just starting. When you go out and survey US corporates, only about fourteen percent are using something agentic today, fourteen percent, when I just explained what's happening in my finance organization. So I think we are just getting going, but I couldn't

  8. 39:4444:05

    How can startups succeed?

    1. SF

      be more excited about the opportunity. It's huge.

    2. AM

      Okay, but if I'm a startup-

    3. SF

      Mm-hmm

    4. AM

      ... and I look at everything OpenAI is doing, I might be asking: Is there room for me? What do I gotta do?

    5. VK

Look, models will keep getting better and do more and more, but I do believe there's lots of room to build on top. No one company can do everything on the planet. There are billions of people working whose jobs AI can help with. I don't think OpenAI will specialize in every one. So I think the careful thing to do is be clear where the models will go, OpenAI or others-

    6. SF

      Mm-hmm

    7. VK

      ... and what they will be able to do, and how do you use that best to then specialize into a more interesting world?

    8. AM

      So-

    9. VK

      Like, some sort of specialization where you add something that's additional to the base models. And, and frankly, just intelligence isn't the only thing-

    10. SF

      Yeah

    11. VK

-to provide a solution. There's lots of other stuff that goes around a solution beyond intelligence. So I think there's lots of opportunity to build on top of these models, and the more powerful they get, the more dramatically the number of opportunities to add to them increases.

    12. SF

How do you think about... So I think a lot about use cases where there's already a lot of data being aggregated, perhaps by that startup, by that company. Today, I think ninety-five percent of the world's information actually sits behind corporate firewalls, university firewalls, and so on. So even though we talk about the vast training that's occurred, again, we're just getting going.

    13. AM

      Yeah.

    14. SF

But I think about companies that have already built businesses that have aggregated that data, have access to it, and then on top of that, have managed complex workflows. So I often give the example of our procurement system. A procurement system, per se, is not that complicated, but what it does very well is understand things like delegation of authority. So it knows what the board has approved in terms of approval limits.

    15. AM

      Yeah.

    16. SF

So it knows that when this software contract comes in, if it's over x amount, only I can approve it, or if it's beneath that, it knows a VP can approve it. It doesn't know that Andrew's a VP, but it knows to touch the HR system and check what his level is, and so the whole procurement flow-

    17. AM

      Yeah

    18. SF

... can happen in a way where I have compliance and governance, and hopefully it makes the whole company run faster. Those are the places I get interested for startups. Where have you got access to unique data with a complex workflow? It feels like there's more of a moat around that, and we want to work alongside you, but the general-purpose model is not gonna do all of that itself.
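The delegation-of-authority routing described above could be sketched roughly as follows. This is a hypothetical illustration only: the role names, dollar limits, and `HR_DIRECTORY` lookup are assumptions for the example, not OpenAI's actual procurement system.

```python
# Hypothetical sketch of delegation-of-authority routing: the system
# hard-codes board-approved limits per role, not per person, and looks
# up a person's role in an (assumed) HR directory at approval time.

# Board-approved approval limits by role (illustrative numbers)
APPROVAL_LIMITS = {
    "VP": 100_000,
    "CFO": float("inf"),  # top of the chain, no limit
}

# Stand-in for the HR-system lookup the transcript mentions
HR_DIRECTORY = {
    "andrew": "VP",
    "sarah": "CFO",
}

def route_approval(approver: str, amount: float) -> str:
    """Return the role that can approve this contract amount,
    escalating to the CFO when the approver's limit is exceeded."""
    role = HR_DIRECTORY[approver]
    if amount <= APPROVAL_LIMITS[role]:
        return role
    return "CFO"

print(route_approval("andrew", 50_000))   # within the VP limit -> VP
print(route_approval("andrew", 250_000))  # over the VP limit -> CFO
```

The point of the design is the indirection: the workflow stores policy (limits per role) and fetches facts (who holds which role) from the HR system, so approvals stay compliant even as people change jobs.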

    19. VK

      Yeah, no, I, I, I completely buy that.

    20. SF

      Yeah.

    21. VK

      I think there's lots of opportunity.

    22. SF

      Mm-hmm.

    23. VK

      I've seen quite a few startups around just permissioning around data.

    24. SF

      Yeah, yeah, yeah.

    25. VK

Like, who can access what information?

    26. SF

      Yeah.

    27. VK

For example, I've seen a whole bunch of startups around customizing the models to each company, for their history and their priorities, and-

    28. SF

And the whole identity side of agents-

    29. VK

      Yeah.

    30. SF

I think we're just starting to understand both the risk that can happen when you have agents talking to agents talking to agents, but then also, how are you going to permission that? And then you start to think about things like agentic commerce; the complexity that's coming is also quite big. So as for the suggestion that there's no more opportunity as a startup: I think it's probably never been more interesting or fun to be a startup.

  9. 44:0549:41

    Robotics and beyond

    1. VK

      not talked about the whole new world of robotics and real-world models, and all that. That's a whole space by itself that we probably-

    2. AM

      Well-

    3. VK

      ... don't have time for.

    4. AM

      Well, do we? We've got time for it. [chuckles]

    5. SF

      We've got plenty of time. I'd love it.

    6. AM

      I, I-

    7. SF

      I wanna go there.

    8. AM

Yeah, 'cause we talked about where we're headed here, and you famously talked about the world of twenty fifty, and things are moving fast, models are getting faster and more capable. Where do you see things like robotics headed?

    9. VK

Well, I think two years ago, when I gave a talk at TED, I said the robotics business, both bipedal and other robots, will be a larger business in fifteen years than the auto industry is today. We think of the auto industry as one of the larger businesses-

    10. AM

      Mm-hmm, mm-hmm

    11. VK

... on the planet, and this other thing will be larger. I don't think there are very many automotive companies who are thinking of the world that way.

    12. AM

      Mm-hmm.

    13. VK

They're thinking about how to use a robot in their assembly line, not that the robotics business will be larger than their current business, all driven by the intelligence of robots.

    14. SF

      Mm-hmm.

    15. VK

So massive opportunities for startups there, and we are seeing a lot of activity.

    16. SF

Yeah, and I think sometimes we underestimate... So when you think about robots in the home, it's a very fertile area, but no one's really had a breakthrough yet. There are so many different issues around the complexity. Actually, sometimes the more time I spend in AI, the more respect I have for the human condition, [chuckles] in a way, because of our ability to move around the world and do things. You know, if you watch the people in robotics getting so excited about a robot folding clothes, you know, perhaps with my eighteen-year-old I'd be just as excited, but for-

    17. VK

      Mm-hmm. [chuckles]

    18. SF

      -the average human, I assume they can fold clothes. Um, but I think- [chuckles]

    19. AM

That's like the "hello world" of robotics now, folding clothes.

    20. VK

      Yeah, exactly.

    21. SF

      Yes.

    22. AM

      Yeah.

    23. SF

But you do get a little stuck in your head that they have to somehow be a human. But it turns out there may just be these breakthrough moments, like, for example, companionship in the home, right? We have an aging population. We talk about epidemics in the world; loneliness is probably one of the biggest epidemics. What does someone living alone, who maybe has just lost a spouse, value most? Just someone to converse with in a way that feels intuitive and human. We see people using ChatGPT more and more for this conversation, but is there a humanoid-esque breakthrough? It may turn out you don't need it to make coffee, or fold clothes, or do the dishes, although that would be good too; it might just be something a little bit more simple that still adds a lot of value, and is just the first crawl of crawl, walk, run, of this kind of future that Vinod is talking about, where that whole complex is X times-

    24. VK

      Mm

    25. SF

... more valuable than anything we ever saw in automotives.

    26. AM

I think it's interesting, because we can sort of think of our present and put robots in places to do things-

    27. SF

      Mm-hmm

    28. AM

... like that. It's really hard to think of what happens when you really have extremely low-cost labor, manufacturing-

    29. SF

      Mm-hmm

    30. AM

... et cetera, and the world you can build from there. Because we can look at what's a good solution for now, but when the cost of building a wonderful state-of-the-art assisted living facility, where you can put a bunch of people together-

Episode duration: 49:41
