The Twenty Minute VC

Aidan Gomez: What No One Understands About Foundation Models | E1191

Aidan Gomez is the Co-founder & CEO at Cohere, the leading AI platform for enterprise, having raised over $1BN from some of the best, with their last round pricing the company at a whopping $5.5BN. Prior to Cohere, Aidan co-authored the paper “Attention Is All You Need,” which introduced the groundbreaking Transformer architecture. He also collaborated with a number of AI luminaries, including Geoffrey Hinton and Jeff Dean, during his time at Google Brain, where the team focused their efforts on large-scale machine learning.

-----------------------------------------------

Timestamps:

(00:00) Intro
(00:45) Childhood & Background
(04:29) Is More Compute the Only Path to Better Performance?
(08:07) Can Anyone Afford to Stay in the AI Race Besides Tech Giants?
(13:44) Is AI Heading Toward a Race to the Bottom?
(16:55) Will Companies Keep Building Their Own Chips?
(18:30) Is Model Progression Outpacing Compute Advancement?
(19:41) Early Challenges in Accessing Compute Chips
(23:48) Are We Underestimating the Short-Term Impact of AI Advancements?
(27:06) Is It Too Late for Startups to Enter the AI Model Space?
(27:55) AI Development: The Exponential Rise in Costs
(30:40) Will Cloud Giants Continue Acquiring Smaller AI Model Providers?
(35:10) Is OpenAI Prioritizing AGI Over Practical Products?
(48:29) What's the Biggest Overlooked Factor in AI's Future?
(50:09) Concerns About a Future Where AI Replaces Human Interaction
(54:20) What Will AI Do in Three Years That It Doesn't Do Today?
(55:48) Quick-Fire Round

-----------------------------------------------

In Today’s Episode with Aidan Gomez We Discuss:

1. Compute vs Data: What Is the Bottleneck? Does Aidan believe that more compute will result in an equal increase in performance? How much longer do we have before it becomes a case of diminishing returns? What does Aidan mean when he says he has “changed his mind massively on the role of data”? What did he believe, and how has it changed?

2. The Value of the Model: Given the demand for chips and the consumer need for applications, how does Aidan think about the inherent value of models today? Will any value accrue at the model layer? How does Aidan analyze the price dumping that OpenAI is doing? Is it a race to the bottom on price? Why does Aidan believe that “there is no value in last year’s model”? Given all of this, is it possible to be an independent model provider without being owned by an incumbent whose cloud business acts as a cash cow for the model business?

3. Enterprise AI: It Is Changing So Fast: What are the biggest concerns for the world’s largest enterprises adopting AI? Are we still in the experimental-budget phase for enterprises? What is causing them to move from experimental budget to core budget today? Are we going to see a mass transition back from cloud to on-prem, with the largest enterprises unwilling to let independent companies train with their data in the cloud? What does AI not do today that will be a gamechanger for the enterprise in 3-5 years?

4. The Wider World: Remote Work, the Downfall of Europe, and Relationships: Given that humans are spending more and more time talking to models, how does Aidan reflect on the idea of his children spending more time with models than with people? Does he want that world? Why does Aidan believe that Europe is challenged immensely? How does the UK differ from Europe? Why does Aidan believe that remote work is just not nearly as productive as in person?

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow Aidan Gomez on Twitter: https://twitter.com/aidangomez
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

Aidan Gomez (guest), Harry Stebbings (host)
Aug 19, 2024 · 1h 3m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 – 0:45

    Intro

    1. AG

      The reality of the matter is, there's no market for last year's model. It's definitely true that if you throw more compute at the model, if you make the model bigger, it'll get better. For folks who have a lot of money, that's a really compelling strategy. I think we'll continuously exist in a world of multiple models, some focused and verticalized, others completely horizontal. There's gonna be a, a consolidation in the space, for sure. It's really dangerous when you make yourself a subsidiary of your cloud provider.

    2. HS

      Ready to go? (instrumental music plays) Aidan, I am so excited for this. So I was going through the prep, uh, first, before I was writing the schedule,

  2. 0:45 – 4:29

    Childhood & Background

    1. HS

      and I was thinking, "Well, where do we start?" And then I saw one of the notes, and Aidan, it says that you grew up or were brought up in rural Ontario in a house your grandfather or father built by hand. What was that like as a starting point, and can you take me there?

    2. AG

      Yeah. I, uh, I grew up in the middle of, you know, nowhere in Ontario. It was a big, uh, 100 acre lot which had a ... I- it's all forested, and it's a maple forest. And so it was super cool to grow up in, like, the most Canadian environment ever. But i- it was very distant from technology, for sure.

    3. HS

      But you loved gaming, didn't you?

    4. AG

      I did love gaming. I did love gaming. Um, so I love technology from, from scratch. It's just it was really hard to access it. Like, we couldn't get internet. Uh, we could do dial-up, but I had dial-up for years after people had gotten high-speed internet.

    5. HS

      (laughs)

    6. AG

      Um, and so all my friends, you know, they were online gaming, doing all this sort of stuff. And I was just so jealous. Or, or not jealous, but just, like, missing out on this wave of technology, the internet that was coming about and becoming popular. So it made me obsessed with tech. I would, like, sit at home with our computer with shitty, uh, dial-up internet, and I would just try to make it faster. I would try to make the most out of what I did have. And eventually, that led to wanting to learn how to code and understand how the web works and, you know, can I make this stuff faster? Can I load the internet faster? 'Cause I was watching pixels go line by line. And that's really what pushed me into CS, just sort of, like, forced to learn how this tech works so that I could get more out of it.

    7. HS

      There's this weird understanding that I have now from meeting so many incredible founders, and it's this incredibly high correlation between those that gamed in their early years and those that achieved success. I just ... Why do you think gaming is such a contributor to successful founders?

    8. AG

      Video games teach something to you. Um, you're much more willing to grind, right? To just do repetitive, difficult, painful things towards some broader goal. So that sort of resilience, um, I think is important. And then also the, the fact that you, you can respawn. Like, you get to try again, you get a second attempt. That optimism or that, that framing is really important. I think in a lot of cultures, you get one shot. Y- you know, you have, you have a reputation, and if you fuck it, it's done, it's over for you. But maybe what gaming can give people is a sense of you can fuck up and you can try again and you can get better. And the second time, you fuck up less than the first time. (laughs) And the third time, you fuck up less than the second time. And so that notion of progress through failure I think is probably something very significant for founders.

    9. HS

      And I, I also always believe in the power of, like, game design and that progressive overload, the way that games are designed to be easier at first, you feel great, you pick up confidence.

    10. AG

      Mm-hmm.

    11. HS

      You would never start a game with a really hard first level where people fail, and it's like, "This is impossible, I'm not gonna do it."

    12. AG

      Yeah, yeah. I mean, there's analogies. So in, in machine learning, that's called curriculum learning, like, that you want to start, "Okay, let's first teach the model to do something very simple. Let's make it a little bit more complex and build on that knowledge." What's funny is that curriculum learning has actually failed in machine learning. We, we don't really do curriculum learning. It's just throw the hardest material and the easiest material all at the same time and let the model figure it out. But yeah, for humans, it's, it's so effective, it's such an important piece of how we learn. Um, it's interesting to see that hasn't taken

  3. 4:29 – 8:07

    Is More Compute the Only Path to Better Performance?

    1. AG

      off.
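
      The contrast Aidan draws between curriculum learning and how large models are actually trained can be sketched in a few lines. This is a toy illustration only: the difficulty scores are hypothetical, and neither function reflects any real training pipeline.

```python
import random

# Toy data: (example, difficulty) pairs with made-up difficulty scores.
examples = [("hard proof", 0.9), ("simple sum", 0.1), ("medium essay", 0.5)]

def curriculum_order(data):
    """Curriculum learning: present easy material first, building toward
    harder material -- the way humans are taught."""
    return sorted(data, key=lambda pair: pair[1])

def mixed_order(data, seed=0):
    """What large-model training does in practice: throw the hardest and
    easiest material together and let the model figure it out."""
    shuffled = list(data)
    random.Random(seed).shuffle(shuffled)
    return shuffled

print([text for text, _ in curriculum_order(examples)])
# -> ['simple sum', 'medium essay', 'hard proof']
```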

    2. HS

      You said about kind of just throwing it at the model. I just wanna just dive in at the deep end bluntly, because I think it's a question that everyone's asking, which is, like, everyone just says, "Just throw more compute," and that is the single biggest rate limiter that we have today. We just need more compute and performance will increase. Do you think that is true, there is a lot more room to run there, or it is other elements that are now holding back performance?

    3. AG

      I mean, it, it's definitely true that if you throw more compute at the model, if you make the model bigger, it'll get better. It's kind of like, it's the most trustworthy way to improve models. It's also the dumbest, right? Like, i- if all else fails, just make it bigger. Uh, and so for folks who have a lot of money, that's a really compelling strategy. Super low risk, you know it's gonna get better. Just scale the model up, pay more money, pay, pay for more compute, and go. But yeah, I, I mean, I, I believe in it. I just think it's, uh, extremely inefficient. There, there are much better ways. If you look at the past, let's say, like, year and a half, so between ... I guess by now it would be, like, between ChatGPT coming out or, or GPT-4 coming out, uh, and now. GPT-4, if it's true what they say and it's 1.7 trillion parameters, this big MoE, we have models that are better than that model that are, like, 13 billion parameters. (laughs) Right? And so the scale of change, uh, like, how quickly that became cheaper is absurd. Like, it's, it's, uh, kind of surreal. Um, and so yes, you, you can achieve-... that quality of model just by scaling, but you probably shouldn't.

    4. HS

      Does that continue in that pre- same progressive advancement? And what I mean by that is, do we continue to see that same scaling advantages or does it actually plateau at some point? As you said there, you know, we always hear about Moore's Law. At some point, it just becomes a better calculator for the iPhone. (laughs)

    5. AG

      Yeah, I mean, I think it, it certainly requires exponential input. You know, you need to continuously be doubling your compute in order to sustain linear gains in, in intelligence. Um, but I, I think that probably goes on for a very, very, very long time. Um, it'll just keep getting smarter. But you run into, like, economic constraints, right? Uh, not a lot of people bought the original GPT-4, certainly. Not a lot of enterprises.
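
      The exponential-input-for-linear-gains relationship Aidan describes can be written down directly. The log form and the numbers below are illustrative assumptions, not an empirical scaling law:

```python
import math

def capability_gain(compute: float, base_compute: float = 1.0) -> float:
    """Toy scaling curve: each *doubling* of compute buys one more 'unit'
    of capability, so linear gains demand exponential spend.

    Illustrative only -- real scaling laws are empirical power-law fits
    to training loss, not this clean.
    """
    return math.log2(compute / base_compute)

# Sixteen times the compute yields only four units of gain:
for compute in [1, 2, 4, 8, 16]:
    print(compute, capability_gain(compute))
```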

    6. HS

      Hm.

    7. AG

      'Cause it was huge. It was massive. Super inefficient to serve, so costly. Not smart enough to justify that cost. And so I think there's a lot of pressure on making smaller, more efficient models, smarter via data and algorithms, methods, rather than just scaling up due to market forces.

    8. HS

      (laughs)

    9. AG

      Just pressure on price.

    10. HS

      Will we live in this world of unbundled, verticalized models which are much more efficient and smaller, designed for specific use cases? Or will there be much larger, three to five models, which kind of rule it all?

    11. AG

      There will be both. There'll be both. Like, uh, the, the one pattern I think we've seen emerge over the past couple years is that people love prototyping with a generally smart model. They don't wanna prototype with a specific model. They don't wanna spend the time fine-tuning a model to make it specifically good at the thing that they care about. What they wanna do is just grab, you know, an expensive big model, prototype with that, prove that it can be done, and then distill that into a, an efficient focus model, a specific thing they care about. So that, that pattern has really emerged. So I, I think we'll continuously exist in a world of multiple models, some focused and verticalized, others completely horizontal.
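
      The prototype-then-distill pattern Aidan describes can be sketched as follows. The teacher and student here are trivial stand-ins, not real model APIs; the point is only the shape of the workflow — prove the task with a big general model, then train a small focused model on its outputs.

```python
def teacher(prompt: str) -> str:
    """Stand-in for an expensive, generally smart model used for prototyping."""
    return "POSITIVE" if "great" in prompt else "NEGATIVE"

def build_distillation_set(prompts):
    """The teacher's outputs become the training labels for a small,
    task-specific student model."""
    return [(p, teacher(p)) for p in prompts]

dataset = build_distillation_set(["great product", "slow support"])
print(dataset)
# -> [('great product', 'POSITIVE'), ('slow support', 'NEGATIVE')]
# A compact, efficient student model would now be fine-tuned on `dataset`.
```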

  4. 8:07 – 13:44

    Can Anyone Afford to Stay in the AI Race Besides Tech Giants?

    1. AG

    2. HS

      You spoke about kind of the cost and needing to double compute to keep that same kind of linear level of intelligence. The cost is exorbitant. And like, unlike... Maybe I'm wrong here and I'm too young to remember past technology cycles, but almost unlike anything we've seen before in technology. We, I think it was three billion a year that OpenAI is spending. Like, how can you afford to maintain your position in this race unless you are Microsoft, Amazon, Google, or Facebook?

    3. AG

      I think if you're just doing the scaling project, y- you have to be one of those, or you have to be a subs- an effective subsidiary of one of those companies. Um, but there's a lot more to be done.

    4. HS

      Mm-hmm.

    5. AG

      Like if you, if you're not completely adherent to scale is the only path forward, if you believe that there are data innovations, there are model and method innovations.

    6. HS

      C- can we just... W- what are data innovations and what are model and method innovations?

    7. AG

      Yeah, so, uh, pretty much all of the major gains that we've seen in the, uh, open source space have come from data improvements. Uh, so models getting much better by taking higher-quality data, uh, from the internet, better scraping algorithms, parsing those web pages, pulling out the right parts, up-weighting specific parts of the internet. 'Cause there's lots of, like, repetition and junk, right? And so pulling out the most valuable, um, knowledge-rich parts of the internet and emphasizing them to the model. Uh, synthetic data, the ability to create new data that is super scalable, so you can get many, many billions of words or, you know, hundreds of millions of pages of this stuff. But it's the, no humans involved. Just written by models. Those innovations, the ability to increase the quality of data, have led to, I think, most of the gains that we're, we're seeing right now.
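
      The data-side levers Aidan lists — dropping junk and repetition, up-weighting knowledge-rich pages — can be sketched as a simple mixing step. The quality scores and thresholds below are hypothetical:

```python
# Hypothetical scraped pages with made-up quality scores.
pages = [
    {"text": "spam spam spam", "quality": 0.1},
    {"text": "encyclopedia article", "quality": 0.9},
    {"text": "forum chatter", "quality": 0.4},
]

def build_training_mix(pages, min_quality=0.3, upweight_above=0.8):
    """Drop low-quality pages entirely; repeat the most knowledge-rich
    ones so the model sees them more often."""
    mix = []
    for page in pages:
        if page["quality"] < min_quality:
            continue  # junk and repetition are filtered out
        copies = 2 if page["quality"] >= upweight_above else 1
        mix.extend([page["text"]] * copies)
    return mix

print(build_training_mix(pages))
# -> ['encyclopedia article', 'encyclopedia article', 'forum chatter']
```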

    8. HS

      Okay. So that's data innovation.

    9. AG

      Mm-hmm.

    10. HS

      Method and model innovation.

    11. AG

      Yeah, so this is stuff like new RL algorithms, um, you know, there's lots of rumors about Q* and what that might be. Uh, ideas around search, searching for the solution. S- the status quo with models is, I ask you a question, and you're a model, I, I ask you a question, and the model is expected to respond immediately with the right answer. That's an incredibly high burden to place on the model, right? Like, you, you couldn't do that to a human. You couldn't ask a human a hard question and expect them to just regurgitate the answer immediately. They need to work through it, they need-

    12. HS

      Have you ever been in a board meeting? (laughs)

    13. AG

      (laughs) Yeah, right, right. Yeah, no. Sometimes we do, sometimes we do. No, but I, I think, yeah, there's like this very obvious next step for models which is... you need to let them think and work through problems. You need to let them fail. They need to try something, fail, understand why they failed, roll that back, and make another attempt. And so, at present, there's no notion of problem-solving in models.

    14. HS

      And when we say problem-solving, that is the same as reasoning, correct?

    15. AG

      Yeah, yeah.

    16. HS

      Why is that so hard, and why do we not have any notion of that today?

    17. AG

      I think it's not that reasoning is hard. Uh, I think it's that, um, there's not a lot of training data that demonstrates reasoning out on the internet. Um, the internet is a lot of the output of a reasoning process.

    18. HS

      Mm-hmm.

    19. AG

      Like, you don't show your work when you're writing something on the web. You sort of present your conclusion, uh, or present your idea, which is the output of loads of thinking and, and experience and discussion. Um, so we just lack the training data. It's just not freely available. You have to build it yourself. And so that's what companies like Cohere and, and OpenAI and Anthropic, et cetera. That's what we're doing now, is collecting data that demonstrates human reasoning.

    20. HS

      How do you think about competing against, like, OpenAI's incredible UGC play?

    21. AG

      Yeah, no, that's super difficult. And especially with enterprises, they never let you train on their data. And so we can't train on any of our customers' data.... super private. Their perspective is their data is their IP, there's too many secrets in there, IP, and so they're just not willing to do it. And I, I'm super empathetic to that position. Um, and so for us, our focus is synthetic data, we push a lot on that, as well as having like a human annotation force, and Scale is a partner for that. We have our own folks in-house, but that's the burden that's placed on us because we're not a consumer company. We have to generate this data ourselves. The benefit is, we're more focused, so we have less surface area to cover, so it's not the entire world showing up and asking us to do potentially anything. It's like enterprises with very clear patterns for the type of stuff they wanna do. It's like they wanna automate certain finance functions, or they wanna automate certain, you know, HR functions. And so the scope is reduced dramatically, which lets us really focus in on, on those pieces.

    22. HS

      What will the synthetic data market look like in 10 years, and will it be won by two to three providers?

    23. AG

      I mean, I've heard that the current LLM API market is dominated by synthetic data.

    24. HS

      Mm.

    25. AG

      That's mostly what people are doing. They're creating data from these big expensive models to fine-tune smaller models that are more efficient.

    26. HS

      (laughs) .

    27. AG

      Uh, so they're ostensibly like distilling the bigger models. I don't know how sustainable that is, um, as a market, but I, I definitely think there's always gonna be a new task, or new problem, or, or new demand for data, and whether that comes from models or whether it comes from humans, we're gonna have to,

  5. 13:44 – 16:55

    Is AI Heading Toward a Race to the Bottom?

    1. AG

      to meet the demand.

    2. HS

      One thing I'm, uh, kind of concerned about, Aidan, or I look at with, with hesitation is you see OpenAI price dumping. You see, you know, M- Meta releasing for free, and Mark, you know, uh, espousing or pronouncing the value of open source and an open ecosystem. Are we seeing this real diminishing value of these models, and is it a race to the bottom and a race to zero?

    3. AG

      I think if you're only selling models for the next little while, it's gonna be a really tricky game. It's gonna be really tough. Um, it won't be a small market. There will be a lot of...

    4. HS

      Can I be really stupid? Who's only selling models, and who's selling models and something else?

    5. AG

      I don't wanna name names, but let- let's say Cohere right now only sells models. We have an API, and you can access our models through that API.

    6. HS

      Okay.

    7. AG

      I think that that will change soon. There are gonna be changes in, in the product landscape and what we offer to sort of push, uh, not away from that, but to add onto that picture and that product suite. But if you're only selling models, it's going to be difficult because it's gonna be like a zero margin business, because there's so much price dumping. People are giving away the model for free. Um, it'll still be a big business. It'll still be a pretty high number, because people need this tech, it's growing very, very quickly. But the margins, at least now, are gonna be very, very tight. Um, and so that's why there is a lot of excitement at the application layer.

    8. HS

      Mm-hmm.

    9. AG

      Um, and I think that discourse in the market is probably right to point out that value is accruing beneath, like at the chip layer, 'cause everyone is spending insane amounts of money on chips to, to build these models in the first place. And then above, at the application layer, where you see stuff like, uh, let's say ChatGPT, which is charged on a, like per user basis, you know, 20 bucks a month type thing. That seems to be where, at this phase, value is accruing. I think that the model layer is an attractive business in the long term, um, but in the short term, with the status quo, it is a very low margin, sort of commoditized, uh, business.

    10. HS

      Can I ask you, if we just kind of break it down, you mentioned kind of the chip layer there. How do you think about your spend today on chips and how that has changed over time as a percent of spend?

    11. AG

      Yeah, it's gotten way, way more. (laughs) .

    12. HS

      (laughs) .

    13. AG

      Uh, yeah, so it's a huge chunk of our spend now. Um, way too much.

    14. HS

      And you have a direct relationship with NVIDIA

    15. HS

      ... and Model-

    16. AG

      Yeah, yeah. A- and loads of chip players, like we're, we're close with, um, NVIDIA, AMD, uh, in conversations with lots of startups that are building new chips. We also run on, uh, TPUs from, from Google.

    17. HS

      And that's 'cause you don't wanna have a single point of failure?

    18. AG

      Yeah, well, I, it's mostly because market demands it. Like our customers wanna be able to run on tons of different platforms. They want optionality, they don't wanna get locked into one, and so we need to provide a really diverse base of platforms to run on. Similarly to how, uh, we've been very avoidant to get locked into one cloud, and we wanna be available on every cloud. It's because market demands it. Like customers want choice, they don't wanna get verticalized lock in to

  6. 16:55 – 18:30

    Will Companies Keep Building Their Own Chips?

    1. AG

      one provider.

    2. HS

      Totally get you. Do you think everyone will be kind of verticalizing their own stack in terms of building out their own chip capabilities? We've seen Apple recently talk a lot about kind of their own verticalization and owning the chip layer too. Do you think that'll be a continuing trend or not?

    3. AG

      I think it will be. Um, right now, chips are just exceptionally high margin-

    4. HS

      Mm-hmm.

    5. AG

      ... and there's very, very little choice in the market. Um, that's changing. Uh, I think it's gonna change, um, faster than other people think. But yeah, I, I'm, I'm very confident...

    6. HS

      And then you're also seeing the stockpiling of GPUs change a lot.

    7. AG

      Mm-hmm.

    8. HS

      Even before there was this like real supply chain shortage.

    9. AG

      Yes, yeah, yeah.

    10. HS

      And now, it's not so much.

    11. AG

      No, yeah. The s- the shortage is going down. I think the, it's becoming clear there are going to be more options available, and not just on the inference side. I think everyone, like inference is already quite heterogeneous. You actually already have loads of options on the inference side, which is like not the training of the models, but the serving. On the training side, the picture has been, it's essentially one company that creates the chips that you can use to train big models. That's still true today. Um, but act- actually it's not true today. There's two companies. Um, you can definitely train big models on TPUs.... those are actually now a, a usable platform for super large-scale model training, and I think Google has proven that quite convincingly. And then there's NVIDIA. Um, but I think soon AMD, Trainium, these platforms are gonna really be ready for primetime.

  7. 18:30 – 19:41

    Is Model Progression Outpacing Compute Advancement?

    1. AG

    2. HS

      The question that I have is when you look at the spend on the mo- like, the models and actually compute and you see... What worries me is actually model progression is moving so much faster than data center buildout and kind of compute progression. And so it's like when you look at a year's time, are we gonna be running the newest, latest models on H100s or whatever the 18-month-old compute is?

    3. AG

      Mm-hmm.

    4. HS

      And is there a misalignment between model advancement and compute advancement?

    5. AG

      I mean, the supply chain thing is, like, really, really interesting. I, I think-

    6. HS

      Do you need to build out your own data centers?

    7. AG

      No, we partner with folks.

    8. HS

      Is there ever a time when that changes?

    9. AG

      You know what? We're an economically rational actor, and so if it's cheaper for us to build out our own data centers, we'll go do that. Um, we've run the numbers, and, and we feel confident that, um, the price we're getting from our providers makes that not a really attractive path. But yeah. I mean, the oth- the other reason we do it is if there were a chip to come out that was really attractive in its cost profile but no provider would procure it

  8. 19:41 – 23:48

    Early Challenges in Accessing Compute Chips

    1. AG

      for us.

    2. HS

      Did you have any challenges in access to significant amounts of compute chips in the early days? Today, has that changed?

    3. AG

      We've been around for, like, five years now, and so, uh, it was, like, well before, uh, the whole thing started popping off. Uh, so we, we were lucky. We-

    4. HS

      Did you expect it to pop off?

    5. AG

      I mean, I wouldn't have started the company if I didn't (laughs) expect it at all.

    6. HS

      But, like, a bit, like, over-

    7. AG

      Not, not in the way that it happened.

    8. HS

      'Cause it was a tr-

    9. AG

      Totally, totally. It, it happened later and much more suddenly-

    10. HS

      Yeah.

    11. AG

      ... than I expected. Yeah.

    12. HS

      'Cause you co-authored a piece in 2017-

    13. AG

      Mm-hmm.

    14. HS

      ... around Transformers.

    15. AG

      Yeah, yeah.

    16. HS

      Yeah, and so you were expecting it to pop off relatively quickly, I take it.

    17. AG

      No, not, not at that moment. In 2017, I was kinda, like, I was the intern on this Transformer paper, uh, and I thought, "Oh, this is just research, you know. We just create new architectures, improve translation scores by 3%, and that's, that's what it is." Um, I didn't expect all that came of that, that architecture, uh, the Transformer and, um, the community's love for it, and, and real, like, consolidation onto the Transformer as a platform for building AI. That, I didn't expect. Um, with language modeling and the whole scaling project, I thought the world would catch on way faster to that piece. It, it started to become really obvious, but then it was two, three years before everyone woke up, and it, it sort of hit the world.

    18. HS

      What was that turning point? Was it ChatGPT and the-

    19. AG

      It totally was, yeah.

    20. HS

      Yeah.

    21. AG

      It was ChatGPT. It was putting the technology directly in front of the user. So you could... You don't have to explain it to your mom or dad or whatever. You can, like, you know, sit down, talk to this thing. Experience what it's like to talk to these models.

    22. HS

      Do you think chat is the best interface for consumers?

    23. AG

      For some stuff. I think for other stuff, GUI, like, you know, like, a user interface, (laughs) a traditional visual one is quite good. Um, I think it really depends. Chat as an interface onto everything I don't think makes sense. I don't want to have to type out explicitly my instructions to get stuff done. Like, sometimes I just wanna click some buttons and go through a GUI and get the job done. So yeah, I, I don't think like, you know, GUIs are dead and that we should replace everything with a textbox, but I do think it provides this really compelling interface. Certainly, voice does. Like, voice, voice is magical. Certainly, it, it was magical the first time I saw a model write text back to me as compellingly as a human. That happened, like, in 2017. Shortly after we submitted the paper, we started training language models on Wikipedia, um, and we sampled from those models, and it could write Wikipedia pages as convincing as a human page. And so that, that was a very magical moment that computers kinda woke up and started speaking back to us. Um, and then the next time was dialog as an interface. So not just I submit an instruction, the model returns a response, but having a conversation over chat with the model.

    24. HS

      OpenAI are investing a lot in voice.

    25. AG

      Mm-hmm.

    26. HS

      Do you think their confidence in voice as the next kind of, uh, interface with consumers is right and justified?

    27. AG

      Absolutely.

    28. HS

      Yeah.

    29. AG

      Absolutely. Like, anyone who has tried having a voice-based conversation with one of these models, it's like a stunning experience. I, like you're, you're kind of left in shock when you hear the model exhibiting emotion and inflection, and, you know, you hear it breathe, i- inhale before it speaks. You hear its lip smacking, like y- there's something so incredibly compelling (laughs) about that experience. It's, it's hard to, it's hard to describe until you try it for the first time. Um, it's such a compelling interface.

  9. 23:48 – 27:06

    Are We Underestimating the Short-Term Impact of AI Advancements?

    1. AG

      It's so compelling.

    2. HS

      To what e- I've always... Was brought up on the idea that actually we always overestimate things in the short term and underestimate them in the long term. To what extent do you think that's the case here? Or actually, voice is coming and coming pretty quickly. GPT-5 is coming and coming, you know, whether that's in three to six months, still coming pretty quickly. To what extent are we actually underestimating the short term?

    3. AG

      I think it's, it's... Oh, okay. There's, like, two, two things happening. One, it's, it's getting harder... It's getting harder to deliver gains in the models. It is getting more difficult, more arduous, more costly, because there was a time where the models were dumb enough that I could pull... I say dumb enough, uh, (laughs) but the models were-

    4. HS

      Not sophisticated enough.

    5. AG

      ... yeah, sufficiently, uh, unintelligent that I could pull anyone off the street. Any human was more intelligent than the model and had something to teach it, right? I could just grab someone, say, "Talk to this model. Find errors," and they will, and improve it. Eventually, uh, the models, like, it was just kind of hard to get people to, the average person to find knowledge gaps or that type of thing, and so you had to start going to domain experts. Um, and initially, like, cheap kind of junior ones, like students of computer science could teach the model something, students of biology could teach the model something, and then the model started getting really good and kind of matching that level of knowledge. And you're, you're just going into more specific and more scarce pools of talent to get them to teach the model their knowledge, um, and so it gets more high-friction, more expensive to, to teach the model the incremental new knowledge.

    6. HS

      When does it not become worth it? If you know what I mean. If you think about, like, I always think about language learning, which is like, you can learn something like, uh, 95% of a language in six months, but to get a 98% proficiency, it takes five years.

    7. AG

      Mm-hmm.

    8. HS

      Uh, I've kind of bastardized that stat, but it's about that. To what extent does one go, actually, for that extra incremental 0.5% increase, it's another billion dollars. That's no longer efficient.

    9. AG

      Yeah, I mean, fortunately, costs are falling super fast on, on everything. Like, co- like the compute costs, dollars per flop, um, just the, the scale of model-

    10. HS

      Dollars per flop?

    11. AG

      Yeah. Like, how much a flop costs.

    12. HS

      Oh, right.

    13. AG

      Or flops per dollar. So a, a flop is a unit of compute and, uh, a model-

    14. HS

      (laughs) Sorry. Dude, that's hilarious. A flop, like, for me in the UK, is like a-

    15. AG

      Flop, yeah.

    16. HS

      ... yeah, like a flop, like a mistake.

    17. AG

      Yeah, yeah, yeah.

    18. HS

      Like, "Oh, you completely flopped that one." I was like, "Dollars per flop?" Is this like a new thing? (laughs)

    19. AG

      No, no, no, no, no. It, it's, uh ... No, it's a super old thing. It's, uh, floating-point operations.

    20. HS

      (laughs)

    21. AG

      So it's literally, like, one clock cycle of a-

    22. HS

      That's amazing. I'm glad I clarified. (laughs)

    23. AG

      Sure. Yeah, yeah, yeah, no. Um, and so if you have, like, 10 billion parameters, that basically equates to some number of flops and if you 10X that, if you have 100 billion parameters, that equates to 10X that number of flops, approximately. The, the price for a flop goes down super, super quickly, uh, over time. And so that's what's unlocked much larger models today compared to 2017 and, um, you know, even, even two

  10. 27:06–27:55

    Is It Too Late for Startups to Enter the AI Model Space?

    1. AG

      years ago.
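
The parameter-to-FLOP relationship Aidan gestures at can be made concrete with a standard rule of thumb (an assumption on the editor's part, not a figure from the episode): training a dense transformer takes roughly 6 FLOPs per parameter per token, so 10x the parameters means roughly 10x the compute at a fixed token count, and the training bill scales directly with dollars per FLOP.

```python
# Rough sketch: how parameter count maps to training FLOPs and cost.
# Assumption (common rule of thumb, not from the episode):
#   training FLOPs ~= 6 * params * tokens

def training_cost_usd(params: float, tokens: float, usd_per_flop: float) -> float:
    """Approximate training cost for a dense transformer."""
    flops = 6 * params * tokens
    return flops * usd_per_flop

# A 10B-parameter model on 1T tokens at a placeholder price of $1e-18/FLOP.
cost_small = training_cost_usd(10e9, 1e12, usd_per_flop=1e-18)
# 10x the parameters -> ~10x the FLOPs at the same token count.
cost_large = training_cost_usd(100e9, 1e12, usd_per_flop=1e-18)

print(f"10B model:  ${cost_small:,.0f}")
print(f"100B model: ${cost_large:,.0f}")
```

With these placeholder numbers the 10x-parameter model costs 10x as much; halve the price of a FLOP and both bills halve, which is the "unlock" Aidan describes between 2017 and today.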

    2. HS

      Given that, do you not think that actually it is not too late for a new startup to enter the model space? 'Cause everyone's like, "Oh, it's, it's far too late for a startup to enter the model space."

    3. AG

      Mm-hmm.

    4. HS

      But actually, given the decreasing cost barrier, does that not mean it's actually more accessible than ever for startups to do?

    5. AG

      Yeah, so it becomes cheaper to build last year's model by, like, a factor of 10 or 100 each year. Um, we just get better data, cheaper compute.

    6. HS

      Yeah.

    7. AG

      So yeah, it definitely lowers the barrier to the previous generation of models. The, the reality of the matter is nobody cares about the previous generation. Nobody wants them. There's no market for last year's model. It's, like, useless in comparison to this year's model. Any sort of, like, technological development, um, really makes the last generation obsolete super quickly.
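
The economics here — last year's model getting roughly 10x cheaper to reproduce each year, while the frontier model is the only one with a market — can be illustrated with a toy depreciation curve. The 10x annual factor is Aidan's lower bound; the dollar figures are made-up placeholders.

```python
# Toy illustration: reproducing an older model gets ~10x cheaper per year,
# but the current frontier model is the one buyers actually want.
# All dollar figures are hypothetical placeholders.

FRONTIER_COST = {2022: 1e7, 2023: 1e8, 2024: 1e9}  # cost to train that year's frontier
ANNUAL_DEFLATION = 10  # Aidan's "factor of 10 or 100 each year" (lower bound)

def replication_cost(train_year: int, current_year: int) -> float:
    """Cost today of reproducing the model that was frontier in train_year."""
    age = current_year - train_year
    return FRONTIER_COST[train_year] / (ANNUAL_DEFLATION ** age)

# In 2024, 2022's frontier model costs ~1% of its original price...
print(replication_cost(2022, 2024))  # 100000.0
# ...but the 2024 frontier still costs the full (placeholder) 1e9.
print(FRONTIER_COST[2024])
```

The barrier to *yesterday's* capability keeps collapsing, yet a new entrant still has to pay this year's frontier price to be competitive — which is the asymmetry behind "nobody cares about the previous generation."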

  11. 27:55–30:40

    AI Development: The Exponential Rise in Costs

    1. AG

    2. HS

      I think the difference is, like, it costs you $10 million to build V1, so I'm saying as a software product.

    3. AG

      Mm-hmm.

    4. HS

      And then to make V2, that update a little bit better, it's like, another one or two million dollars. But here it's like three billion to build one and then five billion to build two. The increment is not incremental. It is an order of magnitude.

    5. AG

      I don't, I don't know if it's always the pattern that it's cheaper to build the next generation. I, I think with, like, chips for example and, and other very complicated pieces of technology, um, it does get more expensive to generate each new generation. And we still do it 'cause it's worth it.

    6. HS

      Okay. So going back to your statement there, sorry, 'cause I went off on a tangent there. So, like, no one gives a shit about last year's model.

    7. AG

      Mm-hmm. Well, you were, you were asking, um, you know, does the, do the improvements sustain? And I was saying it's getting harder to improve these models. It's getting higher friction. The second weird effect is that because these models are getting smarter, humanity's ability to distinguish between them, or not humanity but each individual's ability to distinguish between them becomes way harder. You, you can't tell the difference between generations 'cause you're not enough of an expert in medicine, mathematics, physics, to, to actually feel the change. Like, the model is already kind of as good as it can get with all the basic level knowledge, which is what you and I have, and so when we interact with it, we get the same experience between generations. But in reality, those generations are changing dramatically in much more specific capabilities or raw intelligence. And yeah, you were asking is it worth it. Is it worth it to keep spending so much money to push forward? And I think absolutely it is. Absolutely it is. It, it's worth it to someone.

    8. HS

      Why?

    9. AG

      It's, even if for you and I, like, as consumers, when we're using this stuff, like, we don't care if it knows, uh, C*-algebras and, like, quantum physics. It doesn't matter to us. It, it has no impact on our experience with this technology. But that's really helpful for a researcher in quantum physics. And so we'll make more progress there by providing tools. It's the same question around just technology in general. Like, we have abundant food, we have, you know, super cheap, uh, cars now and, uh, we have phones in, in all of our pockets. We're kinda good. Like, we've got it. Should we really invest in the next generation of technology that focuses on-... you know, creating a new material for a spaceship so that it can get up into orbit more efficiently. Yeah, we should. We should. And it might not matter to you, but you don't give a shit if the spaceship gets up into orbit (laughs) cheaper. But it matters to someone a lot, and they're willing to pay, and there's a market for it. And that's how progress

  12. 30:40–35:10

    Will Cloud Giants Continue Acquiring Smaller AI Model Providers?

    1. AG

      sustains itself.

    2. HS

      That continuing progress, we ... Going back to it, obviously it costs, and will continue to cost a lot of money. You, you said before a really interesting two words, which was effective subsidiaries. And we've seen a lot of companies be kind of bought, acquired, whatever that is, subsumed in. I think everyone realizes now that cloud is the cash cow that keeps on giving when you look at kind of the continuing growth rates and profitability of Azure and Google Cloud and everything in between. And actually, you'll just see the majority of those smaller model providing companies bought up by these la- large cloud providers. Do you agree with that as a probable likelihood for the next three to five years?

    3. AG

      Three years, yeah, yeah. I think there will be a culling of the space. I, I think it's already happening. I think a lot of the model builders that-

    4. HS

      Well, A- Adept's gone to Amazon. We had David on the show. He was fantastic. Love David.

    5. AG

      He's great.

    6. HS

      Really, really good. Um, Inflection, obviously gone to Microsoft.

    7. AG

      And I think there's more, more coming. Um, there's gonna be a, a consolidation in the space, for sure. And it's really dangerous where you ... when you make yourself a subsidiary of your cloud provider.

    8. HS

      Why?

    9. AG

      Well, it's, it's just not good business, right? Like, it's ... So to raise money as a company, just generally, you need to go and convince some investors who they only care about ROI, uh, on that capital. And they give you the money, and you go create some value using it. But when you're, you're doing this raising from cloud providers thing, the math is super different.

    10. HS

      Do you think venture investors will make money from the model investments we've seen over the last few years?

    11. AG

      Uh, Cohere's investors will. (laughs) They'll make a lot of money. That was normal back then.

    12. HS

      Do you look back on that and feel great for making these people who believed in you a load of money? Are you like, "Fuck, that was cheap, and I shouldn't have given away that much"?

    13. AG

      No, I mean, I, I think that everyone who was there at that point is still here. They're still fighting. So our, our first investor was, uh, Radical Ventures' Jordan Jacobs there, and he's still on our board. He's still ... I, I call him like the fourth co-founder of Cohere. Like, he, he's built the company alongside us, and is still, you know, very active and present in, in building the company. Um, so I, I don't regret it.

    14. HS

      What was the latest price?

    15. AG

      The media reports that it was, uh, a, a little over 5.5.

    16. HS

      D- like, does that cause you stress? And what I mean by that is just like, you know, when you look at revenues to valuation. I know we're not in that game, but at some point, everyone is in that game.

    17. AG

      Of course.

    18. HS

      Does that, does that make you go, "Fucking hell, we got a long way to go"? You know, when I look at men's health, I'm like, "Fucking hell, I got a long way to go." (laughs)

    19. AG

      (laughs) I, I think it's definitely pressure. It's good pressure. Like, we have to ... Yeah, like you say, everyone gets into the revenue multiples game at some point. At some point, it converges to public market multiples. Um, I think we are actually in a dramatically better position than a lot of our comparables.

    20. HS

      Why?

    21. AG

      Because w- our valuation is not at the crazy state that a lot of others are. That's my belief. We still have to grow into it, but I'm very confident that the market is strong. A lot of people need these models. On the margin side, it's under pressure right now because of price dumping and because of free models being given out. Um, but that will change over time and then Cohere, our product stack, will also evolve.

    22. HS

      Which one do you most respect?

    23. AG

      I would say OpenAI.

    24. HS

      Why?

    25. AG

      They, they paved the way, like a ... Just sort of like a irrational conviction to this vision of scaling. And I think that, that was driven by ... I, I remember talking to Ilya, um, about this stuff way before GPT-1, you know, like, uh, in the transformer times, around that time, um, because he was big in the Toronto scene. He studied under Geoff, came from Toronto, his family's in Toronto. And this, this idea of scaling was in his head. It was in his head back then, years before he actually started pursuing it properly. And that conviction led to the world that we live in today, like, this objectively magical technology that's emerged and is now sitting available to everyone. Um, I really admire

  13. 35:10–48:29

    Is OpenAI Prioritizing AGI Over Practical Products?

    1. AG

      Ilya.

    2. HS

      Yeah, Ethan Mollick from Wharton said on the show yesterday that OpenAI really only cares about, uh, AGI and the pursuit of AGI. And so they abandon products like Code Interpreter and a lot of other really useful products because they're focused on AGI. So it's not a criticism, but just like, that's their focus. Do you agree with that, or do you think they are actually kind of dual-minded in terms of both going for the long-term AGI and also being much more cognizant of creating short-term valuable products for enterprise and for consumers more broadly?

    3. AG

      Uh, at least like lately, or in the new OpenAI, um, they're like a product company. They're like hardcore building a consumer product. That is their objective, and it's working. People love that product. It's, you know, a household name at this point. Um, so I think in the consumer space, they're going to be a product company, and I think that, um, they have to become one in order to foot the bill to build what they wanna build. Um, but I ... If you look at some of the departures, I, I would say it seems like the AGI effort is starting to take a backseat to, to product, into the, the consumer offering.

    4. HS

      Something that I worry about is, and I use Canva as an example of this, which is like, are we going to see companies be able to make more revenue per user from adding AI to their products? And every company is an AI pr- company now, whether it's Zendesk with customer support, Notion with note-taking, Canva with design, and it's all AI. You know, Canva bluntly said on the show recently that they are having margin compression because they don't charge more per seat.

    5. AG

      Mm-hmm.

    6. HS

      But they have AI infused in all of their product, and so you can create, you know, a-anything with AI in their products and obviously each query costs money. And so it's costing them more money and they're making the same revenue. Will we actually be able to make more revenue per user or will it just create a better customer experience?

    7. AG

      There's two different camps right now. Some people are pricing the exact same with AI features and using it to drive expansion in their business, and then the other folks like Microsoft, like Salesforce, like, uh, like Notion as well, um, they're charging for the AI features and getting a bigger business as a product. Both of those strategies are, are fine a- and super reasonable. For folks like Canva who are keeping the same price, um, I mean, I think it's a good bet. They wanna grow their user base, they wanna expand their user set, just give them the most useful product possible. At the moment, don't worry about margins because the cost of AI is falling super, super quickly. Um, I think that's reasonable.
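
The two pricing camps reduce to simple per-seat unit economics. A toy sketch of that arithmetic — all the numbers below are hypothetical; only the structure mirrors the discussion:

```python
# Toy unit economics for the two pricing camps described above.
# Every number here is a hypothetical placeholder.

def margin_per_seat(price: float, base_cost: float,
                    queries: int, cost_per_query: float) -> float:
    """Monthly gross margin per seat once AI features are bundled in."""
    return price - base_cost - queries * cost_per_query

# Camp 1 (flat pricing, like Canva): same $12 seat, AI queries eat margin...
flat_now = margin_per_seat(price=12.0, base_cost=3.0,
                           queries=200, cost_per_query=0.02)    # 5.0
# ...but if inference cost falls 10x, most of the margin comes back.
flat_later = margin_per_seat(price=12.0, base_cost=3.0,
                             queries=200, cost_per_query=0.002)  # ~8.6
# Camp 2 (charging for AI, like Microsoft/Salesforce): raise the price.
charged = margin_per_seat(price=18.0, base_cost=3.0,
                          queries=200, cost_per_query=0.02)     # 11.0
print(flat_now, flat_later, charged)
```

The flat-price bet Aidan calls reasonable is visible in the middle line: accept compressed margin today because the cost-per-query term is the one falling fastest.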

    8. HS

      Okay. Fantastic. Can I ask, on enterprises, you know, Canva is obviously making a hard push for enterprise, you sell into amazing enterprises, what's the number one blocker today for why enterprises don't adopt?

    9. AG

      It's mostly trust in the technology. So security. Um, everyone is very sketched out by the current state of things, uh, who's going to train on my data.

    10. HS

      So sketched out means concerned?

    11. AG

      Yeah, yeah.

    12. HS

      Right.

    13. AG

      Um...

    14. HS

      Not like a flop.

    15. AG

      (laughs) Yeah. Yeah, well, they're hoping that they don't have a flop.

    16. HS

      Okay.

    17. AG

      Uh, so they're, they're really scared that someone's gonna take their data, train on it, and put them in some sort of, like, security vulnerability, um, or that they'll lose IP. I think that's a very valid concern because people have been training on user data.

    18. HS

      Is there anything you can do to, uh, reassure them other than, "Hey, we'll just not use it and use synthetic data"?

    19. AG

      Yeah. So our, our deployment model is set up to do that. We focus on private deployments, like inside their VPC, on-prem, like, so what that means is just, like, it's on their hardware completely privately. We're not asking them to send data over to us so we can process it and send back the response from the model. We're saying we'll bring our models to where your data is and, you know, we can't see any of it.

    20. HS

      Will we see any movement back to on prem in this new world?

    21. AG

      When I speak to folks... It's super conflicted. In financial services, yeah, people are pulling away from cloud. They're pulling away from cloud, they're building out their own data center capacity. Uh, everywhere else still seems to be, "We need to migrate to cloud. It doesn't make sense for us to have these data centers." I don't know. I think that it probably depends on the vertical that you're looking at.

    22. HS

      What do they just get totally wrong about AI? I think at the enterprise, um, education curve is still very early. What do they just not understand about it?

    23. AG

      There's a lot of fear around AI being wrong. There's a hallucination-

    24. HS

      Mm-hmm.

    25. AG

      ... in, in these models and everyone views that as some sort of like, you know, the technology is doomed, you know, sometimes it hallucinates. It doesn't reflect reality. The models definitely do hallucinate. The hallucination rates have been dropping dramatically but they'll, they'll always have some chance of making s- stuff up or getting something wrong. But we exist in a world with humans and humans hallucinate constantly. We get stuff wrong, we, you know, misremember things, and so we exist in a world that's robust to error. And so I, I think there's too much emphasis on that.

    26. HS

      We don't have hallucination benchmarks though, do we?

    27. AG

      We do, yeah.

    28. HS

      We do?

    29. AG

      Yeah, yeah. Like, Vectara has one and there, there are other hallucination benchmarks.

    30. HS

      Okay.
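
Benchmarks of this kind typically report a hallucination rate: the fraction of model outputs containing claims unsupported by a source document. A minimal sketch of that bookkeeping — the `is_supported` judge below is a naive keyword stub of the editor's invention; real benchmarks such as Vectara's use a trained factual-consistency model:

```python
# Minimal sketch of hallucination-rate scoring. The judge is a naive
# keyword stand-in; real benchmarks use a trained consistency model.

def is_supported(claim: str, source: str) -> bool:
    """Naive judge: every content word of the claim appears in the source."""
    stop = {"the", "a", "an", "is", "was", "in", "of"}
    words = [w for w in claim.lower().split() if w not in stop]
    return all(w in source.lower() for w in words)

def hallucination_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (claim, source) pairs where the claim is unsupported."""
    unsupported = sum(not is_supported(c, s) for c, s in pairs)
    return unsupported / len(pairs)

pairs = [
    ("paris is in france", "Paris is the capital of France."),
    ("paris is in italy",  "Paris is the capital of France."),
]
print(hallucination_rate(pairs))  # 0.5
```

Tracking this number across model generations is what lets one say, as Aidan does, that hallucination rates "have been dropping dramatically" while never reaching zero.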

  14. 48:29–50:09

    What's the Biggest Overlooked Factor in AI's Future?

    1. AG

      to push through.

    2. HS

      What do you think is the biggest thing that people are not seeing about the community right now in AI and how we're looking at the next 12 to 24 months?

    3. AG

      I think there's, there's sort of like a meme that's going around of people saying, "We've plateaued. Nothing's coming. It's slowing down." I actually really think that's wrong. Um, and, and not just from like a, "We need to 10X compute and that type of thing," perspective and... trust me, it'll get better. Um, but, but from a methods perspective? So when I was talking about, like, reasoners and, and planners and, um, models that can try things, fail, and recover from that failure, um, and carry out tasks that take a long time to accomplish, these are like, uh, for, for the technologist, obvious things that just don't exist in the technology today.

    4. HS

      Mm-hmm.

    5. AG

      We just haven't had time to turn our focus there and add that capability into the model. Um, for the past year, year plus, folks have been focusing on that, and it will be ready for, for production. And so we'll see that come out, and I think that will be a big change-

    6. HS

      I see.

    7. AG

      ... in terms of capability.

    8. HS

      So for me as an investor, Aidan, help me. You're now an investor with 20VC. Where's the opportunity for us?

    9. AG

      I think the, the product space, the application space is still extremely attractive. There will be new products that come out of this technology, um, that transform social media. Like, people love talking to these models. Uh, like, uh, the, the usage time is insane.

  15. 50:09–54:20

    Concerns About a Future Where AI Replaces Human Interaction

    1. AG

    2. HS

      Do you think this is good, Aidan? You grew up i- in a very wholesome, natural environment, and, you know, you mentioned your family also being in the UK, I'm sure you see them more now you're in the UK. I do not want my kids growing up in a world where they're speaking to agentic systems more than they are humans, and, like, gaining fulfillment from speaking to a model.

    3. AG

      You might actually (laughs) be wrong (laughs) . I think you might want your children to be speaking to an extremely empathetic, extraordinarily intelligent and knowledgeable, safe intelligence that can teach them things and, uh, have fun with them, and doesn't get tired of them, doesn't snap at them, doesn't bully them, doesn't pick on them, um, imbue them with insecurities. There is no replacement to humans. There's no replacement. There's no world where suddenly we all start dating chatbots and w- you know, human birth rates plummet. I, I don't think that happens, right? Like, I wanna have a child. I can't do that with a chatbot.

    4. HS

      (laughs)

    5. AG

      (laughs) You know, like, a, a human partner-

    6. HS

      Yet.

    7. AG

      (laughs) Yet. A human partner is way more, infinitely more valuable to me than, than whatever it, like, however compelling a chatbot is. A human is so much more valuable. I, it's the same reason why I don't think, um, we'll be able to replace humans in, in the workplace. It'll be an augmentation. Humanity will become more productive and do more. It's not that there will be less humans doing the work. We c- you can't replace humanity. Think about, like, sales, right? If I'm getting sold to by a bot, I, I'm not buying. It, it's that (laughs) simple. I don't wanna talk to a machine. I, like, for certain, like, simple purchases, maybe. But for the purchases that count, the ones that matter to me and my company, I would want a human accountable on the other side of that, that deal. When something goes wrong, I need someone, a human, who has authority to, to be able to intervene. I, I really think that the fears around displacement and, um, replacement, both on the consumer side, where we're all gonna get addicted to chatting to these chatbots, and on the, the workplace, the end of work, you know, there's gonna be mass unemployment, I can't see that happening.

    8. HS

      I think there's always a recognition that there's always kind of this kind of mild displacement in new technology adoptions, which is kind of standard, but you do see some form of displacement. But not to the extent where we're like 80% of us are... I mean, I'm sure you look at your grandparents and say, "You stick your computer in there with email," and they're like, "What will we do all day?"

    9. AG

      Mm-hmm.

    10. HS

      "This is crazy." And so I, I completely agree with you there. I do worry on the lower end of the spectrum, though, being like a, you know, Klarna losing, whatever, 70, 80% of their customer service team.

    11. AG

      There will be localized displacement, for sure. Um, but in the aggregate, it'll be growth, not displacement. Um, so for sure, there are certain roles that are vulnerable to the technology. It's kinda hard to come up with them. Like, customer support is definitely one. But at the end of the day, there still needs to be humans there to do that, uh, just not as many as there are today. Um, but customer support is a tough role. It's, like, really psychologically ugly. You get people screaming at you. Like, the reality of it, if you've ever listened in on call recordings of, of what it's like, th- that's an, a really emotionally taxing job.

    12. HS

      Yeah. It's a bit, it's, uh, very much like content moderation on large social networks.

    13. AG

      Yeah.

    14. HS

      That is emotionally scarring in many respects.

    15. AG

      Every day, you wake up, you go to work, you get screamed at, and have to apologize for hours. And I'm-

    16. HS

      And that's just 20VC. (laughs)

    17. AG

      (laughs) I, I think that side of things, maybe we let the models handle those conversations, and the humans can come in and, and help with, um, you know, the actual customer support conversations that humans would enjoy dealing with, right? Like, ones about they have a problem that needs solving, uh, and they're not angry about it (laughs) , there's just an opportunity to make this person's

  16. 54:20–55:48

    What Will AI Do in Three Years That It Doesn't Do Today?

    1. AG

      life better.

    2. HS

      What does AI not do today that you think it will do in three years that will be completely transformative?

    3. AG

      I mean, AI in general, I think robotics is, like, the, the place where there will be big breakthroughs. The cost needs to come down, but it's been coming down. Um, and then we need models that are much more robust.

    4. HS

      Why are you bullish on robotics being in the area of big breakthroughs?

    5. AG

      Just because a lot of the barriers have fallen away. Like, before, like, reasoners and planners, um, inside of these, these robots, like, the software behind them, they were brittle and they had to, like, you had to, like... program each task you wanted it to accomplish, and it was super hard-coded to a specific environment. So you have to have a kitchen that is laid out exactly like this, with the handle-

    6. HS

      Exactly the same dimensions-

    7. AG

      Yeah.

    8. HS

      ... nothing different.

    9. AG

      Yeah. Um, so it was very brittle. And on the research side, using foundation models, using language models, they've actually come up with much better planners that are more dynamic, that are able to reason more naturally around the world. Um, and so I think there's... I know this is already being worked on. There's, like, 30 humanoid robotic startups and that type of thing. But soon, someone's gonna crack the nut of, um, general purpose humanoid robotics that are cheap and robust. And so that will be a, that'll be a big shift. Um, I don't know if that comes in the next five years or 10 years, but it's gonna be somewhere

  17. 55:48–1:03:26

    Quick-Fire Round

    1. AG

      in that range.

    2. HS

      Hey, like, I could talk to you all day. I wanna do a quick fire round. So I say a short statement, you give me your immediate thoughts. Does that sound okay?

    3. AG

      Yeah, yeah. Let's do it.

    4. HS

      So, what have you changed your mind on most in the last 12 months?

    5. AG

      I think the importance of, the importance of data. I underrated it dramatically. Uh, I thought it was just scale. A lot of proof points have happened internally at Cohere that have just totally transformed my, my understanding of what matters in building this technology.

    6. HS

      So that is the quality of data?

    7. AG

      Yeah, quality. Like, like a single bad example, right, amongst, like, billions, like i- i- it's so sensitive. (laughs) It, like, it, it is a bit surreal how sensitive the models are to their data. Um, everyone underrates it.

    8. HS

      How much money have you raised now?

    9. AG

      In total, about a billion.

    10. HS

      Fucking hell.

    11. AG

      I know, yeah. (laughs)

    12. HS

      That's a lot of, that's a lot of money. Uh, what was the easiest round to raise?

    13. AG

      Maybe the first one.

    14. HS

      What was that, the fastest as well?

    15. AG

      Yeah. It was kind of like a conversation, and, you know, "Here's a few million bucks. Give it a try." (laughs) Uh, so I think that one was probably the easiest. When you're trying to raise half a billion dollars, it's a little bit more involved.

    16. HS

      Do you slightly pinch yourself when you see $500 million go in an account? Because I, I manage funds today, but we get capital calls.

    17. AG

      Uh-huh.

    18. HS

      And so it's not like, "Here's 500 million." It's like, you call it over several years, and-

    19. AG

      Yeah. (laughs)

    20. HS

      ... you just get a woof.

    21. AG

      And, and the interest on it is fantastic. (laughs)

    22. HS

      Yeah.

    23. AG

      But, um, uh, yeah, I do pinch myself. I mean, my, my brain-

    24. HS

      The interest on that is 25 million a year.

    25. AG

      I don't know what the specific number is, but it's, it's a lot. It's a big number. Um, I definitely... My brain is broken. Like, Cohere has broken my brain when it comes to economics and money. Half a billion does not feel like a lot. Like, relative to my competitors, it's not a lot, right?

    26. HS

      Does that worry you?

    27. AG

      No, I mean, it's part of our strategy. Like, if we wanted to go take that deal, we could go take that deal. But our strategy has been to pursue independence and, and doing this ourselves.

    28. HS

      If you can have any board member in the world, who do you have and why them?

    29. AG

      Mike Volpi and Jordan Jacobs. (laughs) My existing board members.

    30. HS

      Why is Mike such a good board member? Many people say this.

Episode duration: 1:03:26

Transcript of episode FUGosOgiTeI
