
Cohere Founder, Nick Frosst: How To Compete with OpenAI & Anthropic, and Sam Altman’s AI Disservice
Nick Frosst (guest), Harry Stebbings (host), Narrator
In this episode of The Twenty Minute VC, Harry Stebbings interviews Nick Frosst, cofounder of Cohere, about competing with OpenAI and Anthropic, the AGI hype cycle, and Cohere's enterprise-focused strategy.
Cohere’s Nick Frosst: Pragmatic AI, Enterprise Focus, Not AGI Hype
Nick Frosst, cofounder of Cohere and former first hire in Geoff Hinton’s Google Brain group, argues that today’s LLMs are transformative for work but fundamentally not a path to AGI as popularly described. He strongly criticizes Sam Altman’s and others’ rhetoric about near‑term existential risk as academically disingenuous and harmful to productive AI discourse and policy. Frosst explains Cohere’s strategy: tightly focused on enterprise use cases, efficient, small-footprint models, and agentic systems that operate safely on internal tools and data, rather than broad consumer products. He also explores talent wars, regulation, open vs. closed models, sovereignty, labor displacement, and why Cohere is betting on long‑lived, infrastructure‑like enterprise AI rather than hype-driven AGI narratives.
Key Takeaways
AGI and existential-risk hype is misleading and counterproductive.
Frosst argues that claims about near-term AGI and AI posing imminent existential threats were obviously wrong when made, distort policy debates, and crowd out discussion of real, near-term issues like labor shifts and inequality.
Enterprise-focused LLMs must be trained and architected differently from consumer chatbots.
Cohere optimizes for workplace augmentation (e. ...
Efficiency—especially small, capable models—is a core competitive edge.
Cohere trains models like Command-A to run on as few as two GPUs, enabling real-world deployment for customers constrained by infrastructure, and spends orders of magnitude less on compute than some rivals while still delivering production-grade models.
Benchmarks and leaderboards poorly capture real enterprise value.
Most popular evals (math reasoning, ARC AGI, etc. ...
LLMs will significantly reshape white-collar work but won’t independently make breakthroughs.
Frosst believes many routine text-and-tool-heavy tasks (e. ...
Policy will determine whether AI amplifies or mitigates inequality.
Drawing parallels to the Industrial Revolution, Frosst argues that AI’s economic outcomes depend heavily on labor policy, worker protections, and how productivity gains are shared, not on the technology alone.
AI should be treated as infrastructure, driving national and regional sovereignty strategies.
He sees sovereign models (e. ...
Notable Quotes
“I don’t think Sam Altman has done a service to the world by talking about how close AGI is.”
— Nick Frosst
“When you’re talking about a 25-year-old marketer, some portion of their work is just turning existing text and tools into another form. That’s where models shine. But most of their work is understanding culture and what will resonate. That’s not in the dataset of text from the internet.”
— Nick Frosst
“Not AGI, ROI. ROI, not AGI.”
— Nick Frosst
“You can’t think the technology is magic. You can’t think we’re doing spells. You have to know how a language model works and what that means.”
— Nick Frosst
“I used to be a real technological optimist… I wouldn’t describe myself as a technological optimist over the past 10 years.”
— Nick Frosst
Questions Answered in This Episode
If LLMs are fundamentally sequence models, what technical breakthroughs—if any—would be needed to move closer to the kind of AGI people imagine?
How should enterprises practically evaluate and compare models when public benchmarks don’t map well to their real-world use cases?
What specific labor and social policies does Frosst think would best ensure AI-driven productivity gains are broadly shared rather than concentrating wealth?
How might sovereign AI strategies in Europe, Canada, and China reshape competition with U.S. AI platforms over the next decade?
At what point does enterprise AI automation cross the line from “augmentation” into net job destruction, and how should leaders plan for that transition?
Transcript Preview
I don't think Sam Altman has done a service to the world by talking about how close AGI is. I think he has made several predictions now that are wrong, and that were obviously wrong at the time he made them. "I think AI will probably lead to the end of the world." You know, he's made allusions to things. He did a world tour where he spoke to every major leader, the world over, to tell them, "Hey, this technology is gonna pose an existential threat." And I think that was academically disingenuous, and I think did a disservice to the technology he loves.
Ready to go? Nick, I'm so excited for this, dude. When I had Aidan on the show-
Mm-hmm.
... he was like, "You've gotta have Nick on. He's the real star of the show." And he introduced us way back then.
Mm-hmm. Hmm.
So, I'm so excited that we could make this happen.
Yeah, man. I'm happy to be here.
Now, before we dive into Cohere, I have to ask, you were Geoff Hinton's first hire at Google Brain.
Mm-hmm.
And so then you're put in a room with Geoff Hinton. You get to work with him every day. What was the biggest lesson from working with Geoff, a legend of the industry?
Yeah. I, I learned... Yeah. I, I loved working with Geoff. Um, I learned everything I know about research, um, from those, those... I think we were there for four years, three years? Um, I think I was very surprised at how creatively and playfully he approaches research. Um, when we would discuss, like, algorithms or, or, or, like, optimizers or, or, uh, loss functions, we would discuss them often in, like, through physical analogy. So, we'd spend a lot of time talking about, like, imagine there's like a ball here and like an elastic band to this thing, and a pulley here, and like this is what the... you know, it's on this kind of a surface. And like a lot of it was descriptions in the natural physical world. And that was very, yeah, like playful. And a lot of it was approached with like, "Oh, what would happen if..." You know, with, with curiosity. Um, and I didn't ex-... Uh, when working with him, I didn't expect that. Um, I expected hi- it to be much more like, you know, just, "Here's the equation. Let's, let's figure out what the derivative is and let's, let's go from there." Whereas instead, a lot of it's based on like intuition.
When you look at Google Brain and you look at DeepMind, a lot think that really kind of Google were asleep at the wheel, given them not being at the forefront in what was the consumerization of it with ChatGPT.
Mm-hmm.
Do you think that's fair?
I don't know. I mean, uh, it's certainly interesting. Look, like the transformer was invented at Google, right? Like, there was... Uh, in 2017, Aidan, uh, um, amongst with, um, many other brilliant people in Google Brain published the transformer as an architecture. Um, it wasn't, it wasn't then commercialized very quickly within Google. It wasn't scaled up very quickly within Google. Um, a lot of that work had to be done elsewhere and years later. So, that's interesting. And like w- why that is, like what, what, what systems are in place to make that be the case? I, I don't know. I, I, I will say there's still a ton of brilliant people in DeepMind, I think, now. It's, it's just cons- sub- subsumed the rest of it. Doing great work, um, and they continue to make good products. Uh, it is interesting that all the people who worked on the transformer left to continue to work on the transformer.