
No Priors Ep. 91 | With Cohere Co-Founder and CEO Aidan Gomez
Sarah Guo (host), Aidan Gomez (guest), Elad Gil (host)
Cohere CEO Aidan Gomez on Enterprise AI, Reasoning, and Non-AGI Futures
Aidan Gomez, co-founder and CEO of Cohere and co-author of the Transformer paper, discusses how Cohere focuses on serving enterprises rather than competing for consumer chatbots. He explains the importance of robust foundation models, but emphasizes that enterprise success also depends on security, deployment flexibility, product structure, and helping customers avoid common implementation mistakes. Gomez outlines key use cases like RAG-based Q&A, summarization, and domain-specific assistants, and argues that reasoning-focused models and inference-time scaling will structurally change how AI capability is delivered and priced. He is skeptical of imminent AGI takeoff narratives, instead seeing a long, practical refactor of the economy using already-powerful but imperfect models, with model commoditization overstated and specialized model builders retaining leverage.
Key Takeaways
Enterprises should start with simple customization before touching pre-training.
Gomez recommends a gradient of specialization: begin with fine-tuning and prompting changes, then move to post-training (SFT/RLHF), and only consider continuation pre-training for very large organizations with massive proprietary datasets and stringent performance needs.
Most failed enterprise AI POCs stem from RAG and prompting details, not model limits.
Cohere repeatedly sees failures because teams mis-format retrieved context, store data poorly, or assume models are human-like; structured APIs and more robust models can greatly reduce these failures.
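One common failure mode Gomez describes is dumping raw retrieved text into a prompt with no structure. A minimal sketch of the alternative, with hypothetical function names and document contents (this is not Cohere's actual API), might look like:

```python
# Illustrative sketch (not Cohere's API): wrap each retrieved chunk in a
# labeled, delimited block so the model can tell documents apart and
# attribute claims, instead of concatenating raw text.

def format_context(chunks):
    """Render retrieved chunks as clearly delimited, source-labeled blocks."""
    parts = []
    for i, chunk in enumerate(chunks, start=1):
        parts.append(f"[Document {i}: {chunk['source']}]\n{chunk['text'].strip()}")
    return "\n\n".join(parts)

def build_rag_prompt(question, chunks):
    """Assemble a grounded-answer prompt from a question and retrieved chunks."""
    context = format_context(chunks)
    return (
        "Answer the question using only the documents below. "
        "Cite the document number for each claim.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# Hypothetical example documents for illustration only.
chunks = [
    {"source": "hr_policy.pdf", "text": "Employees accrue 1.5 vacation days per month."},
    {"source": "faq.md", "text": "Unused vacation days roll over for one year."},
]
prompt = build_rag_prompt("How many vacation days do I accrue monthly?", chunks)
```

Structured APIs of the kind Gomez mentions essentially bake this formatting in, so customers cannot get it wrong.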
Focus in-house efforts on AI systems that deliver unique competitive advantage.
Gomez advises enterprises to buy generic tools (e. ...
Security, privacy, and deployment flexibility are decisive for regulated industries.
In healthcare and finance, data often cannot leave a specific VPC or on-prem environment; Cohere’s ability to deploy in multiple environments is framed as a key differentiator and a prerequisite for accessing the most sensitive, valuable data.
Reasoning models shift improvement from pure training capex to inference-time spend.
Instead of waiting months for a new larger model, customers can pay for more inference-time compute to get smarter behavior on demand, changing both pricing models and infrastructure design priorities across the stack.
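The pricing shift can be made concrete with a back-of-envelope cost model. The prices and token counts below are illustrative assumptions, not any vendor's real rates; the point is only that reasoning tokens let a buyer trade money for quality per request rather than per model generation:

```python
# Hypothetical cost model: reasoning ("thinking") tokens are billed like
# output tokens, so a single request can be made smarter by spending more
# at inference time. All prices are made-up placeholders.

def request_cost(prompt_tokens, output_tokens, reasoning_tokens,
                 price_per_1k_in=0.50, price_per_1k_out=1.50):
    """Dollar cost of one request; reasoning tokens billed at the output rate."""
    billed_out = output_tokens + reasoning_tokens
    return (prompt_tokens / 1000) * price_per_1k_in \
         + (billed_out / 1000) * price_per_1k_out

# Same question, two spend levels: a quick answer vs. a deliberate one.
fast = request_cost(2000, 500, reasoning_tokens=0)        # minimal thinking
deliberate = request_cost(2000, 500, reasoning_tokens=8000)  # heavy thinking
```

Under these assumed rates the deliberate request costs several times the fast one, which is why Gomez sees inference-time scaling reshaping both pricing and infrastructure planning.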
Scaling of general capabilities is flattening, with gains moving into niche domains.
Gomez sees the biggest future improvements in specialized areas like math, science, and other expert domains, where progress is increasingly limited by the availability of high-quality, often expert-generated data.
Model commoditization is overstated; current low pricing is often unsustainable dumping.
He argues that only a handful of players can build top-tier models, while the world is undertaking a decade-long technological ‘repaving,’ so even if some models are temporarily underpriced, long-term economics will reward specialized model producers.
Notable Quotes
“We’re not going to build a ChatGPT competitor. What we want to build is a platform and a series of products to enable enterprises to adopt this technology and make it valuable.”
— Aidan Gomez
“People overestimate the models. They think they’re like humans, and that has led to a lot of repeat failures.”
— Aidan Gomez
“Even if we didn’t train a single new language model, there’s a half decade of work to go integrate this into the economy.”
— Aidan Gomez
“We’re pretty far along. We’re certainly past the point where if you just interact with a model, you can know how smart it is.”
— Aidan Gomez
“There’s a total technological refactor that’s going on right now and will last the next 10 to 15 years, and it’s kind of like we have to repave every road on the planet, and there’s four or five companies that know how to make concrete.”
— Aidan Gomez
Questions Answered in This Episode
For an enterprise just starting with LLMs, how should they practically decide between using off-the-shelf copilots, fine-tuning, or engaging in continuation pre-training with a provider like Cohere?
What concrete product patterns or APIs best encode the RAG and prompting best practices Cohere has learned from repeated customer failures?
How will inference-time reasoning models change the economics of AI infrastructure and the design of future chips and data centers?
In which specific scientific or technical domains does Gomez expect the next major capability jumps from reasoning-focused training, and what data bottlenecks are most severe there?
If model commoditization is a misconception, what durable moats will distinguish leading model providers over the next decade beyond raw model quality (e.g., data, distribution, compliance, support)?
Transcript Preview
(instrumental music plays) Hi, listeners, and welcome to No Priors. Today, we're hanging out with Aidan Gomez, co-founder and CEO of Cohere, a company valued at more than $5 billion in 2024, which provides AI-powered language models and solutions for businesses. Aidan founded Cohere in 2019. But before that, during his time as an intern at Google Brain, he was a co-author on the landmark 2017 paper, Attention Is All You Need. Aidan, thanks for coming on today.
Yeah, thank you for having me. Excited to be here.
Maybe we can start, uh, just a little bit with the personal background. Um, how do you go from growing up in the woods in Canada to, um, you know, working on the most important technical paper in the world?
A lot of luck and, and chance. Um, but yeah, I happened to go to school at the place where Geoff Hinton, uh, taught. And so, um, obviously, Geoff recently won the Nobel Prize. He's kinda, like, uh, credited with being the, the godfather of, of deep learning. At U of T, the school where I went, he was a legend, and pretty much everyone who was in computer science studying at the school wanted to get into AI. Uh, and so in some sense, I, I feel like I was raised into AI. Like, as soon as I stepped out of high school, um, I was steeped in an environment that really saw the future, uh, and wanted to build it. Um, and then from there, it was a bunch of happy accidents. So I, I somehow managed to get an internship with Lukasz Kaiser, uh, at, at Google Brain. Um, and I found out at the end of that internship I wasn't supposed to have gotten that internship. It was supposed to have been for PhD students. And so they were, like, throwing a goodbye party for me, the intern, um, and Lukasz was like, "Okay, so Aidan, you're going back. How many, how many years have you got left in your PhD?" Uh, and I was like, "Oh, I'm going back into third year undergrad." Uh, and he was like, "We don't do (laughs) undergrad internships." So I think it was a bunch of, like, really lucky mistakes, uh, that led me, led me to that team.
Working on really interesting, important things at Google, what, uh, convinced you that you should start Cohere?
Yeah, so I bounced around. Like, when I was working with Lukasz and Noam and the Transformer guys, I was in Mountain View, and then I went back to U of T, uh, started working with Hinton and my co-founder, Nick, in Toronto, uh, at Brain there. And then I started my PhD, and I went to England, um, and I was working with, uh, Jakob, who's another Transformer paper author, in Berlin, and collaborating with Justin-
Mm-hmm. We had Jakob on the podcast.
Oh, nice. Yeah, yeah, yeah. Okay. Fan of the pod. Good, good. Um, so yeah, I, I was working with Jakob in Berlin, and then I was also collaborating remotely with Jeff Dean and Sanjay on Pathways, which was, like, their, you know, bigger-than-a-supercomputer training program. Uh, the idea was, like, wiring together supercomputers to create a new larger unit of compute that you could train models on. And at that stage, GPT-2 had just come out, and it was pretty clear the trajectory of the technology. Like, we were on a very interesting path, and these models that were ostensibly models of the internet, models of the web, um, were gonna yield some pretty interesting, interesting things. So I, I called up Nick, I called up Ivan, my co-founders, and I said, "You know, maybe we should figure out how to build these things. I, I think they're gonna be useful."