No Priors Ep. 3 | With Stability AI’s Emad Mostaque

No Priors · May 3, 2023 · 45m

Sarah Guo (host), Emad Mostaque (guest), Elad Gil (host)

Topics covered:

Emad Mostaque’s personal journey and motivation for working in AI
Open-source foundation models vs. closed corporate AI ecosystems
Stability AI’s strategy in media, language, and computational biology
Global infrastructure, national models, and emerging-market leapfrogging
Future of media, creativity, and multimodal generative tools
Democratic, social, and educational impacts of pervasive AI
AI safety, regulation, data rights, and defense/misinformation concerns

Stability AI’s Emad Mostaque Bets On Open Models Powering Humanity’s Future

Emad Mostaque traces his path from hedge funds to founding Stability AI, driven by personal experiences with autism, COVID research, and a belief that AI should be open infrastructure available to everyone. He explains how Stability catalyzed the open-source ecosystem in image, language, and biology models, and why he thinks foundation models will ultimately be open while private firms focus on instruction-tuning and fine-tuning. Mostaque outlines Stability’s multimodal work across media and computational biology, its close cooperation with governments and academia, and its focus on deployable, customizable models rather than ever-larger ones. He also discusses global adoption dynamics, democratic implications, and why we need regulation and standards around large models, data usage, and AI-driven manipulation while prioritizing “intelligence augmentation” over the pursuit of AGI.

Key Takeaways

Open-source foundation models will likely dominate the deep-learning layer.

Mostaque argues that governments, academia, and coordinated open efforts will have more compute and broader incentives than any single private company, making core base models an open infrastructure layer while businesses compete on instruction-tuning, fine-tuning, and applications.

Smaller, optimized, and combined models can rival or surpass giant monoliths.

He highlights work like Chinchilla, FLAN, and Med-PaLM to show that better data, instruction-tuning, and specialized fine-tuning can make smaller models highly capable, and predicts a future where many interoperating models outperform a single gargantuan system.

Media and creative workflows are already being structurally reshaped by generative AI.

From Stable Diffusion to video and audio models, studios are saving millions, creators gain powerful tools, and visual communication becomes accessible to non-experts—transforming everything from film production to everyday meme-making and art therapy.

Emerging markets may leapfrog the West with AI-native education and healthcare.

By building AI-first systems in places with minimal legacy infrastructure, Stability and partners are already demonstrating rapid gains (e.g., …)

Computational biology and medical AI will be a major frontier for open models.

Projects like OpenFold, DNA diffusion, BioLM, and planned open Med-PaLM–style models aim to standardize and democratize powerful tools for protein folding, drug discovery, and medical reasoning, potentially bringing top-tier care insights to anyone with a phone.

Regulation and standards for large models and AI-generated content are urgently needed.

Mostaque calls for registering very large training runs, building international oversight teams, standardizing content authenticity/metadata, and enabling opt-out/opt-in for training data to mitigate dual-use risks, manipulation, and unfair use of creators’ work.

The near-term goal should be intelligence augmentation, not just AGI for its own sake.

He emphasizes using AI to enhance human capability—improving education, democratic participation, and personal decision-making—rather than single-mindedly chasing AGI, and suggests a “hive” of diverse, human-aligned models as a safer, more useful direction.

Notable Quotes

I don't care about AGI except for it not killing us.

Emad Mostaque

Foundation models will all be open source for the deep learning phase.

Emad Mostaque

We are the only independent multimodal AI company in the world.

Emad Mostaque

If you're not embracing this in the West, you're gonna fall behind.

Emad Mostaque

Small models will outperform large models massively… you will see ChatGPT-level models running on the edge on smartphones in five years.

Emad Mostaque

Questions Answered in This Episode

If open foundation models become infrastructure, how will smaller startups and individual developers compete or differentiate in such an ecosystem?

What governance structures or international bodies could realistically oversee and regulate large-scale AI training runs across borders?

How can we design opt-out and opt-in data mechanisms that are technically robust yet simple enough for ordinary creators and users to control?

In education and healthcare, what safeguards are needed to ensure AI-first systems in emerging markets empower citizens rather than creating new dependencies or inequities?

What practical steps can democratic societies take now to use AI for better collective decision-making without ceding excessive authority to opaque systems?

Transcript Preview

Sarah Guo

(music plays) Emad, welcome to No Priors.

Emad Mostaque

Thank you for having me on, Sarah. You're loud.

Sarah Guo

Let's start with a personal story. You have a background in computer science, and you were working in the hedge fund world. Uh, that's a, a hard left turn, or it looks like it, from, um, that world to being a driving force in the AI state of the art. How did you end up working in this field?

Emad Mostaque

Uh, yeah, I've always been interested in kind of AI and technology. Um, so on the hedge fund, I was one of the largest investors in video games and artificial intelligence. But then my real interest came when my son was diagnosed with autism, and, uh, I was told there was no cure or treatment. And I was like, "Uh, well, let's try and see what we can do." So I built up a team and did AI-based literature review, this was about 12 years ago, of the existing, um, treatments and papers to try and figure out commonalities, and then did some, uh, kind of, uh, biomolecular pathway analysis of neurotransmitters for drug repurposing, and came down to a few different things that could be causing it. You know, worked with doctors to treat him, and he went to mainstream school, and that was fantastic. Went back to running a hedge fund, won some awards, and then I was like, "Let's try and make the world better." And so the first one was, uh, non-AI enhanced education tablets for refugees and others. Um, that's Imagine Worldwide, my co-founder's charity. And then in 2020, COVID came, and I saw something like autism, a multi-systemic condition, the existing mechanisms that extrapolated the future from the past wouldn't be able to keep up with, and thought, "Could we use AI to make this understandable?" And so I set up an AI initiative with the World Bank, UNESCO and others, to try and understand what caused COVID, um, and try and make that available to everyone. Then I hit the institutional wall (laughs) in a variety of places and realized that, uh, the models and technologies that had evolved were far beyond anything that happened before, and there were some interesting arbitrage opportunities, uh, from a business perspective, and more than that, a bit of a moral imperative to make this technology available to everyone, 'cause we're now going to very narrow superhuman performance, and, uh, everyone should have access to that.

Sarah Guo

Uh, it's an amazing journey and, uh, congratulations on all the impact you've already had. So as you say, um, or as you imply, the AI field in recent years has been increasingly driven by labs and private companies, and one of the most obvious paths to performance progress is to just make models bigger, right? Scaling data parameters, GPUs, which is very expensive. Um, and then in, in reaction, just to set the stage a little bit, there's been some efforts over the, uh, previous years to be more community-driven and open and build alternatives like Eleuther. How did you start engaging in that, and how did Stability change the game here?
