No Priors Ep. 14 | With Sarah Guo and Elad Gil


No Priors · Apr 27, 2023 · 33m

Sarah Guo (host), Elad Gil (host), Narrator

- Progress and competitive dynamics of open-source vs proprietary foundation models
- Autonomous agents, memory, and persistent context in AI systems
- Debates over AI regulation, risk types, and political incentives
- AI investment hype cycles and how durable winners emerge
- High-potential AI application areas (compliance, voice/dubbing, legal, healthcare operations)
- Vertical and domain-specific foundation models vs general-purpose models
- Incumbent vs startup advantage and the role of regulation in other industries (especially healthcare and pharma)


AI’s Open-Source Surge, Regulation Risks, and Where Value Emerges Next

Sarah Guo and Elad Gil answer listener questions on the evolving AI landscape, focusing on open-source models, autonomous agents, regulation, and the current investing hype cycle.

They predict open-source systems will reach roughly GPT-3.5 quality within a year, while frontier players like OpenAI and Anthropic remain a generation or two ahead.

The discussion distinguishes near-term technology risks from long-term species-level risks, arguing against broad AI regulation now except for areas like export controls, weapons, and advanced robotics.

On the business side, they see large opportunities in AI applications (compliance, healthcare operations, voice/dubbing, vertical models) and expect both incumbents and startups to capture substantial value.

Key Takeaways

Open-source models are catching up fast but will likely trail leaders by 1–2 generations.

Guo expects an open-source model at roughly GPT-3.5 level within a year, while frontier labs like OpenAI and Anthropic stay one to two generations ahead.


Autonomous agents plus long-term memory unlock qualitatively new use cases.

Chaining LLM calls with planning, reflection, and persistent context can automate multi-step tasks that a single prompt-and-response exchange cannot.
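The pattern described in this takeaway can be sketched as a plan-act-reflect loop over a persistent memory. This is a minimal illustration only: the function names are hypothetical, and the "LLM" calls are stubbed with plain Python so the sketch is self-contained, since the episode names the pattern but not any specific framework or API.

```python
# Toy plan-act-reflect agent loop with persistent memory.
# plan/act/reflect stand in for LLM calls; all names are illustrative.

def plan(goal, memory):
    # A real agent would prompt an LLM to decompose the goal;
    # here we split on commas and skip steps already in memory.
    return [step for step in goal.split(", ") if step not in memory]

def act(step):
    # A real agent would call a tool or another LLM here.
    return f"done: {step}"

def reflect(result, memory):
    # Persist the completed step so future runs can skip it.
    memory.append(result.removeprefix("done: "))
    return result.startswith("done")

def run_agent(goal, memory=None):
    memory = memory if memory is not None else []
    for step in plan(goal, memory):
        if not reflect(act(step), memory):
            break  # a real agent would replan or retry on failure
    return memory

print(run_agent("draft email, find recipients, send"))
```

Passing the returned `memory` back into a later `run_agent` call is what makes the context "persistent": previously completed steps are skipped rather than redone.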


Differentiate technology risk from species-level existential risk when thinking about regulation.

Gil argues most near-term dangers (cyberattacks, accidents) can be mitigated by turning systems off, while true existential risk likely requires embodied, robotic AI that can operate in the physical world at scale.


Broad AI regulation now would mostly entrench incumbents and slow innovation.

They note that in heavily regulated sectors like healthcare, education, and housing, prices have risen and innovation slowed; they suggest focusing regulation narrowly on export controls, defense applications, and advanced robotics for now.


In hype cycles, being right on the wave is easier than picking the winners.

Drawing parallels with past waves (social, mobile, crypto), they stress that most startups will fail even if the overall trend is real; the goal is not to be first, but to be “last standing” with a model or product that actually endures.


High-value near-term opportunities lie in compliance, operations, and domain-specific apps.

Both see significant ROI in areas like legal, compliance, voice/dubbing, and healthcare operations.


Healthcare and pharma illustrate how regulation and incumbency can suppress new giants.

Gil points out that no new major biopharma giants (outside Moderna) have emerged since the late 1980s, arguing that regulatory burden and incumbents’ incentives have stifled generational newcomers—an instructive warning for AI policy.


Notable Quotes

There’s nothing out there today in open source that is like GPT‑4 or 3.5… but I’d bet there’s a 3.5‑level model in the open source ecosystem within a year.

Sarah Guo

The future is here, it’s just not equally distributed—and autonomous agents are one of those things.

Elad Gil

Fundamentally, my view would be: let’s not regulate right now, at least most things.

Elad Gil

Alignment research is very tied to capability research, so it’s sort of impossible to say, ‘We’re going to stop making progress on research but figure out how to control this stuff.’

Sarah Guo

You don’t want to be the first to market, you want to be the last standing.

Elad Gil (citing Peter Thiel)

Questions Answered in This Episode

How should policymakers practically distinguish between technology risk and species-level risk when designing AI regulation frameworks?


What specific technical breakthroughs in memory and context would make autonomous agents reliably useful for complex, real-world workflows?


In which domains are vertical, domain-specific models likely to outperform general-purpose LLMs, and how should startups decide which side to bet on?


How can startups build defensible AI applications when they rely on commoditizing foundation models that many competitors can also access?


What lessons from the regulatory capture and stagnation in pharma should AI founders and regulators internalize now to avoid repeating those mistakes?


Transcript Preview

Sarah Guo

Hey, everyone. Welcome to No Priors. Today, we're gonna switch things up a bit and just hang out and answer listener questions about tech and AI.

Elad Gil

The topics people wanted us to talk about include everything from the evolution of open source models to the Balkanization of AI, Elon AI, which I think will be super interesting to cover, regulating AI, and AI hype in the investing environment. Let's start with the march of progress for, uh, open source models. Um, I guess, Sarah, what have you been paying attention to, and what are some of the more interesting things that you view happening right now?

Sarah Guo

Yeah. So there's nothing out there today in open source that is like GPT-4, 3.5, or Anthropic Claude quality, right? So there is a, there's one player out in front, and that's OpenAI, but I think the landscape has changed a lot over the last couple of months. Like, Facebook LLaMA's quite good. Um, many startups are just using it despite its licensing issues, assuming Mark won't come after them, and then you have a number of other releases that have happened, right? Uh, Together just released a pre-training dataset, which seems quite good. Stability just, um, released Stable Diffusion XL in the image gen space. Um, and so I, I think the, like, larger dynamic is that there's been an increasing number of people and teams that now know how to train large models. The cost of a flop is only gonna go down. Um, there's a lot of investment in, like, distilling models, and, uh, a lot of researchers that you and I know would claim that it's gonna be 5X cheaper to train the same size model the second time around, like, once you've made your mistakes and know what you're doing. And then you have these other accelerants, like you can use these models to annotate your datasets and increasingly do advanced, like, self-supervision. So if VCs are going to continue to fund foundation model efforts, including open source, um, foundation model efforts, like... If I were a betting woman, and I am, I'd bet there's a 3.5-level model in the open source ecosystem within a year, um, and I, I didn't personally believe that would be true, like, a few months ago.

Elad Gil

I guess that puts it about two to three years behind when GPT-3.5 came out, though. And so do you think that's gonna be the ongoing trend, that there'll be a handful of companies that are ahead of open source by, you know, one or two generations?

Sarah Guo

Yeah. I, I think that's, like, the status quo, so if we just straight line project, I imagine that will continue to happen, and the real question is, like, can you stay in the lead if you are OpenAI and, um, and, like, get, uh, get paid for it? Or is that the, is that the objective of the organization anyway? Like, I, I think, you know, if you have a, um, a great leader and a lot of resources and a lot of really talented people, that's not something I wanna bet against.
