No Priors

No Priors Ep. 14 | With Sarah Guo and Elad Gil

This week on No Priors, Sarah and Elad answer listener questions about tech and AI. Topics include the evolution of open-source models, Elon Musk's AI efforts, regulating AI, areas of opportunity, and AI hype in the investing environment. Sarah and Elad also delve into the impact of AI on drug development and healthcare, and the balance between regulation and innovation.

00:00 - The March of Progress for Open Source Foundation Models
06:00 - Should AI Be Regulated?
13:49 - Investing in AI and Exploring the AI Opportunity Landscape
23:28 - The Impact of Regulation on Innovation
31:55 - AI in Healthcare and Biotech

Sarah Guo (host) · Elad Gil (host)
Apr 26, 2023 · 33m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

AI’s Open-Source Surge, Regulation Risks, and Where Value Emerges Next

  1. Sarah Guo and Elad Gil answer listener questions on the evolving AI landscape, focusing on open-source models, autonomous agents, regulation, and the current investing hype cycle.
  2. They predict open-source systems will reach roughly GPT-3.5 quality within a year, while frontier players like OpenAI and Anthropic remain a generation or two ahead.
  3. The discussion distinguishes near-term technology risks from long-term species-level risks, arguing against broad AI regulation now except for areas like export controls, weapons, and advanced robotics.
  4. On the business side, they see large opportunities in AI applications (compliance, healthcare operations, voice/dubbing, vertical models) and expect both incumbents and startups to capture substantial value.

IDEAS WORTH REMEMBERING

5 ideas

Open-source models are catching up fast but will likely trail leaders by 1–2 generations.

Guo expects an open-source model at roughly GPT-3.5 level within a year, driven by cheaper compute, more experienced teams, better distillation, and self-supervision, while companies like OpenAI maintain the frontier.

Autonomous agents plus long-term memory unlock qualitatively new use cases.

Chaining LLM calls with planning, reflection, and persistent context can automate multi-step tasks (e.g., creating an entire dropshipping business), especially once systems remember interactions across sessions and across users.

Differentiate technology risk from species-level existential risk when thinking about regulation.

Gil argues most near-term dangers (cyberattacks, accidents) can be mitigated by turning systems off, while true existential risk likely requires embodied, robotic AI that can operate in the physical world at scale.

Broad AI regulation now would mostly entrench incumbents and slow innovation.

They note that in heavily regulated sectors like healthcare, education, and housing, prices have risen and innovation slowed; they suggest focusing regulation narrowly on export controls, defense applications, and advanced robotics for now.

In hype cycles, being right on the wave is easier than picking the winners.

Drawing parallels with past waves (social, mobile, crypto), they stress that most startups will fail even if the overall trend is real; the goal is not to be first, but to be “last standing” with a model or product that actually endures.

WORDS WORTH SAVING

5 quotes

There’s nothing out there today in open source that is like GPT‑4 or 3.5… but I’d bet there’s a 3.5‑level model in the open source ecosystem within a year.

Sarah Guo

The future is here, it’s just not equally distributed—and autonomous agents are one of those things.

Elad Gil

Fundamentally, my view would be: let’s not regulate right now, at least most things.

Elad Gil

Alignment research is very tied to capability research, so it’s sort of impossible to say, ‘We’re going to stop making progress on research but figure out how to control this stuff.’

Sarah Guo

You don’t want to be the first to market, you want to be the last standing.

Elad Gil (citing Peter Thiel)

Progress and competitive dynamics of open-source vs proprietary foundation models
Autonomous agents, memory, and persistent context in AI systems
Debates over AI regulation, risk types, and political incentives
AI investment hype cycles and how durable winners emerge
High-potential AI application areas (compliance, voice/dubbing, legal, healthcare operations)
Vertical and domain-specific foundation models vs general-purpose models
Incumbent vs startup advantage and the role of regulation in other industries (especially healthcare and pharma)

High quality AI-generated summary created from speaker-labeled transcript.
