No Priors

No Priors Ep. 35 | With Sarah Guo and Elad Gil

What Does it Take to Improve by 10x or 100x? This week is another host-only episode. Sarah and Elad talk about the path to better model quality, the potential for fine-tuning for different use cases, retrieval systems (RAG), feedback systems (RLHF, RLAIF), and Meta’s sponsorship of the open-source model ecosystem. Plus, Sarah and Elad ask whether we’re finally at the beginning of a new set of consumer applications and social networks.

00:00 - Introduction
03:00 - AI Models, OpenAI Advances, and Fine-Tuning
08:59 - Addressing Hallucinations in AI Models
13:22 - Open Source Models in Consumer Engagement
16:23 - New Trends in Social Content Creation
21:53 - Balancing Ambition With Realistic Customer Expectations

Sarah Guo (host) · Elad Gil (host)
Oct 4, 2023 · 23m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

How Fine-Tuning, RAG, and Open Source Will 10X AI’s Impact

  1. Hosts Sarah Guo and Elad Gil outline six levers—multimodality, long context, customization, memory, recursion, and model orchestration—that can make today’s AI systems 10–100x more useful without waiting for dramatically bigger base models.
  2. They dive into model customization via fine-tuning, RLHF/RLAIF, and RAG, explaining why OpenAI’s fine-tuning push and Google’s AI-feedback research are pivotal for scaling quality and lowering costs.
  3. The conversation then turns to Meta’s strategic sponsorship of open-source models like LLaMA, drawing analogies to past infrastructure plays such as MySQL and Linux, and what that means for the broader ecosystem.
  4. Finally, they explore how generative AI could catalyze a new wave of consumer apps and social networks, and advise founders to pursue the “easy markets” and near-term value rather than overly hard markets at this stage.

IDEAS WORTH REMEMBERING

5 ideas

You can 10–100x the usefulness of existing models with system design, not just bigger models.

Multimodality, longer context windows, customization, memory, recursion, and routing between specialized models can radically improve performance on real use cases using GPT-3.5/4-class systems.
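One of those levers, routing between specialized models, can be sketched in a few lines. This is a toy illustration, not anything the hosts describe in code: the handlers (`code_model`, `math_model`, `general_model`) and the keyword table are hypothetical stand-ins for fine-tuned or hosted models that a real orchestrator would dispatch to.

```python
# Toy model-orchestration sketch: route each prompt to a specialized
# handler instead of a single general model. Handlers are stubs here;
# in practice each would be a different fine-tuned or hosted model.

def code_model(prompt: str) -> str:
    return f"[code model] {prompt}"

def math_model(prompt: str) -> str:
    return f"[math model] {prompt}"

def general_model(prompt: str) -> str:
    return f"[general model] {prompt}"

# Keyword-based routing table; real routers often use a small classifier.
ROUTES = {
    "code": (("function", "bug", "python"), code_model),
    "math": (("integral", "solve", "equation"), math_model),
}

def route(prompt: str) -> str:
    lowered = prompt.lower()
    for _name, (keywords, model) in ROUTES.items():
        if any(k in lowered for k in keywords):
            return model(prompt)
    return general_model(prompt)  # fall back to the generalist
```

The point of the pattern is that each specialized model can be smaller, cheaper, and more accurate on its slice than one giant generalist.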

Fine-tuning and RLHF are proven to unlock massive step-changes in usability.

ChatGPT’s success came from fine-tuning GPT‑3.5 with human feedback, showing that aligning outputs with user preferences and tasks can transform a capable but unwieldy model into a mainstream product.

RAG is critical for trustworthy, up-to-date, and cost-effective applications.

By retrieving from a controlled corpus (e.g., legal docs, company knowledge, medical research) and then letting the model reason over that, teams reduce hallucinations, lower retraining costs, and keep answers fresh.
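The retrieve-then-reason flow above can be sketched minimally. This is an illustrative toy, assuming naive keyword-overlap retrieval (real systems use embedding search) and a hypothetical policy corpus; the final prompt is what would be sent to the model, with the source cited so answers stay grounded.

```python
# Minimal RAG sketch: pick the most relevant document from a small
# controlled corpus, then ground the model's prompt in that text.

corpus = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month.",
    "expense-policy": "Expenses over $500 require manager approval.",
    "security-policy": "All laptops must use full-disk encryption.",
}

def retrieve(question: str, docs: dict) -> tuple:
    """Return the (doc_id, text) sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(item):
        _doc_id, text = item
        return len(q_words & set(text.lower().split()))
    return max(docs.items(), key=overlap)

def build_prompt(question: str, docs: dict) -> str:
    doc_id, text = retrieve(question, docs)
    # Instructing the model to answer only from the cited source is what
    # provides the trustworthiness, citations, and freshness RAG is after.
    return (
        f"Answer using only the source below. Cite it as [{doc_id}].\n"
        f"Source: {text}\n"
        f"Question: {question}"
    )
```

Updating the corpus updates the answers immediately, with no retraining, which is where the cost advantage comes from.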

AI-generated feedback (RLAIF) can substitute for expensive human raters in many domains.

Google’s work shows AI can often evaluate AI outputs as well as humans, enabling cheaper and faster iterative improvement of models, especially when domain-specific models are already more accurate than human experts.
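The idea can be shown in miniature: an AI rater scores candidate outputs and the winner becomes the preferred response in a preference pair. This is a hedged sketch only; `ai_rater` is a crude heuristic standing in for a judge model, not any published RLAIF implementation.

```python
# RLAIF-style loop in miniature: an "AI rater" scores candidates and the
# best-scored one is kept as the preferred response. The resulting
# (prompt, winner, loser) pairs feed the same preference-training
# pipeline RLHF uses, minus the human raters.

def ai_rater(question: str, answer: str) -> float:
    """Stub judge: reward overlap with the question, penalize rambling."""
    relevance = len(set(question.lower().split()) & set(answer.lower().split()))
    brevity_penalty = len(answer.split()) / 50
    return relevance - brevity_penalty

def pick_preferred(question: str, candidates: list) -> str:
    """Return the candidate the AI rater scores highest."""
    return max(candidates, key=lambda a: ai_rater(question, a))
```

Because the rater is itself a model, the feedback loop scales with compute rather than with the number of human annotators you can hire.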

Meta’s open-source push is a strategic bet to avoid lock-in and shape the stack.

By sponsoring strong open models like LLaMA 2, Meta reduces dependence on external labs, catalyzes a developer ecosystem, and externalizes some R&D cost—similar in spirit to prior open-source infrastructure plays.

WORDS WORTH SAVING

5 quotes

You don't need to wait for GPT‑7; you can 10x or even 100x use cases with existing models today.

Elad Gil

Fine-tuning really just means you create a lot of feedback… and it created a dramatic step function in the utility of GPT‑3.5.

Elad Gil

I think of the core driver [for RAG] as trustworthiness—citation, control of information source.

Sarah Guo

Instead of having to hire an army of people to fine-tune these models, you can actually have an AI help fine-tune this model.

Elad Gil

It’s no GPU before product/market fit. I think that’s the takeaway.

Elad Gil

  - Six major levers to dramatically improve current AI systems
  - Fine-tuning, RLHF, and RLAIF as engines of model quality
  - Retrieval-augmented generation (RAG) for trustworthiness, freshness, and cost-efficiency
  - Meta’s sponsorship of open-source models (LLaMA/LLaMA 2) and ecosystem implications
  - AI-native consumer applications and the next generation of social networks
  - Lessons from Chinese AI-driven content platforms like Toutiao and TikTok
  - Startup strategy in the AI era: choosing markets and where to innovate

High-quality AI-generated summary created from a speaker-labeled transcript.
