No Priors

No Priors Ep. 63 | With Sarah Guo and Elad Gil

This week on No Priors, hosts Sarah and Elad catch up on the latest AI news. They discuss recent developments like Meta’s new AI assistant and the latest in music generation (if you’re interested in generative AI music, stay tuned for next week’s interview!). Sarah and Elad also get into device-resident models, AI hardware, and just how smart smaller models can really get. They compare these hardware constraints to the hurdles AI platforms continue to face, including compute constraints, energy consumption, context windows, and how best to integrate these products into apps users already know.

Have a question for our next host-only episode, or feedback for our team? Reach out to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Show Notes:
0:00 Intro
1:25 Music AI generation
4:02 Apple’s LLM
11:39 The role of AI-specific hardware
15:25 AI platform updates
18:01 Forward thinking in investing in AI
20:33 Unlimited context
23:03 Energy constraints

Sarah Guo, host · Elad Gil, host
May 8, 2024 · 29m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

AI’s Next Frontiers: Local Models, Long Context, Energy Limits, Music

  1. Sarah Guo and Elad Gil discuss rapid advances across the AI stack, from creative tools like Suno and Udio to small, locally-run language models and Meta’s latest AI products. They examine how platform dynamics play out as Apple, Meta, Snowflake, Databricks, and hyperscalers all jostle over models, data, and distribution. The conversation explores technical directions such as long-context LLMs and specialized hardware, alongside looming constraints like data center energy, nuclear power, and policy. Throughout, they frame massive AI CapEx as comparable to past infrastructure waves and debate which layers—models, platforms, apps—will hold durable value.

IDEAS WORTH REMEMBERING

5 ideas

AI music tools are expanding who can create and personalize audio content.

Models like Suno and Udio make it trivial for non-musicians to generate full songs with lyrics and vocals, enabling concepts like ‘personal soundtracks’ in a favorite artist’s style and foreshadowing broader creative AI applications.

Small, on-device models will redefine latency, privacy, and user experience.

Apple’s small open models and demand for 1–3B parameter LLMs suggest a future where many AI tasks run locally, enabling low-latency, persistent, and proactive features without constant cloud inference costs.

Platforms will likely absorb generic AI UX layers, but vertical or cross-platform plays can still win.

History shows OS vendors and platforms tend to subsume core experiences (like Office or launchers), yet niche or vertical products (e.g., Veeva on Salesforce) can grow large if they deeply own a specific domain.

Meta’s multi-pronged AI push shows the advantage of scale and distribution.

Meta is shipping capable consumer agents, image/animation tools, and strong open-source models, demonstrating how large players with massive GPU budgets can push past ‘optimal’ training points and still gain performance.

Long-context models will change how we architect prompts and applications.

With context windows in the millions of tokens (as seen at Magic and in Gemini 1.5), developers can drop entire codebases, legal corpora, or biological sequences into a single prompt, enabling qualitatively new applications (e.g., better protein folding models).

WORDS WORTH SAVING

5 quotes

It just seems like an interesting moment in time from the perspective of, look at all these different creative things that people are now empowered to do.

Elad Gil

There’s been huge demand for models that actually have useful capability in a one and three billion parameter size that'll fit on edge devices.

Elad Gil

In aggregate, a handful of players in terms of the hyperscalers are spending almost 200 billion dollars this year on compute for AI.

Sarah Guo

These are actually very solvable problems if we choose to solve them, which is why I thought that job posting on the Microsoft website was so interesting.

Elad Gil

If you do think of AI as a strategic issue and a national security issue, not using every energy resource we have is yet another dependency that we're creating for ourselves.

Sarah Guo

AI-generated music and emerging creative formats (Suno, Udio, voice cloning)
Local and small LLMs on devices, including Apple’s model releases
Meta AI’s product strategy, open-source models, and multimodal experiences
Platform dynamics among hyperscalers, Snowflake, Databricks, and model ownership
Long-context LLMs (e.g., Magic, Gemini 1.5) and new capabilities they unlock
Compute, energy, and data center constraints, including nuclear power considerations
Historical analogies for AI CapEx and the role of policy and geopolitics

High quality AI-generated summary created from speaker-labeled transcript.
