No Priors

No Priors Ep. 79 | With Magic.dev CEO and Co-Founder Eric Steinberger

Today on No Priors, Sarah Guo and Elad Gil are joined by Eric Steinberger, the co-founder and CEO of Magic.dev. His team is developing a software-engineering co-pilot that will act more like a colleague than a tool. They discuss what makes Magic stand out from the crowd of AI co-pilots, the evaluation bar for a truly great AI assistant, and their predictions for what a post-AGI world could look like if the transition is managed with care.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EricSteinb

Show Notes:
0:00 Introduction
0:45 Eric’s journey to founding Magic.dev
4:01 Long context windows for more accurate outcomes
10:53 Building a path toward AGI
15:18 Defining what is enough compute for AGI
17:34 Achieving Magic’s final UX
20:03 What makes a good AI assistant
22:09 Hiring at Magic
27:10 Impact of AGI
32:44 Eric’s north star for Magic
36:09 How Magic will interact with other tools

Sarah Guo (host) · Eric Steinberger (guest) · Elad Gil (host)
Aug 29, 2024 · 37m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Magic.dev’s Eric Steinberger on AGI, coding coworkers, and safety

  1. Eric Steinberger, CEO and co-founder of Magic.dev, discusses building an AI software engineer that functions as a true colleague rather than a lightweight coding assistant. He explains why Magic focuses on code generation as a pathway to AGI, emphasizing long-context models, test-time compute, and an architecture tuned for continual learning from large histories of work. The conversation explores how to allocate compute between training and inference, why long context can outperform retrieval, and how recursive self-improvement could be bounded by safety-focused iterations. Steinberger also reflects on the societal impacts of AGI, from the automation of most digital work to questions of meaning, power, and how capitalism and competition might shape outcomes.

IDEAS WORTH REMEMBERING

5 ideas

Code is a natural first domain for AGI-like systems.

Magic works backwards from AGI: if you can build a system that writes, tests, and iteratively improves code (including its own), you effectively have a system that can build most other systems without covering every consumer use case upfront.

Very long context windows can be more powerful than retrieval alone.

Steinberger argues that in-context learning—treating the model as an online optimizer over all available data—is fundamentally more expressive than any heuristic retrieval pipeline that surfaces a limited subset for each query.
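The distinction can be made concrete with a toy sketch (hypothetical documents and a deliberately crude lexical retriever, not Magic’s actual system): a retrieval heuristic scores documents against one query and drops everything else, so a relevant document with no surface overlap never reaches the model, whereas a long-context model that sees every document cannot lose it this way.

```python
def retrieve_top_k(query, documents, k):
    # Toy lexical retriever: rank documents by word overlap with the
    # query and keep only the top k. Anything with zero overlap is
    # dropped, even if it is the piece the model actually needs.
    score = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

# Hypothetical codebase notes; the third one explains the bug.
docs = [
    "login handler validates the session token",
    "payment retries use exponential backoff",
    "a nightly cron job rotates every credential at 00:00",
]

query = "why does login fail at midnight"
subset = retrieve_top_k(query, docs, k=1)
# The heuristic surfaces the login doc but drops the cron doc, even
# though the nightly rotation is the real cause of the failure.
assert "cron" not in " ".join(subset)

# A long-context model is simply handed everything, every time.
full_context = "\n".join(docs)
assert "cron" in full_context
```

The point of the sketch is only the asymmetry: a retrieval pipeline makes a hard, heuristic selection per completion, while in-context learning lets the model itself decide what in the full history matters.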

Optimally balancing training compute and inference compute is crucial.

Model performance is a joint function of training and test-time compute; giving users control over how much compute to spend per query (e.g., via test-time search) is far more efficient than over-investing in training for all use cases.
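One simple form of per-query test-time compute is best-of-n search, sketched below with a stand-in scoring function (all names and scores are hypothetical; real systems would use an LLM plus a verifier or reward model). The user-facing knob is just `n`: more samples per query means more inference compute and a monotonically non-worse best candidate.

```python
import random

def toy_model(prompt, seed):
    # Stand-in for a model call: returns a candidate answer with a
    # quality score that varies by seed. Purely illustrative.
    rng = random.Random((prompt, seed).__repr__())
    return {"answer": f"candidate-{seed}", "score": rng.random()}

def best_of_n(prompt, n):
    # Test-time search: spend n model calls on one query and keep the
    # best candidate under the score, instead of relying on a single
    # sample from a (more expensively trained) model.
    candidates = [toy_model(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda c: c["score"])

cheap = best_of_n("sort this list", n=1)
expensive = best_of_n("sort this list", n=16)
# The n=16 search includes the n=1 candidate, so extra inference
# compute can only improve the best score found.
assert expensive["score"] >= cheap["score"]
```

Since the larger search contains the smaller one, spending more at inference time strictly dominates for that query, which is the intuition behind exposing the compute budget to the user rather than baking it all into training.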

Trust is the core product metric for an AI coding coworker.

Magic is reluctant to launch a “mediocre” assistant; the bar is that engineers feel comfortable letting the system write most code, with code review becoming light rather than a painful, error-fixing exercise.

AGI development needs recursive, model-assisted safety work.

Steinberger believes the only realistic way to prioritize and execute sufficient safety research is to iteratively use each generation of increasingly capable models to help analyze and mitigate the risks of the next.

WORDS WORTH SAVING

5 quotes

If your end goal is to have a system that can do everything, you can reduce that to building a system that can build that system.

Eric Steinberger

Instead of bringing the data to the compute, we're bringing the compute to the data.

Eric Steinberger

Retrieval selects a subset of data for one completion. Our model sees all the data all the time.

Eric Steinberger

There is no mediocre AGI future. If it’s not terrible, it will be amazing.

Eric Steinberger

I think the only way to reasonably approach this is to iteratively ask your model to solve alignment and safety at that stage.

Eric Steinberger

Eric Steinberger’s background in reinforcement learning and path into AI
Magic.dev’s focus on code generation as a route to AGI
Long context windows, architecture choices, and limits of retrieval
Inference-time compute, test-time search, and training vs inference trade-offs
Product philosophy: from mediocre assistant to trusted coding coworker
Team-building, culture, and recruiting non-obvious high-talent engineers
AGI trajectories, safety, societal impact, and the future of human work and meaning

High-quality AI-generated summary created from a speaker-labeled transcript.
