No Priors Ep. 79 | With Magic.dev CEO and Co-Founder Eric Steinberger
At a glance
WHAT IT’S REALLY ABOUT
Magic.dev’s Eric Steinberger on AGI, coding coworkers, and safety
- Eric Steinberger, CEO and co-founder of Magic.dev, discusses building an AI software engineer that functions as a true colleague rather than a lightweight coding assistant. He explains why Magic focuses on code generation as a pathway to AGI, emphasizing long-context models, test-time compute, and an architecture tuned for continual learning from large histories of work. The conversation explores how to allocate compute between training and inference, why long context can outperform retrieval, and how recursive self-improvement could be bounded by safety-focused iterations. Steinberger also reflects on the societal impacts of AGI, from the automation of most digital work to questions of meaning, power, and how capitalism and competition might shape outcomes.
IDEAS WORTH REMEMBERING
5 ideas
Code is a natural first domain for AGI-like systems.
Magic works backwards from AGI: if you can build a system that writes, tests, and iteratively improves code (including its own), you effectively have a system that can build most other systems without covering every consumer use case upfront.
Very long context windows can be more powerful than retrieval alone.
Steinberger argues that in-context learning—treating the model as an online optimizer over all available data—is fundamentally more expressive than any heuristic retrieval pipeline that surfaces a limited subset for each query.
Optimally balancing training compute and inference compute is crucial.
Model performance is a joint function of training and test-time compute; giving users control over how much compute to spend per query (e.g., via test-time search) is far more efficient than over-investing in training for all use cases.
Trust is the core product metric for an AI coding coworker.
Magic is reluctant to launch a “mediocre” assistant; the bar is that engineers feel comfortable letting the system write most code, with code review becoming light rather than a painful, error-fixing exercise.
AGI development needs recursive, model-assisted safety work.
Steinberger believes the only realistic way to prioritize and execute sufficient safety research is to iteratively use each generation of increasingly capable models to help analyze and mitigate the risks of the next.
WORDS WORTH SAVING
5 quotes
If your end goal is to have a system that can do everything, you can reduce that to building a system that can build that system.
— Eric Steinberger
Instead of bringing the data to the compute, we're bringing the compute to the data.
— Eric Steinberger
Retrieval selects a subset of data for one completion. Our model sees all the data all the time.
— Eric Steinberger
There is no mediocre AGI future. If it’s not terrible, it will be amazing.
— Eric Steinberger
I think the only way to reasonably approach this is to iteratively ask your model to solve alignment and safety at that stage.
— Eric Steinberger
High quality AI-generated summary created from speaker-labeled transcript.