
No Priors Ep. 79 | With Magic.dev CEO and Co-Founder Eric Steinberger
Sarah Guo (host), Eric Steinberger (guest), Elad Gil (host)
Magic.dev’s Eric Steinberger on AGI, coding coworkers, and safety
Eric Steinberger, CEO and co-founder of Magic.dev, discusses building an AI software engineer that functions as a true colleague rather than a lightweight coding assistant. He explains why Magic focuses on code generation as a pathway to AGI, emphasizing long-context models, test-time compute, and an architecture tuned for continual learning from large histories of work. The conversation explores how to allocate compute between training and inference, why long context can outperform retrieval, and how recursive self-improvement could be bounded by safety-focused iterations. Steinberger also reflects on the societal impacts of AGI, from automation of most digital work to questions of meaning, power, and how capitalism and competition might shape outcomes.
Key Takeaways
Code is a natural first domain for AGI-like systems.
Magic works backwards from AGI: if you can build a system that writes, tests, and iteratively improves code (including its own), you effectively have a system that can build most other systems without covering every consumer use case upfront.
Very long context windows can be more powerful than retrieval alone.
Steinberger argues that in-context learning—treating the model as an online optimizer over all available data—is fundamentally more expressive than any heuristic retrieval pipeline that surfaces a limited subset for each query.
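The contrast Steinberger draws can be sketched in a toy form. In the hypothetical pipeline below (illustrative only, not Magic.dev's actual system), a retrieval step scores documents against one query and surfaces only the top-k, while a long-context approach conditions on every document for every completion — so context that no query happens to match is never silently dropped. All names here are invented for the example.

```python
# Toy contrast: heuristic retrieval vs. a full long-context window.
# Hypothetical sketch; not Magic.dev's implementation.

def retrieve_top_k(query_words: set[str], docs: list[str], k: int = 2) -> list[str]:
    """Heuristic retrieval: keep only the k docs with the most word overlap."""
    scored = sorted(docs, key=lambda d: len(query_words & set(d.split())), reverse=True)
    return scored[:k]

def long_context_window(docs: list[str]) -> str:
    """Long-context: the model sees all the data, all the time."""
    return "\n".join(docs)

docs = [
    "parse config files",
    "write report output",
    "legacy billing module do not touch",
]

# The retrieved subset can miss relevant context (here, the billing
# caveat) because it scored low against this particular query; the full
# window always includes it.
subset = retrieve_top_k({"parse", "config"}, docs)
full = long_context_window(docs)
```

The point of the toy: retrieval fixes a subset per completion up front, whereas in-context learning over the whole history lets the model itself decide what matters for each output.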
Optimally balancing training compute and inference compute is crucial.
Model performance is a joint function of training and test-time compute; giving users control over how much compute to spend per query (e.g., …)
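One simple way to see test-time compute as a per-query knob is best-of-n sampling: with a fixed, imperfect model, drawing more attempts per query raises the chance that at least one succeeds, with no retraining. The sketch below is a hypothetical illustration of that trade-off, not Magic.dev's method; the success probability is made up.

```python
# Toy model of test-time compute as a best-of-n knob.
# Hypothetical numbers; illustrative only.
import random

def model_attempt(rng: random.Random, p_correct: float = 0.3) -> bool:
    """One sampled attempt from a fixed model that is right 30% of the time."""
    return rng.random() < p_correct

def solve(n_samples: int, seed: int = 0) -> bool:
    """Best-of-n: succeed if any of n_samples attempts is correct."""
    rng = random.Random(seed)
    return any(model_attempt(rng) for _ in range(n_samples))

def success_rate(n_samples: int, trials: int = 2000) -> float:
    """Empirical success rate over many independent queries."""
    return sum(solve(n_samples, seed=t) for t in range(trials)) / trials

# Spending 8x the per-query compute lifts success from ~30% toward
# 1 - 0.7**8 ~ 94%, without touching training compute.
```

This is the sense in which performance is a joint function of the two budgets: the same trained model delivers very different quality depending on how much inference compute each query is allowed.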
Trust is the core product metric for an AI coding coworker.
Magic is reluctant to launch a “mediocre” assistant; the bar is that engineers feel comfortable letting the system write most code, with code review becoming light rather than a painful, error-fixing exercise.
AGI development needs recursive, model-assisted safety work.
Steinberger believes the only realistic way to prioritize and execute sufficient safety research is to iteratively use each generation of increasingly capable models to help analyze and mitigate the risks of the next.
AGI’s economic impact will be extreme, not moderate.
He sees a bimodal future where most computer and factory work is automated, leading to enormous abundance but also deep challenges around power concentration, identity, and meaning for ambitious, competitive people.
Tools will adapt to AI, not the other way around.
In the long run, AI agents will sit at the same level as human employees, using IDEs, observability tools, and other systems directly; tools will evolve ‘for AI’ just as websites evolved for search crawlers.
Notable Quotes
“If your end goal is to have a system that can do everything, you can reduce that to building a system that can build that system.”
— Eric Steinberger
“Instead of bringing the data to the compute, we're bringing the compute to the data.”
— Eric Steinberger
“Retrieval selects a subset of data for one completion. Our model sees all the data all the time.”
— Eric Steinberger
“There is no mediocre AGI future. If it’s not terrible, it will be amazing.”
— Eric Steinberger
“I think the only way to reasonably approach this is to iteratively ask your model to solve alignment and safety at that stage.”
— Eric Steinberger
Questions Answered in This Episode
How do you empirically evaluate when an AI coding coworker is ‘trustworthy enough’ to skip code review for meaningful portions of a codebase?
Eric Steinberger, CEO and co-founder of Magic. ...
What concrete safety research questions would you ask a near-AGI coding system to help solve about its own future iterations?
Are there fundamental limits to long-context approaches—for example in latency, cost, or attention quality—that might force a hybrid with retrieval in practice?
How should governments and regulators design ‘the right guardrails’ that maintain competitive pressure while managing existential and power-concentration risks?
In a world where most digital work is automated, what new institutions or norms might help ambitious people find meaning and healthy outlets for competition?
Transcript Preview
(music plays) So welcome to No Priors. Today we're talking with Eric Steinberger, the co-founder and CEO of Magic. They're developing a software engineer co-pilot that will act more like a colleague than a tool. And Eric has a really fascinating background between work on- at Meta on different types of games, running, uh, ClimateScience, which was a nonprofit focused on the climate world, and now, of course, developing a- a incredibly interesting AI model and system. So welcome to No Priors today, Eric.
Uh, thank you so much for having me. It's great.
So- so you have a super eclectic background. Um, could you tell us a little bit more about your, you know, what you worked on in the early days, how that evolved into working on AI, and sort of the path you've taken?
Yeah. Um, yeah, thank you. Um, so I- I guess when I was 14, I just had my midlife crisis and, uh, thought I had to do something important with my life. Uh, and spent a year trying to look at everything. Uh, it was pretty stupid. And, uh, basically I looked at the things like string theory and like all the things a 14-year-old would look at and be like, "Okay, what can I spend my life on?" Uh, and, uh, eventually, uh, my mom got me a book on AI and I didn't read it, I'm sorry. Um, but it was like the idea, um, was sufficient. Uh, so I was like, "Okay, this could do anything." Uh, and so- so you should just do that, and then it does everything. And then, um, it seemed plausible that, uh, you'd need to do reinforcement learning. Um, so I didn't know how to code at the time. Um, then, uh, learned to code, uh, uh, over a couple years. This was sort of in high school times. Uh, and then, um, it seemed plausible that you'd need to do reinforcement learning because otherwise you'd not be unfounded. Um, so- so I sort of just started working on RL, uh, played around with things for a bit. Um, and, uh, e- eventually, uh, reached out to someone at DeepMind, uh, to basically I was like pitching this like multi-page email that was like, "Could you like do like a mini PhD thing," where I'm like, "I'm a complete newbie, but if you can bash me, just please bash me like every two weeks and like tell me how to be a good researcher." And so- so eventually, um, I got like reasonable and then did some actual research work and- and worked with, you know, a few other people, including Noam Brown, um, who on developing new RL algorithms, uh, to be more sample efficient and just generally better and faster or whatever, uh, was the goal at the time, um, to- to- to solve, uh, whatever environments we were interested in at the time. Um, so yeah, that's how I got into it. I- I have no background in language models when we started, um, Magic at all. It- it just seemed... 
I just was like totally not on my radar. Uh, I was like, "Oh, wait a second, like if you take this and this and put it together, like- like maybe this works." Um, and so- so then- then I- I sort of, it felt like this huge relief of, uh, uncert- like, uh, uh, certainty relief of like where AGI would come from, uh, 'cause you just put those two things together and then- and then they will work, uh, was the sort of hope. Um, but yeah, yeah, my original background is in RL, and, uh, trying to come up with algorithms that sort of, yeah, just like have better structures, uh, to be more sample efficient and faster or better convergence.