AI for Atoms: How Periodic Labs is Revolutionizing Materials Engineering with Co-Founder Liam Fedus

No Priors · Apr 3, 2026 · 29m

Elad Gil (host)

Fedus’ background: physics → Google Brain → OpenAI
Why physicists migrate into AI
From ChatGPT to physical-world science
Experimental data vs. literature and simulation data
Closed-loop experimentation systems
LLMs as orchestration/control plane
Specialized symmetry-aware atomic models
Commercialization: software layer vs. discovery model
Scaling-laws mindset applied to materials R&D
Capital intensity: compute vs. lab infrastructure
Multidisciplinary teams (AI, physics, chemistry, engineering)
AGI/ASI skepticism: spiky, domain-specific intelligence
Robotics as accelerator for lab throughput

In this episode of No Priors, host Elad Gil speaks with Periodic Labs co-founder Liam Fedus about using AI to accelerate materials engineering in the physical world.

Periodic Labs builds AI-driven closed loops to accelerate materials discovery.

Fedus traces his path from physics and Google Brain scaling-era research to OpenAI’s GPT-4 productionization and the early formation of ChatGPT as a general chatbot product.

Periodic Labs’ thesis is that major scientific acceleration requires closing the loop between AI systems and real-world experiments, not just text-only reasoning or literature digestion.

The key bottleneck in materials AI is not only model capability but grounded, high-quality, diverse experimental data—because literature values can be inconsistent by orders of magnitude.

Periodic uses large language models primarily as an orchestration layer that coordinates literature, internal data, simulations, and specialized symmetry-aware atomic neural nets.

Fedus argues “intelligence” is spiky and domain-dependent: rapid machine self-improvement is already emerging in verifiable domains like coding, while physical-world progress hinges on automation, data generation, and robotics reliability.

Key Takeaways

Scientific progress won’t “scale” like AI without physical closed loops.

Fedus’ core claim is that text-only systems won’t drive order-of-magnitude gains in science unless they can plan experiments, observe reality, and update beliefs from grounded measurements in an iterative loop.
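The iterative loop described above (plan an experiment, observe reality, update beliefs) can be sketched in a few lines of Python. Every name here (`closed_loop`, `propose`, `update`, `run_experiment`) is a hypothetical placeholder for illustration, not a reference to Periodic Labs' actual stack:

```python
# Minimal sketch of a closed experimental loop, assuming each stage
# (proposal, measurement, belief update) is available as a callable
# component. All names are illustrative, not Periodic Labs' real system.

def closed_loop(model, run_experiment, budget):
    """Iteratively propose experiments, measure, and update beliefs."""
    history = []
    for _ in range(budget):
        candidate = model.propose()          # plan: pick the next experiment
        result = run_experiment(candidate)   # observe: grounded measurement
        model.update(candidate, result)      # update beliefs from reality
        history.append((candidate, result))
    return history
```

The point of the sketch is that the experiment runner sits inside the loop: each new proposal depends on grounded measurements, not only on prior text.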


Materials data quality is a first-order problem, not a detail.

He notes that literature-extracted material properties can vary by orders of magnitude; training on that distribution yields uncertainty rather than truth, making curated experimental grounding essential.


Foundation-model priors improve sample efficiency in new scientific domains.

Periodic leverages the general-world prior learned from tens of trillions of tokens (papers and internet) so models are not “randomly initialized,” reducing the number of experiments needed once targeted exploration begins.


The winning architecture is a system-of-systems, not one monolithic model.

Periodic uses LLMs as a natural-language interface and orchestration layer while delegating fast, symmetry-aware atomic predictions to specialized neural nets used as tools/reward functions within a larger workflow.
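The system-of-systems pattern described above, where an orchestrator delegates cheap scoring to a specialized predictor used as a tool or reward function and reserves real experiments for promising candidates, might look roughly like this minimal sketch. `AtomicNet`, `predict_energy`, and `orchestrate` are invented illustrations under stated assumptions, not Periodic's actual components:

```python
# Rough sketch of a "system of systems": a cheap specialized predictor is
# used as a tool / reward function to filter candidates before the expensive
# step (a real experiment). All names and the scoring rule are hypothetical.

class AtomicNet:
    """Stand-in for a fast, symmetry-aware atomic property predictor."""

    def predict_energy(self, structure: str) -> float:
        # Dummy heuristic purely for illustration; a real model would
        # estimate formation energy or another physical property.
        return float(len(structure))

def orchestrate(candidates, predictor, threshold):
    """Score candidates with the cheap predictor, returning only those
    below the energy threshold for (expensive) real experiments."""
    scored = [(c, predictor.predict_energy(c)) for c in candidates]
    return [c for c, energy in scored if energy <= threshold]
```

The design choice being illustrated is the division of labor: the orchestrator never computes physics itself; it routes work to the fast specialized model and escalates only the survivors.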


Generalization exists within physics regimes, but doesn’t magically transfer across them.

Fedus suggests models can generalize across quantum-governed phenomena, but that competence won’t necessarily help in other regimes like fluid dynamics—implying domain partitioning matters.


Commercial value starts as software, with optional upside from breakthroughs.

He frames Periodic initially as an “intelligence/control plane” for enterprises bottlenecked by materials/process engineering, while acknowledging a biotech-like discovery model could emerge if breakthroughs are captured directly.


Automation and robotics aren’t strictly required—but they raise the ceiling.

Periodic can generate reliable data with hybrid human-plus-automation workflows today, but more general, reliable robots (e.g., ...


Notable Quotes

Science ultimately isn't sitting in a room thinking really hard. You have to conduct experiments, you have to learn from them, you have to interface with reality.

Liam Fedus

It's not just, like, a pool of data. It's this interactive closed-loop system that is so powerful.

Liam Fedus

We think about [language models] almost as, like, an orchestration layer.

Liam Fedus

One fallacy is thinking about intelligence as a scalar. We've consistently seen these systems have a very odd spikiness.

Liam Fedus

Just because atoms are hard doesn't mean there's not an order of magnitude or two to speed up.

Liam Fedus

Questions Answered in This Episode

What specific experimental modalities or lab workflows does Periodic prioritize first to create the “closed loop” (e.g., synthesis, characterization, process tuning), and why?


When literature-reported properties vary by orders of magnitude, what governance process do you use to establish ground truth—replication, standardized protocols, calibration runs, or model-based reconciliation?


Which parts of the stack are proprietary at Periodic (data pipeline, specialized atomic nets, orchestration policies), and which parts do you expect to rely on open/closed foundation models long-term?


Can you give an example of a “symmetry-aware” atomic model choice you’ve made (e.g., equivariant architectures) and what measurable latency/accuracy advantage it provides over a transformer-only approach?


You describe scaling laws as enabling capital deployment in AI—what are the analogous “scaling curves” you’re trying to measure in materials R&D (throughput vs performance vs cost)?


Transcript Preview

Elad Gil

[upbeat music] Today on No Priors, we're talking with Liam Fedus. Liam is one of the co-creators of ChatGPT, which I think almost everybody uses at this point. He was the VP of post-training at OpenAI, and before that was at Google Brain, where he worked on a variety of really early AI innovations. Liam will be telling us a bit about Periodic Labs, his company, which is focused on building an AI foundation lab for atoms. In other words, how do we impact the physical world, material sciences, chemistry, et cetera, using AI? Very exciting topic and excited to be talking with him today.

Liam Fedus

Great. Great.

Elad Gil

Liam, thank you so much for joining us today on No Priors.

Liam Fedus

Yeah, thank you so much for having me. It's great to see you.

Elad Gil

Yeah. So, uh, maybe what we can do, I, I, I think you're doing incredibly interesting things in terms of alternative types of models, specifically for material sciences, for the physical world. Effectively, what you're building is, um, an AI foundation lab for atoms, which I think is fascinating.

Liam Fedus

That's right.

Elad Gil

But maybe what we can start with is a little bit more of your background. You know, I think you were, uh, VP at OpenAI. You worked on one of the first trillion-parameter models ever, et cetera. Could you tell us a little bit more about just, like, what got you here and...

Liam Fedus

Yeah. Um, so even further back, I was a physics major, um, in undergrad. Um, spent some time doing dark matter research. Um, research-- We had a apparatus that was directionally sensitive to dark matter's direction.

Elad Gil

Mm-hmm.

Liam Fedus

Um, so it was very interesting.

Elad Gil

Why, why are those... Sorry to interrupt, but I'd love to come back to this, but why are there so many physicists in AI right now? So you look at Dario Amodei, who runs, um, Anthropic.

Liam Fedus

Of course, yeah.

Elad Gil

Uh, you look at Adam Brown at Google, you look at a variety of people, and they all kinda have these physics backgrounds.

Liam Fedus

Yeah, my old manager, Zsa Zsa-

Elad Gil

Mm-hmm

Liam Fedus

...also physics-

Elad Gil

Yeah

Liam Fedus

...and now at Anthropic.

Elad Gil

Yeah. Why, why do you think that is?

Liam Fedus

I think it's a great way to think about the world. It's, like, very principled, um, very, like, hard-nosed scientists, um, very careful, and I don't know. I think it's just, it's such a incredible field. You have such high leverage in computer science in AI.

Elad Gil

Mm-hmm.

Liam Fedus

And so I think a lot of physicists were seeing that.

Elad Gil

Mm-hmm.

Liam Fedus

Um, particularly in, like, high-energy physics. Um, after the discovery of the Higgs, um, I think a lot of high-energy physicists were sort of looking for what's next.

Elad Gil

Mm-hmm.

Liam Fedus

Um, ultimately it becomes bottlenecked on the new, um, apparatus for, you know, pushing the next energy frontier, and I think a lot of physicists were looking at their skill set and looking at the progress elsewhere and, and saying like, "Hey, I think I could be a huge contributor elsewhere."
