No Priors | AI for Atoms: How Periodic Labs Is Revolutionizing Materials Engineering with Co-Founder Liam Fedus
At a glance
WHAT IT’S REALLY ABOUT
Periodic Labs builds AI-driven closed loops to accelerate materials discovery
- Fedus traces his path from physics, through scaling-era research at Google Brain, to productionizing GPT-4 at OpenAI and the early formation of ChatGPT as a general chatbot product.
- Periodic Labs’ thesis is that major scientific acceleration requires closing the loop between AI systems and real-world experiments, not just text-only reasoning or literature digestion.
- The key bottleneck in materials AI is not only model capability but grounded, high-quality, diverse experimental data—because literature values can be inconsistent by orders of magnitude.
- Periodic uses large language models primarily as an orchestration layer that coordinates literature, internal data, simulations, and specialized symmetry-aware atomic neural nets.
- Fedus argues “intelligence” is spiky and domain-dependent: rapid machine self-improvement is already emerging in verifiable domains like coding, while physical-world progress hinges on automation, data generation, and robotics reliability.
IDEAS WORTH REMEMBERING
Scientific progress won’t “scale” like AI without physical closed loops.
Fedus’ core claim is that text-only systems won’t drive order-of-magnitude gains in science unless they can plan experiments, observe reality, and update beliefs from grounded measurements in an iterative loop.
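As a concrete picture of that loop, here is a minimal Python sketch of a propose-measure-update cycle. Every name in it (ExperimentPlanner, Lab, the toy property curve) is a hypothetical stand-in for illustration, not Periodic Labs’ actual stack.

```python
"""Minimal sketch of a propose-measure-update experimental loop.

All classes and the toy 'ground truth' below are illustrative
assumptions, not Periodic Labs' actual interfaces.
"""
import random


class ExperimentPlanner:
    """Stand-in for an AI system that plans experiments from prior results."""

    def propose(self, history):
        # Toy policy: explore a random composition parameter in [0, 1]
        return {"composition_x": random.random()}

    def update(self, experiment, measurement):
        # A real system would fold the grounded result back into its beliefs here
        pass


class Lab:
    """Stand-in for the physical lab that grounds the loop in reality."""

    def run(self, experiment):
        # Toy 'ground truth': a noisy property curve over composition
        x = experiment["composition_x"]
        return 4 * x * (1 - x) + random.gauss(0, 0.05)


def closed_loop(planner, lab, n_rounds=10):
    history = []
    for _ in range(n_rounds):
        experiment = planner.propose(history)    # plan the next experiment
        measurement = lab.run(experiment)        # interface with reality
        planner.update(experiment, measurement)  # learn from the measurement
        history.append((experiment, measurement))
    return history


if __name__ == "__main__":
    results = closed_loop(ExperimentPlanner(), Lab())
    best = max(results, key=lambda r: r[1])
    print(f"Best measured property {best[1]:.3f} at {best[0]}")
```

The point of the sketch is the control flow: the planner only improves because each iteration is grounded by a real measurement, which is the “interactive closed-loop system” Fedus describes.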
Materials data quality is a first-order problem, not a detail.
He notes that literature-extracted material properties can vary by orders of magnitude; training on that distribution yields uncertainty rather than truth, making curated experimental grounding essential.
Foundation-model priors improve sample efficiency in new scientific domains.
Periodic leverages the general-world prior learned from tens of trillions of tokens of papers and internet text, so models are not “randomly initialized,” reducing the number of experiments needed once targeted exploration begins.
The winning architecture is a system-of-systems, not one monolithic model.
Periodic uses LLMs as a natural-language interface and orchestration layer while delegating fast, symmetry-aware atomic predictions to specialized neural nets used as tools/reward functions within a larger workflow.
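One way to picture that system-of-systems pattern is a simple tool-dispatch layer, sketched below in Python. The tool names (search_literature, predict_energy, run_simulation) and the dispatch interface are illustrative assumptions; the episode does not specify Periodic’s actual APIs.

```python
"""Sketch of an LLM-as-orchestrator dispatching to specialized tools.

Tool names and the dispatch interface are hypothetical illustrations,
not Periodic Labs' actual system.
"""
from typing import Callable


def search_literature(query: str) -> str:
    """Stand-in for retrieval over papers and internal lab data."""
    return f"top passages for: {query}"


def predict_energy(structure: str) -> float:
    """Stand-in for a fast, symmetry-aware atomic neural net used as a
    predictor/reward function inside the larger workflow."""
    return -1.23  # dummy formation energy, eV/atom


def run_simulation(structure: str) -> dict:
    """Stand-in for a higher-fidelity physics simulation check."""
    return {"converged": True, "energy": -1.20}


# Registry of specialized tools the orchestrating LLM can call.
TOOLS: dict[str, Callable] = {
    "search_literature": search_literature,
    "predict_energy": predict_energy,
    "run_simulation": run_simulation,
}


def orchestrate(plan: list[tuple[str, str]]) -> list:
    """Execute a tool plan; the LLM sits above this layer, deciding
    which tool to call and with what arguments."""
    return [TOOLS[name](arg) for name, arg in plan]


if __name__ == "__main__":
    print(orchestrate([
        ("search_literature", "high-Tc superconductor candidates"),
        ("predict_energy", "La-H cage structure"),
        ("run_simulation", "La-H cage structure"),
    ]))
```

In a real agentic setup the LLM would choose each call based on the previous result rather than executing a fixed plan; the fixed list here just keeps the sketch short.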
Generalization exists within physics regimes, but doesn’t magically transfer across them.
Fedus suggests models can generalize across quantum-governed phenomena, but that competence won’t necessarily help in other regimes like fluid dynamics—implying domain partitioning matters.
WORDS WORTH SAVING
Science ultimately isn’t sitting in a room thinking really hard. You have to conduct experiments, you have to learn from them, you have to interface with reality.
— Liam Fedus
It's not just, like, a pool of data. It's this interactive closed-loop system that is so powerful.
— Liam Fedus
We think about [language models] almost as, like, an orchestration layer.
— Liam Fedus
One fallacy is thinking about intelligence as a scalar. We've consistently seen these systems have a very odd spikiness.
— Liam Fedus
Just because atoms are hard doesn't mean there's not an order of magnitude or two to speed up.
— Liam Fedus