No Priors

AI for Atoms: How Periodic Labs is Revolutionizing Materials Engineering with Co-Founder Liam Fedus

What happens when you apply the scaling laws of large language models to the physical world of atoms? Elad Gil sits down with Liam Fedus, co-founder at Periodic Labs, which is pioneering an AI foundation lab for atoms. Liam discusses how he pivoted from dark matter physics research to the front lines of artificial intelligence, including stints at Google Brain and working on ChatGPT at OpenAI. He talks about how Periodic is connecting massive language models to the physical world to overcome data bottlenecks in materials science. Liam also shares how they use language models as an orchestration layer operating alongside specialized neural nets to run closed-loop physical experiments. They also explore the future of AGI and ASI, as well as the role of robotics in lab automation.

Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @LiamFedus | @periodiclabs

Chapters:
00:00 – Cold Open
00:05 – Liam Fedus Introduction
00:39 – Liam’s Background at Google Brain, OpenAI
05:14 – From ChatGPT to Materials and Atoms
06:34 – Training Data in the Physical World
09:52 – Generalization Across Domains
11:31 – Models as an Orchestration Layer
12:48 – Commercialization and Business Model
16:10 – How Periodic’s Success May Shape the Future
17:45 – Multidisciplinary Scaling
19:41 – Capital and Compute
21:12 – Hiring at Periodic
21:44 – Thoughts on AGI and ASI
23:30 – Timeline for Machine-Directed Self-Improvement
25:39 – Automation and Data Generation
27:59 – Why Liam is Excited About the Future of Robotics
29:25 – Conclusion

Elad Gil, host
Apr 2, 2026 · 29m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Periodic Labs builds AI-driven closed loops to accelerate materials discovery

  1. Fedus traces his path from physics and scaling-era research at Google Brain to OpenAI, where he worked on productionizing GPT-4 and on ChatGPT’s early formation as a general chatbot product.
  2. Periodic Labs’ thesis is that major scientific acceleration requires closing the loop between AI systems and real-world experiments, not just text-only reasoning or literature digestion.
  3. The key bottleneck in materials AI is not only model capability but grounded, high-quality, diverse experimental data—because literature values can be inconsistent by orders of magnitude.
  4. Periodic uses large language models primarily as an orchestration layer that coordinates literature, internal data, simulations, and specialized symmetry-aware atomic neural nets.
  5. Fedus argues “intelligence” is spiky and domain-dependent: rapid machine self-improvement is already emerging in verifiable domains like coding, while physical-world progress hinges on automation, data generation, and robotics reliability.

IDEAS WORTH REMEMBERING

Scientific progress won’t “scale” like AI without physical closed loops.

Fedus’ core claim is that text-only systems won’t drive order-of-magnitude gains in science unless they can plan experiments, observe reality, and update beliefs from grounded measurements in an iterative loop.
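A minimal sketch of that loop, in Python: the “experiment” here is a stand-in noisy measurement, the candidate space is a toy 1-D grid, and the planning rule is simple farthest-point exploration. All of these are illustrative assumptions, not anything Periodic Labs has described; the point is only the plan, measure, update cycle.

```python
# Illustrative closed loop: plan experiments, observe reality, update beliefs.
# run_experiment() stands in for a real measurement (synthesis + characterization);
# the "model" is simply the set of grounded measurements collected so far.
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(x):
    """Hypothetical noisy measurement of a hidden property landscape."""
    return np.sin(3 * x) + 0.05 * rng.normal()

candidates = np.linspace(0, 2, 200)      # toy composition/process space
measured_x = np.array([0.0, 2.0])        # seed experiments
measured_y = np.array([run_experiment(x) for x in measured_x])

for _ in range(10):                      # experimental budget
    # Plan: pick the candidate farthest from anything already measured,
    # i.e. where current beliefs are least grounded.
    dists = np.min(np.abs(candidates[:, None] - measured_x[None, :]), axis=1)
    x_next = candidates[np.argmax(dists)]
    # Observe reality, then fold the grounded result back into the loop.
    measured_x = np.append(measured_x, x_next)
    measured_y = np.append(measured_y, run_experiment(x_next))

best = measured_x[np.argmax(measured_y)]
print(f"best candidate found by the loop: x = {best:.2f}")
```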

Materials data quality is a first-order problem, not a detail.

He notes that literature-extracted material properties can vary by orders of magnitude; training on that distribution yields uncertainty rather than truth, making curated experimental grounding essential.
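A toy numerical illustration of why that matters (the numbers are invented, not from the episode): five literature-reported values for the same property spanning two orders of magnitude leave a model learning the spread, not the value.

```python
# Hypothetical example: five literature-reported values for one material property.
import numpy as np

reported = np.array([0.3, 1.2, 4.0, 11.0, 30.0])  # invented values, same nominal material
log_vals = np.log10(reported)
print(f"max/min spread: {reported.max() / reported.min():.0f}x")        # 100x
print(f"log10 mean +/- std: {log_vals.mean():.2f} +/- {log_vals.std():.2f}")
# A model trained on this distribution can only learn the spread as irreducible
# uncertainty; a single curated measurement pins the value down.
```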

Foundation-model priors improve sample efficiency in new scientific domains.

Periodic leverages the general-world prior learned from tens of trillions of tokens (papers and internet) so models are not “randomly initialized,” reducing the number of experiments needed once targeted exploration begins.

The winning architecture is a system-of-systems, not one monolithic model.

Periodic uses LLMs as a natural-language interface and orchestration layer while delegating fast, symmetry-aware atomic predictions to specialized neural nets used as tools/reward functions within a larger workflow.
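As a rough sketch of that pattern, here is a minimal tool-dispatch harness in Python. The tool names, the stub atomic model, and the dispatcher are all hypothetical illustrations, not Periodic Labs’ actual interfaces; in a real workflow the decision dict would be emitted by the language model itself.

```python
# Sketch of an LLM-as-orchestrator pattern: the language model chooses a tool,
# a harness executes it, and the observation feeds the next reasoning step.
# Every name here (fast_atomic_model, search_internal_data) is a made-up stub.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[[str], str]

def fast_atomic_model(structure: str) -> str:
    # Stand-in for a fast, symmetry-aware property predictor used as a
    # tool/reward function inside the larger workflow.
    return f"predicted formation energy for {structure}: -1.23 eV/atom (stub)"

def search_internal_data(query: str) -> str:
    return f"3 prior experiments matching '{query}' (stub)"

TOOLS = {t.name: t for t in (
    Tool("atomic_model", "fast atomic property prediction", fast_atomic_model),
    Tool("internal_data", "search the lab's experimental records", search_internal_data),
)}

def orchestrate(decision: dict) -> str:
    """Execute one tool call chosen by the language model."""
    tool = TOOLS[decision["tool"]]
    return tool.fn(decision["input"])

# In practice the decision comes from the LLM; hard-coded here to show control flow.
print(orchestrate({"tool": "atomic_model", "input": "Fe2O3 (hematite)"}))
```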

Generalization exists within physics regimes, but doesn’t magically transfer across them.

Fedus suggests models can generalize across quantum-governed phenomena, but that competence won’t necessarily help in other regimes like fluid dynamics—implying domain partitioning matters.

WORDS WORTH SAVING

Science ultimately isn't sitting in a room thinking really hard. You have to conduct experiments, you have to learn from them, you have to interface with reality.

Liam Fedus

It's not just, like, a pool of data. It's this interactive closed-loop system that is so powerful.

Liam Fedus

We think about [language models] almost as, like, an orchestration layer.

Liam Fedus

One fallacy is thinking about intelligence as a scalar. We've consistently seen these systems have a very odd spikiness.

Liam Fedus

Just because atoms are hard doesn't mean there's not an order of magnitude or two to speed up.

Liam Fedus

TOPICS COVERED

Fedus’ background: physics → Google Brain → OpenAI
Why physicists migrate into AI
From ChatGPT to physical-world science
Experimental data vs literature and simulation data
Closed-loop experimentation systems
LLMs as orchestration/control plane
Specialized symmetry-aware atomic models
Commercialization: software layer vs discovery model
Scaling laws mindset applied to materials R&D
Capital intensity: compute vs lab infrastructure
Multidisciplinary teams (AI, physics, chemistry, engineering)
AGI/ASI skepticism: spiky, domain-specific intelligence
Robotics as accelerator for lab throughput
