a16z | Building an AI Physicist: ChatGPT Co-Creator’s Next Venture
At a glance
WHAT IT’S REALLY ABOUT
Periodic Labs aims to train AI scientists with experiment-in-loop learning
- Periodic Labs’ core thesis is that scientific discovery requires an iterative experiment–simulation loop, so real-world experiments should become the “reward function” for training agents, not just human preference or digital verifiers.
- They argue current frontier models struggle in physics and chemistry because the needed data is noisy, biased toward positive results, and often missing altogether—making lab-generated high-quality (including negative) data essential.
- The company’s first wedge is quantum-scale materials work (solid-state physics/chemistry), using automatable powder synthesis and characterization to run fast, verifiable iteration cycles.
- High-temperature superconductivity is positioned as a North Star because it’s both scientifically fundamental and technically tractable (a robust phase transition), while also forcing the team to build many reusable sub-capabilities (autonomous synthesis, characterization, simulation tool use).
- Commercialization is framed as “copilots” for advanced-industry R&D teams (semiconductors, space, defense, manufacturing), delivered via well-scoped deployments with clear evaluations and a land-and-expand approach.
IDEAS WORTH REMEMBERING
In physical sciences, “ground truth” is the experiment, not the model or simulator.
Periodic’s approach treats nature as the RL environment: simulations can guide search, but experimental measurements ultimately correct simulator deficiencies and provide hard-to-game reward signals.
Scaling laws can hold while still failing the task you care about.
They distinguish the scaling curve from its evaluation distribution: in-domain internet/test benchmarks may improve predictably, while out-of-domain physics tasks can have such a shallow improvement slope that progress would be impractically slow without targeted data and objectives.
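The slope argument can be made concrete with a toy power-law calculation. The exponents below are hypothetical illustrations, not measured values from the episode: under an assumed scaling law where error falls as compute to the power −α, the compute multiplier needed to halve error is 2^(1/α), which explodes when α is shallow.

```python
# Toy illustration of scaling-law slopes (hypothetical exponents,
# not measured values): if error ∝ compute**(-alpha), then halving
# the error requires multiplying compute by 2**(1/alpha).

def compute_multiplier_to_halve_error(alpha: float) -> float:
    """Return the factor by which compute must grow to halve error."""
    return 2.0 ** (1.0 / alpha)

in_domain = compute_multiplier_to_halve_error(0.30)      # steep slope
out_of_domain = compute_multiplier_to_halve_error(0.05)  # shallow slope

print(f"in-domain:     {in_domain:,.0f}x more compute")
print(f"out-of-domain: {out_of_domain:,.0f}x more compute")
```

With these assumed exponents, halving in-domain error costs roughly 10x compute, while halving out-of-domain error costs over a million times more, which is the sense in which progress without targeted data would be impractically slow.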
Science-ready models need iteration, not just static knowledge ingestion.
They argue discovery requires repeated cycles of hypothesize → simulate → test → update; without the ability to act and learn from outcomes (including failures), models can at best reproduce literature distributions rather than reduce epistemic uncertainty.
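The hypothesize → simulate → test → update cycle can be sketched as a loop. Everything here is a toy stand-in (none of it is Periodic Labs' actual system): the "experiment" function plays the role of nature's ground truth, while the simulator is a deliberately biased cheap screen, showing how experimental measurement keeps a flawed simulator from corrupting the final answer.

```python
import random

# Minimal sketch of the hypothesize -> simulate -> test -> update cycle.
# All functions are hypothetical toy stand-ins: the "experiment" is the
# ground-truth reward signal; the simulator is a biased, cheap guide.

def experiment(x: float) -> float:
    """Ground-truth property measurement (toy: true optimum at x = 0.7)."""
    return -(x - 0.7) ** 2

def simulate(x: float) -> float:
    """Imperfect simulator: systematically biased toward x = 0.5."""
    return -(x - 0.5) ** 2

def discovery_loop(n_cycles: int = 50, seed: int = 0) -> float:
    rng = random.Random(seed)
    best_x, best_measured = 0.5, experiment(0.5)
    for _ in range(n_cycles):
        # hypothesize: propose candidates near the current best
        candidates = [best_x + rng.gauss(0, 0.1) for _ in range(8)]
        # simulate: cheap screen picks the most promising candidate
        x = max(candidates, key=simulate)
        # test: the real experiment supplies the reward
        measured = experiment(x)
        # update: keep only what the experiment actually validated
        if measured > best_measured:
            best_x, best_measured = x, measured
    return best_x

print(discovery_loop())
```

Because acceptance is decided by the experiment rather than the simulator, the simulator's bias toward 0.5 slows the search but cannot make the loop accept a worse material; this mirrors the claim that experiments correct simulator deficiencies and are hard to game.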
The existing materials literature is an incomplete training set for discovery.
Reported properties can vary by orders of magnitude, negative results are rarely published, and context dependencies are high—so training purely on papers can yield weak predictors; a dedicated lab can systematically generate cleaner, more informative datasets.
Superconductivity is a strategic first domain because it is both inspiring and operationally practical.
As a phase transition, superconducting behavior is comparatively robust to hard-to-simulate microstructural details; it also provides a crisp success metric (raising Tc) and motivates building end-to-end autonomy across synthesis, characterization, and simulation.
WORDS WORTH SAVING
Ultimately, science is driven against experiment in the real world.
— Liam Fedus
I'd say the objective is let's replace the reward function from math graders and code graders that we're using today.
— Liam Fedus
LLMs can be very smart, but if they're not iterating on science, they won't discover science.
— Ekin Dogus Cubuk
Nature is our RL environment in our setting.
— Liam Fedus
If we could find a two hundred Kelvin superconductor, even before we make any product with it, that in itself says so much about the universe that we didn't know yet.
— Ekin Dogus Cubuk
AI-generated summary created from a speaker-labeled transcript.