Dwarkesh Podcast

Michael Nielsen on Dwarkesh Patel: Why Ether Died Slowly

Lorentz arrived at the same equations as Einstein while keeping the ether ontology, and Michelson–Morley ruled out only an ether wind — so a single result cannot force a paradigm shift.

Dwarkesh Patel (host) · Michael Nielsen (guest)
Apr 7, 2026 · 2h 3m · Watch on YouTube ↗

CHAPTERS

  1. Michelson–Morley: what the experiment did (and didn’t) falsify

    Nielsen retells the Michelson–Morley story as scientists at the time understood it: not a clean disproof of “the ether,” but evidence against specific ether models (such as an ether wind). The chapter highlights how later textbook narratives compress a messy research program into a simple crisis-and-resolution arc.

  2. Lorentz vs. Einstein: same equations, different meanings

    They discuss how Lorentz developed transformations mathematically identical to those of special relativity but interpreted them through a privileged ether frame. The conversation emphasizes that physics often advances by changing interpretation and ontology, not just by fitting equations.

  3. When verification arrives decades later: muons and time dilation

    Nielsen uses cosmic-ray muon lifetime measurements (mid-20th century) as an example of a long verification loop that eventually makes the ‘time really changes’ interpretation feel unavoidable. This illustrates how communities sometimes commit to frameworks before the most decisive empirical clinchers appear.

  4. Copernicus vs. Ptolemy: progress without being simpler or more accurate (yet)

    Dwarkesh presses on why heliocentrism was worth adopting despite weaker predictive accuracy and the epicycles that early Copernican models still required. Nielsen points to later unification — especially the Newtonian synthesis — as a key reason some frameworks prove more compelling than their early scorecards suggest.

  5. Why natural selection wasn’t obvious earlier: prerequisites and “making the case”

    They explore why Darwin’s idea took so long to land even though breeders already understood pieces of selection. The key barrier wasn’t just the core mechanism but assembling deep time, geology, biogeography, and a persuasive evidential web — while also grappling with missing mechanisms such as heredity.

  6. Automation limits: AlphaFold, data accumulation, and what counts as explanation

    AlphaFold is treated as both a signature AI success and a reminder that decades of experimental infrastructure — protein databanks, imaging techniques, funding — were the main driver. They debate whether large neural models count as scientific explanations in the way general relativity does, or are a new kind of object from which explanations might be extracted.

  7. Could gradient descent find general relativity? Big theory shifts and forcing functions

    Dwarkesh worries that optimizing for observational fit might just produce “more epicycles,” missing global theory swaps. Nielsen frames Einstein’s path to GR as driven by incompatibilities — finite signal speed versus action at a distance — plus long exploration through ugly intermediates before landing on a simple final form.

  8. Why aliens will have a different tech stack: the tech tree is vast and path-dependent

    Nielsen argues that “science finished by a theory of everything” is a category error: even after fundamentals, there’s immense unexplored combinatorial space (as in computer science). Differences in perception, embodiment, and early choices could steer civilizations into distinct regions of the tech tree—yielding genuinely different technological stacks.

  9. Diminishing returns vs. “new desserts”: why new fields keep appearing

    They examine the common ‘low-hanging fruit is gone’ story and Nielsen’s counter: progress is not a fixed buffet—new desserts get restocked when new fields, tools, and representations open up. Attention, fashion, and institutional dynamics determine which frontiers get resourced and which remain invisible.

  10. Are there infinitely many deep principles? Measuring progress and the Bloom “ideas get harder” result

    Dwarkesh asks whether there are endlessly many Noether/Church–Turing-level principles left, given empirical evidence that maintaining progress requires more researchers. Nielsen is skeptical that narrow productivity metrics capture the arrival of new fields and spillovers, and suggests institutional and tool changes can reset the difficulty curve.

  11. Quantum computing’s origin story: why the field ignited in the 1980s and why Nielsen joined early

    Nielsen explains why quantum computing could have been invented earlier (von Neumann era) but wasn’t: computation became salient and single-quantum-state control emerged around the same time. He recounts how a mentor handed him foundational papers in 1992, and why Deutsch/Feynman-style questions felt both deep and tractable.

  12. Open science and the credit economy: what ‘success’ looks like and what’s still missing

    They treat open science as a redesign of the political economy of knowledge: papers, code, data, and preprints all sit in different credit regimes. Nielsen emphasizes that attribution norms are socially constructed (physics vs. biology preprints) and that better incentive systems can unlock more collective progress.

  13. Prolificness vs. depth—and how to actually internalize what you learn

    The conversation turns personal: how creators balance routine output with high-variance exploration, and how “deep learning” (in the human sense) often requires demanding stakes and time spent stuck. They discuss why podcasts (and now LLMs) can create an illusion of understanding unless paired with forcing functions like exercises, writing, or building artifacts.
