Lex Fridman Podcast

Jeremy Howard: fast.ai Deep Learning Courses and Research | Lex Fridman Podcast #35

Lex Fridman talks with Jeremy Howard about democratizing deep learning, tools, and real impact.

Host: Lex Fridman · Guest: Jeremy Howard
Aug 27, 2019 · 1h 44m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Jeremy Howard on democratizing deep learning, tools, and real impact

  1. Jeremy Howard discusses his path from early programming and music to founding fast.ai, emphasizing a lifelong focus on practical data work and useful tools. He contrasts different programming languages and paradigms, arguing that current Python-based deep learning stacks are powerful but fundamentally constrained for real innovation, and outlines why he’s betting on Swift and MLIR-style compilers. A large portion of the conversation centers on making deep learning accessible—through fast.ai courses, better tooling, and cloud setups—so domain experts (especially in areas like medicine) can solve real problems with limited data and compute. He also critiques academic incentives and the over-reliance on big data and big compute, and stresses the ethical responsibilities of data scientists and the labor implications of AI.

IDEAS WORTH REMEMBERING

5 ideas

Practical problems and domain expertise should drive deep learning work.

Howard argues the most valuable AI work comes from domain experts applying deep learning to real problems (e.g., medicine, fisheries, media bias), not from abstract benchmarks; you should pair technical skills with a field you deeply understand.

You rarely need “big data” or massive compute to get state-of-the-art results.

Through transfer learning and careful experimentation, fast.ai repeatedly achieves top performance on modest hardware and smaller datasets, showing most organizations can do serious deep learning without Google-scale resources.

Current Python/CUDA stacks limit innovation; better compiler and language design are crucial.

Because Python is slow and forces critical loops into low-level CUDA C, many ideas (e.g., sparse CNNs, novel RNN kernels) are practically un-researchable; Howard is optimistic about Swift + MLIR-style tensor DSLs that let researchers write fast, hackable kernels in a high-level language.

Research incentives push academics toward tiny gains on trendy problems, not impactful ideas.

He notes that fields like transfer learning and active learning, which radically reduce data and labeling needs, were largely ignored until someone showed dramatic results, while papers that improve well-studied benchmarks by small margins are rewarded.

The fastest path to competence in deep learning is to train many models on your own data.

Howard stresses that learners should quickly fine-tune pretrained models on personally relevant datasets, inspect inputs/outputs and errors, and iterate; this builds intuition far more effectively than reading theory alone.

WORDS WORTH SAVING

5 quotes

Most of the research in the deep learning world is a total waste of time.

Jeremy Howard

I think all the major breakthroughs in AI in the next 20 years will be doable on a single GPU.

Jeremy Howard

We don’t need more experts creating slightly evolutionary research in areas that everybody is studying. We need experts using deep learning to solve real problems.

Jeremy Howard

The key differentiator between people that succeed and people that fail is tenacity.

Jeremy Howard

I don’t think we need a lot of new technological breakthroughs to do a lot of great work right now.

Jeremy Howard

TOPICS

Jeremy Howard’s background, early programming, and love of data-centric tools

Programming language paradigms (APL/J, K, Delphi, Perl, Python, Swift) and their impact on deep learning research

Limits of current deep learning tooling (Python, CUDA, TensorFlow) and the promise of Swift + MLIR

fast.ai’s mission: democratizing deep learning via practical courses, libraries, and cloud accessibility

Applied deep learning in medicine and global healthcare (triage, diagnostics, regulation, data privacy)

Research vs. practice: transfer learning, active learning, optimization, and DAWNBench performance work

Learning, teaching, and careers: how to become effective in deep learning and build meaningful startups

Ethics, labor displacement, and responsibilities of data scientists

High quality AI-generated summary created from speaker-labeled transcript.
