Chris Lattner: Compilers, LLVM, Swift, TPU, and ML Accelerators | Lex Fridman Podcast #21

Lex Fridman Podcast · May 13, 2019 · 1h 13m

Lex Fridman (host), Chris Lattner (guest)

- Fundamentals of compilers, LLVM, and Clang as reusable infrastructure
- History and design philosophy behind Swift, including safety and usability
- Compiler optimizations, hardware evolution, and the analogy to neural networks
- Swift for TensorFlow, automatic differentiation, and graph-based ML execution
- Google TPUs, bfloat16, and hardware–software co-design for ML accelerators
- MLIR as a shared, next-generation compiler framework for machine learning
- Open source culture, community-building, and Lattner’s experiences at Apple, Google, and Tesla

In this episode of the Lex Fridman Podcast, Lex Fridman talks with Chris Lattner about compilers, LLVM, Swift, TPUs, and the future infrastructure of machine learning.

Chris Lattner on Compilers, Swift, TPUs, and ML’s Future Infrastructure

Chris Lattner discusses his journey from early programming to creating LLVM, Clang, and Swift, and how compilers bridge human intent and diverse hardware. He explains the architecture of compilers, why LLVM succeeded as reusable infrastructure, and how Swift was designed for safety, performance, and ease of learning. The conversation then moves to machine learning: TPUs, Swift for TensorFlow, automatic differentiation, and MLIR as a next-generation compiler layer for ML systems. Lattner also reflects on open source culture, his brief but intense experience at Tesla, and the organizational and human aspects of building large-scale compiler communities.

Key Takeaways

Compilers are the critical bridge between human intent and heterogeneous hardware.

Lattner frames compilers as multi-phase systems (front end, optimizer, back end) that let humans write in high-level languages while targeting many different architectures, from CPUs and GPUs to specialized ML accelerators.
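The three phases Lattner describes can be sketched in a few lines of Python. This is a toy, not a real compiler: the "language" is arithmetic over one variable, and the "hardware" is an invented stack machine, so all instruction names below are made up for illustration.

```python
# Toy three-phase compiler: front end -> optimizer -> back end.
import ast

def front_end(source):
    """Parse source text into an AST (the front end's job)."""
    return ast.parse(source, mode="eval").body

def optimizer(node):
    """Constant-fold binary ops whose operands are literals (a middle-end pass)."""
    if isinstance(node, ast.BinOp):
        node.left = optimizer(node.left)
        node.right = optimizer(node.right)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            fold = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
            return ast.Constant(fold[type(node.op)](node.left.value, node.right.value))
    return node

def back_end(node, out):
    """Emit instructions for a made-up stack machine (the back end's job)."""
    if isinstance(node, ast.Constant):
        out.append(f"PUSH {node.value}")
    elif isinstance(node, ast.Name):
        out.append(f"LOAD {node.id}")
    elif isinstance(node, ast.BinOp):
        back_end(node.left, out)
        back_end(node.right, out)
        out.append("ADD" if isinstance(node.op, ast.Add) else "MUL")
    return out

code = back_end(optimizer(front_end("2 * 3 + x")), [])
print(code)  # the optimizer folded 2*3, so the back end emits PUSH 6 first
```

Swapping in a different `back_end` retargets the same front end and optimizer to new hardware, which is exactly the reuse LLVM's phase separation enables.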

LLVM’s real innovation is infrastructure and community, not a single magic optimization.

By standardizing the middle and back ends of the compiler stack, LLVM enabled many languages and tools (Swift, Rust, Julia, Clang, and others) to share one mature optimizer and code generator instead of each rebuilding that infrastructure from scratch.

Language design trade-offs in Swift prioritize safety, performance, and learnability simultaneously.

Swift was created partly because Objective‑C couldn’t be made memory-safe without ceasing to be Objective‑C; Swift therefore embraces strong typing, safety, and progressive disclosure of complexity—so beginners can start simple while experts still get low-level control and performance.

Compiler concepts naturally extend into machine learning via graph and IR design.

Intermediate representations (IRs) in compilers are graph-like and language-agnostic, analogous to computation graphs in ML; this makes compiler technology a natural foundation for things like TensorFlow graphs, XLA, and future ML-specific IRs.
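The graph analogy can be made concrete with a minimal computation-graph "IR". The node structure and helper names below are invented for illustration; nothing about the graph is tied to any source language, which is the point.

```python
# A minimal computation-graph IR: nodes are operations, edges are data
# dependencies, and the graph can be evaluated (or optimized) independently
# of whatever language produced it.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

def evaluate(node, env):
    """Walk the graph bottom-up, like an interpreter walking an IR."""
    if node.op == "const":
        return node.value
    if node.op == "input":
        return env[node.value]
    args = [evaluate(i, env) for i in node.inputs]
    return {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}[node.op](*args)

# y = x * w + b, the core of a linear layer, expressed as a graph:
x = Node("input", value="x")
w = Node("input", value="w")
b = Node("input", value="b")
y = Node("add", [Node("mul", [x, w]), b])
print(evaluate(y, {"x": 2.0, "w": 3.0, "b": 1.0}))  # 7.0
```

A TensorFlow graph or an XLA computation is, structurally, a much richer version of this same idea.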

Swift for TensorFlow explores what’s possible when the *language* itself is ML-aware.

Unlike thin bindings around TensorFlow in other languages, Swift for TensorFlow integrates features like automatic differentiation and graph construction into the language and compiler, allowing more powerful analysis and optimization than library-only approaches in Python.
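The core idea of automatic differentiation can be illustrated with forward-mode dual numbers. This is a generic sketch of the technique, not how Swift for TensorFlow actually works (its compiler integrates reverse-mode differentiation); the class below is invented for illustration.

```python
# Forward-mode automatic differentiation via dual numbers: each value carries
# its derivative with respect to one chosen input, propagated by the chain rule.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x   # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(2.0, dot=1.0)         # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)            # f(2) = 12.0, f'(2) = 14.0
```

A language-integrated system applies this kind of transformation in the compiler, so ordinary functions become differentiable without a tracing library in the way.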

TPUs exemplify tight hardware–software co-design, including novel numeric formats.

Google’s TPUs use bfloat16 to trade precision for dynamic range and hardware efficiency, reflecting an explicit co-design loop between ML algorithms, compiler/runtime, and custom hardware to maximize performance per watt and per dollar.
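The precision-for-range trade is easy to see at the bit level: bfloat16 keeps float32's sign bit and all 8 exponent bits but truncates the 23-bit mantissa to 7, so conversion amounts to rounding away the low 16 bits of a float32. A rough sketch (ignoring NaN handling):

```python
# Sketch of float32 -> bfloat16 rounding by truncating the low 16 bits.
import struct

def to_bfloat16(x):
    """Round a Python float to the nearest bfloat16-representable value."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]   # float32 bit pattern
    # round to nearest, ties to even, then drop the low 16 mantissa bits
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Same dynamic range as float32: huge magnitudes survive the conversion...
print(to_bfloat16(3.0e38))
# ...but far less precision: only about 2-3 significant decimal digits.
print(to_bfloat16(1.001))  # rounds to 1.0
```

Because the exponent field matches float32, overflow behavior is unchanged, which is why training often tolerates bfloat16 better than IEEE float16 despite the coarser mantissa.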

MLIR aims to be “LLVM 2.0” for ML, reducing duplicated effort across stacks.

With many overlapping ML compiler projects (XLA, TensorRT, nGraph, and others) each rebuilding similar infrastructure, MLIR aims to provide a shared framework of reusable representations and passes so these stacks can stop duplicating effort.

Notable Quotes

Compilers are the art of allowing humans to think at the level of abstraction they want, and then getting that program to run on a specific piece of hardware.

Chris Lattner

The interesting thing about LLVM is not the innovations in compiler research; it’s that through standardization, it made things possible that otherwise wouldn’t have happened.

Chris Lattner

You can’t implement C++ without thinking, ‘There has to be a better thing.’

Chris Lattner

Technology is neither good nor bad. It’s how you apply it.

Chris Lattner

My goal is to change the world and make it a better place, and that’s what I’m really motivated to do.

Chris Lattner

Questions Answered in This Episode

If you were designing a new systems language today from scratch, what would you do differently from Swift, given what you’ve learned from Swift for TensorFlow and MLIR?

How far can automatic differentiation and ML-specific compiler optimizations be pushed before they start to constrain model and algorithm design?

In what scenarios do you think JIT compilation and dynamic execution will ultimately win over static compilation for production ML workloads, and why?

How should hardware designers and compiler engineers collaborate differently to keep pace with the rapid evolution of ML models and training techniques?

What governance or community structures are needed to make MLIR as successful and vendor-neutral as LLVM has been for general-purpose compilation?

Transcript Preview

Lex Fridman

The following is a conversation with Chris Lattner. Currently, he's a senior director at Google working on several projects, including CPU, GPU, TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He's one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM Compiler Infrastructure Project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as vice president of Autopilot Software during the transition from Autopilot Hardware 1.0 to Hardware 2.0, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. I could have easily talked to Chris for many more hours. Compiling code down across the levels of abstraction is one of the most fundamental and fascinating aspects of what computers do, and he is one of the world experts in this process. It's rigorous science, and it's messy, beautiful art. This conversation is part of The Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now here's my conversation with Chris Lattner. What was the first program you've ever written?

Chris Lattner

My first program-

Lex Fridman

Hmm, back. And when was it?

Chris Lattner

I think I started as a kid, and my parents got a BASIC programming book. And so when I started, it was typing out programs from a book, and seeing how they worked, and then typing them in wrong and trying to figure out why (laughs) they were not working right, that kinda stuff.

Lex Fridman

So BASIC. What was the first language that you remember yourself maybe falling in love with, like really connecting with?

Chris Lattner

Oh. I don't know, I mean, I feel like I've learned a lot along the way and, and each of them have a different special thing about them. So I started in BASIC and then went, like, GW-BASIC, which was the thing back in the DOS days, and then upgraded to QBasic and eventually QuickBASIC, which are all slightly more fancy versions of Microsoft BASIC. Um, made the jump to Pascal and started doing machine language programming in Assembly in Pascal, which was really cool. Turbo Pascal was amazing for its day. Um, eventually got into C, C++, and then kinda did lots of other weird things, so.

Lex Fridman

I feel like you took the dark path, which is the, uh... You coulda, you coulda gone Lisp.

Chris Lattner

Yeah, yeah.

Lex Fridman

You coulda gone higher level-

Chris Lattner

Yeah.

Lex Fridman

... sort of functional, philosophical, hippie route.

Chris Lattner

Yeah.

Lex Fridman

Instead, you went into, like, the dark arts-

Chris Lattner

Yeah.

Lex Fridman

... of the C++.

Chris Lattner

It was straight, straight into the machine, right?

Lex Fridman

Straight to the machine.

Chris Lattner

And so I s- so started with BASIC, Pascal, and then Assembly, and then wrote a lot of Assembly. And, um-

Add to Chrome