Rajat Monga: TensorFlow | Lex Fridman Podcast #22

Lex Fridman Podcast · Jun 3, 2019 · 1h 10m

Lex Fridman (host), Rajat Monga (guest)

Early days of Google Brain and scaling deep learning at Google
Decision to open-source TensorFlow and its industry impact
Major design choices: graphs, production focus, and TensorFlow 2.0
Keras integration and simplifying APIs for beginners and enterprises
TensorFlow’s broader ecosystem (Lite, JS, TFX, TPUs, datasets, Hub)
Balancing research innovation with production stability and backward compatibility
Building and managing a large open-source community and engineering team

In this episode of the Lex Fridman Podcast, Lex Fridman speaks with Rajat Monga, engineering director at Google and co-founder of TensorFlow, about TensorFlow’s evolution, its growing ecosystem, and the impact of open-sourcing it.

Rajat Monga on TensorFlow’s evolution, ecosystem, and open-source impact

Rajat Monga, engineering director at Google and co-founder of TensorFlow, discusses the origins of Google Brain, the motivation and risk behind open-sourcing TensorFlow, and how it rapidly became a global standard for deep learning. He explains key design decisions (graphs, production focus, eager execution), the integration of Keras, and the tension between research agility and enterprise stability. Monga outlines TensorFlow’s expanding ecosystem across cloud, mobile, browser, and specialized hardware, emphasizing modularity, usability, and community-driven growth. He also touches on team-building, hiring, and how competition from projects like PyTorch has shaped TensorFlow 2.0 and beyond.

Key Takeaways

Open-sourcing TensorFlow catalyzed global adoption of deep learning.

Releasing a production-grade ML library from a major company signaled that open innovation is viable at scale, accelerating research, tooling, and industry uptake far beyond what internal use alone could achieve.

TensorFlow was designed from day one with production in mind.

The choice of a graph-based model and focus on deployment (data centers, mobile, TPUs) came from real Google product needs, which made it attractive to enterprises wanting reliability, scalability, and long-lived models.
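
The deployment advantage of a graph-based model can be sketched in modern TensorFlow, where `tf.function` traces a Python function into a portable graph that can be serialized and shipped without the Python runtime. This is a minimal illustration; the function and input shapes here are arbitrary examples, not anything from the episode.

```python
import tensorflow as tf

# A plain Python function: called directly, it runs eagerly, op by op.
def scale_and_shift(x):
    return 2.0 * x + 1.0

# Wrapping it in tf.function traces it into a graph, which is what
# makes deployment targets (SavedModel, mobile, TPUs) possible.
graph_fn = tf.function(scale_and_shift)

# Tracing against a concrete input signature yields a deployable
# concrete function (a graph specialized to float32 vectors).
concrete = graph_fn.get_concrete_function(
    tf.TensorSpec(shape=[None], dtype=tf.float32))

result = graph_fn(tf.constant([1.0, 2.0]))
```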

TensorFlow 2.0 pivots toward simplicity and usability without abandoning power.

By making eager execution the default and standardizing on Keras APIs, TensorFlow lowers the barrier for beginners and typical developers while still allowing advanced users to drop into lower-level constructs and graphs when needed.
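
That split shows up directly in TensorFlow 2.x code: eager execution is on by default, the standard entry point is the Keras API, and `tf.function` remains the escape hatch back to graphs. A minimal sketch (the layer sizes and optimizer choice are arbitrary):

```python
import tensorflow as tf

# TF 2.x executes eagerly by default: ops run immediately, like NumPy.
eager_by_default = tf.executing_eagerly()

# The standard high-level path is the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Advanced users can still drop into graph execution explicitly.
@tf.function
def fast_predict(x):
    return model(x)

out = fast_predict(tf.zeros([2, 4]))
```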

Pretrained models and curated datasets massively reduce the start-up cost for users.

Components like TensorFlow Hub and TensorFlow Datasets let users plug in proven architectures and standardized data quickly, enabling both hobbyists and enterprises to get value without deep ML expertise or pristine data infrastructure.
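
In practice, a pretrained encoder from TensorFlow Hub and a standardized dataset from TensorFlow Datasets both feed into the same `tf.data` input-pipeline pattern. The sketch below shows only that pipeline, with small in-memory tensors standing in for a TFDS download and a Hub model (both of which require network access):

```python
import tensorflow as tf

# Stand-ins: real code would use tfds.load(...) for the data and
# hub.KerasLayer(...) for a pretrained encoder.
features = tf.random.uniform([10, 4])
labels = tf.range(10, dtype=tf.float32)

# The standardized tf.data pipeline: slice, shuffle, batch, prefetch.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=10)
    .batch(4)
    .prefetch(tf.data.AUTOTUNE)
)

# Batch sizes: 10 examples in batches of 4 -> sizes 4, 4, 2.
batches = [x.shape[0] for x, _ in dataset]
```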

Maintaining backward compatibility is costly but crucial for trust and adoption.

Enterprises run models for years, so TensorFlow must preserve APIs and behavior where possible; Monga advocates designing new features as if from a clean slate, then carefully mapping them onto existing systems rather than constantly breaking users.

Competition from PyTorch and others directly influenced TensorFlow’s evolution.

PyTorch’s success with imperative-style research workflows helped push TensorFlow to prioritize eager execution and better research ergonomics, shortening the time it took to adopt these ideas and improving the overall ecosystem.

A healthy open-source ecosystem depends on process, transparency, and shared ownership.

TensorFlow’s growth (thousands of contributors and commits) required formal processes like RFCs, special interest groups, design reviews, and clearer modular boundaries so that external organizations and hardware vendors can extend it effectively.

Notable Quotes

We wanted to see if we could take what people were doing in research and scale it to what Google has in terms of compute power and data.

Rajat Monga

The decision to open source TensorFlow was one of the seminal moments in all of software engineering ever.

Lex Fridman

If you don’t have a graph, how do you deploy now? That’s what tipped the balance for us.

Rajat Monga

We had to pick one. Keras was clearly one that lots of people loved, so we settled on that.

Rajat Monga

Unless you design with a clean slate and not worry about [backwards compatibility], you’ll never get to a good place.

Rajat Monga

Questions Answered in This Episode

How will TensorFlow balance backward compatibility with the need to rapidly support new hardware and algorithmic paradigms over the next decade?

In what areas does TensorFlow still lag behind PyTorch for cutting-edge research workflows, and how might future versions address that?

How can organizations with messy, non-digitized data realistically prepare to benefit from TensorFlow and deep learning?

What concrete steps is the TensorFlow team taking to make the core more modular so hardware vendors and large enterprises can plug in custom components more easily?

If you were designing a new ML framework from scratch today, what would you do differently based on everything learned from TensorFlow 1.x and 2.x?

Transcript Preview

Lex Fridman

The following is a conversation with Rajat Monga. He's an engineering director at Google, leading the TensorFlow team. TensorFlow is an open source library at the center of much of the work going on in the world in deep learning, both the cutting edge research and the large-scale application of learning-based approaches. But it's quickly becoming much more than a software library. It's now an ecosystem of tools for the deployment of machine learning in the cloud, on the phone, in the browser, on both generic and specialized hardware, TPU, GPU, and so on. Plus, there's a big emphasis on growing a passionate community of developers. Rajat, Jeff Dean, and a large team of engineers at Google Brain are working to define the future of machine learning with TensorFlow 2.0, which is now in alpha. I think the decision to open source TensorFlow was a definitive moment in the tech industry. It showed that open innovation could be successful, and inspired many companies to open source their code, to publish, and in general, engage in the open exchange of ideas. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter, @LexFridman, spelled F-R-I-D. And now, here's my conversation with Rajat Monga. You were involved with Google Brain since its start in 2011 with, uh, Jeff Dean. It started with DistBelief, the proprietary machine learning library, and turned into TensorFlow in 2014, the open source library. So, what were the early days of Google Brain like? What were the goals, the missions? How do you even proceed forward once there's so much possibilities before you?

Rajat Monga

It was interesting back then, you know, when I started out, when you, you were even just talking about it. The idea of deep learning was interesting and intriguing in some ways. It hadn't yet taken off, but it held some promise and it showed some very promising and early results. I think the, the idea where Andrew and Jeff had started was, what if we can take this, what people are doing in research, and scale it to what Google has in terms of the compute power? And, uh, also put that kind of data together, what does it mean? And so far, the results have been if you scale the compute, scale the data, it does better, and would that work. And so that, that was the first year or two, "Can we prove that out right?" And with DistBelief, when we started the first year, we got some early wins, which, which is always great.

Lex Fridman

What were the wins like? What was the wins where you were, "There's some promise to this, this is gonna be good"?

Rajat Monga

I think the two early wins were, one was speech that we collaborated very closely with the speech research team who was also getting interested in this, and the other one was on images where we, you know, the cat paper as we call it-
