Lex Fridman Podcast

Rajat Monga: TensorFlow | Lex Fridman Podcast #22

Lex Fridman and Rajat Monga on TensorFlow’s evolution, ecosystem, and open-source impact.

Lex Fridman (host) · Rajat Monga (guest)
Jun 3, 2019 · 1h 10m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–15:00

    1. LF

      The following is a conversation with Rajat Monga. He's an engineering director at Google, leading the TensorFlow team. TensorFlow is an open source library at the center of much of the work going on in the world in deep learning, both the cutting edge research and the large-scale application of learning-based approaches. But it's quickly becoming much more than a software library. It's now an ecosystem of tools for the deployment of machine learning in the cloud, on the phone, in the browser, on both generic and specialized hardware, TPU, GPU, and so on. Plus, there's a big emphasis on growing a passionate community of developers. Rajat, Jeff Dean, and a large team of engineers at Google Brain are working to define the future of machine learning with TensorFlow 2.0, which is now in alpha. I think the decision to open source TensorFlow was a definitive moment in the tech industry. It showed that open innovation could be successful, and inspired many companies to open source their code, to publish, and in general, engage in the open exchange of ideas. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter, @LexFridman, spelled F-R-I-D. And now, here's my conversation with Rajat Monga. You were involved with Google Brain since its start in 2011 with, uh, Jeff Dean. It started with DistBelief, the proprietary machine learning library, and turned into TensorFlow in 2014, the open source library. So, what were the early days of Google Brain like? What were the goals, the missions? How do you even proceed forward once there's so much possibilities before you?

    2. RM

      It was interesting back then, you know, when I started out, when you, you were even just talking about it. The idea of deep learning was interesting and intriguing in some ways. It hadn't yet taken off, but it held some promise and it showed some very promising and early results. I think the, the idea where Andrew and Jeff had started was, what if we can take this, what people are doing in research, and scale it to what Google has in terms of the compute power? And, uh, also put that kind of data together, what does it mean? And so far, the results have been if you scale the compute, scale the data, it does better, and would that work. And so that, that was the first year or two, "Can we prove that out, right?" And with DistBelief, when we started the first year, we got some early wins, which, which is always great.

    3. LF

      What were the wins like? What was the wins where you were, "There's some promise to this, this is gonna be good"?

    4. RM

      I think the two early wins were, one was speech that we collaborated very closely with the speech research team who was also getting interested in this, and the other one was on images where we, you know, the cat paper as we call it-

    5. LF

      Mm-hmm.

    6. RM

      ... that was covered by-

    7. LF

      Yeah.

    8. RM

      ... (laughs) uh, a lot of folks.

    9. LF

      And, uh, the birth of Google Brain was a- around neural networks. That was ... So, it was deep learning from the very beginning.

    10. RM

      That's right.

    11. LF

      That was the whole mission.

    12. RM

      Yeah.

    13. LF

      So, what, what, uh, in terms of scale, what was the sort of, uh, dream of what this could become? Like, what, were there echoes of this open source TensorFlow community that might be brought in? Was there a sense of TPUs? Was there a sense of like, machine learning is now gonna be at the core of the entire company, is g- going to grow into that direction?

    14. RM

      Yeah, I, I think ... So, so that was interesting, and like, if I think back to 2012 or 2011-

    15. LF

      Right.

    16. RM

      ... and first was, can we scale it? And in the year or so, we had started scaling it to hundreds and thousands of machines. In fact, we had some runs even going to 10,000 machines, and all of those shows great promise. Uh, in terms of machine learning at Google, the good thing was Google's been doing machine learning for a long time. Deep learning was new, but as we scaled this up, we showed that, yes, that was possible, and it was gonna impact lots of things, like we started seeing real products wanting to use this. Again, speech was the first. There were image things that photos came out of, and, and then many other products as well. So, so that was exciting. Um, as we went into that a couple of years, externally also, academia started to, you know, there was lots of push on, "Okay, deep learning's interesting. We should be doing more," and so on. And so, by 2014, we were looking at, "Okay, this is a big thing. It's gonna grow," and, uh, not just internally, externally as well. Yes, maybe Google's ahead of where everybody is, but there's a lot to do, so a lot of this start to make sense and come together.

    17. LF

      So, the decision to open source ... I was just chatting with, uh, with Chris Lattner about this. Uh, the decision to go open source for TensorFlow, I w- I would say is that for me personally seems to be one of the big seminal moments in all of software engineering ever.

    18. RM

      (laughs) .

    19. LF

      I think that's a ... When a large company like Google decides to take a large project that many lawyers might argue has a lot of IP, just decide to go open source with it, and in so doing, lead the entire world in saying, "You know what? Open innovation is, is, is a pretty powerful thing, and it's okay to do." (laughs) Uh, that, that was ... I mean, that's an, uh, that's an incredible, credible moment in time. So, do you remember those discussions happening?

    20. RM

      Yeah.

    21. LF

      Whether open source should be happening? What was that like?

    22. RM

      I would say, I think I ... So the, the initial idea came from Jeff, who was a big proponent of this. I think it came off of two big things. Uh, one was, research-wise, we were a research group. We were putting all our research out there, if you wanted to ... We were building on others' research, and we wanted to push the state of the art forward, and part of that was to share the research. That's how I think deep learning and machine learning has really grown so fast. So the next step was, okay, now would software help with that? And it seemed like there were existing a few libraries out there, Theano being one, Torch being another, and a few others, but they were all done by academia, and so the level was, was significantly different. The other one was, from a software perspective, Google had done lots of software or that we used internally, you know, and we published papers. Often, there was an open source project that came out of that, that somebody else picked up that paper and implemented, and they were very successful. Back then, it was like, "Okay, there's Hadoop, which has come off of tech that we built." We know the tech we've built is way better for a number of different reasons. We've, you know, invested a lot of effort in that. And turns out, we have Google Cloud and we are now not really providing our tech, but we are saying, "Okay, we have Bigtable, which is the original thing. We are gonna now provide HBase APIs on top of that, which isn't as good, but that's what everybody's used to." So there's, there's like, can we make something that is better and really just provide... Helps the community in lots of ways, but also helps push the right... a good standard forward.

    23. LF

      So how does cloud fit into that? There's a TensorFlow open source-

    24. RM

      Right.

    25. LF

      ... library. And how does the fact that you can, uh, use so many of the resources that Google provides in the cloud fit into that strategy?

    26. RM

      So, so TensorFlow itself is open and you can use it anywhere, right? And we wanna make sure that continues to be the case. On Google Cloud, we do make sure that there's lots of integrations with everything else, and we wanna make sure that it works really, really well there, so...

    27. LF

      You're leading the TensorFlow effort. Can you tell me the history and the timeline of TensorFlow project in terms of major design decisions? So like the open source decision, but really, uh, you know, what to include and not. There's this incredible ecosystem that I'd like to talk about.

    28. RM

      Yeah.

    29. LF

      There's all these parts, but what, uh, if you just... Some sample moments that, uh, de- defined what TensorFlow eventually became through its... I don't know if you're allowed to say history when it's just...

    30. RM

      (laughs)

  2. 15:00–30:00

    1. RM

      projects, people doing... You know, we don't think about documentation. Um, I think what that changed was instead of deep learning being a research thing-

    2. LF

      Mm-hmm.

    3. RM

      ... some people who were just developers could now suddenly take this out and do some interesting things with it-

    4. LF

      Mm-hmm.

    5. RM

      ... like, who had no clue what machine learning was before then. Um, and that, I think, really changed how things started to scale up in some ways, and- and, uh, pushed on it. Over the next few months, as we looked at, you know, "How do we stabilize things?" As we look at not just researchers now, we want stability, people who want to deploy things. That's how we started planning for 1.0. And there are certain needs for that perspective. And so, again, documentation comes up, um, designs, more kinds of things to put that together. And so, that was exciting to get that to a stage where more and more enterprises wanted to buy in and really get behind that. And I think post-1.0 and, you know, with the next few re- releases, that enterprise adoption also started to take off. I would say between the initial release and 1.0, it was... Okay, researchers, of course, uh, then a lot of hobbyists and early interest- people excited about this who started to get on board, and then over the 1.X thing, lots of enterprises.

    6. LF

      I imagine anything that's, you know, below 1.0-

    7. RM

      (laughs) .

    8. LF

      ... gets pressured to be, uh... Yeah, everybody probably wants something that's stable.

    9. RM

      Exactly.

    10. LF

      And, uh, do you have a sense now that TensorFlow is sta-

    11. RM

      (laughs) .

    12. LF

      Like, it feels like deep learning in general is extremely dynamic field, uh, so much is changing. Do you ever... And TensorFlow has been growing incredibly. Do you have a sense of stability-

    13. RM

      (laughs) .

    14. LF

      ... at the helm of this? I mean, I know you're in the midst of it, but...

    15. RM

      Yeah. It- it's... Yeah, it's... I think in the midst of it, it's often easy to forget what, uh, an enterprise wants and what some of the people, uh, uh, on that side want. There are still people running models that are three years old, four years old.

    16. LF

      Right.

    17. RM

      So, Inception is still used by tons of people. Res-... Even ResNet-50 is, what, a couple of years old now or more? But there are tons of people who use that-

    18. LF

      Yeah, it's ancient.

    19. RM

      ... and they're fine.

    20. LF

      Yeah.

    21. RM

      They don't need the last couple of per- bits of performance or quality. They want some stability and things that just work.

    22. LF

      Mm-hmm.

    23. RM

      And so there is value in providing that with that kind of stability, and- and making it really simpler, because that allows a lot more people to access it. And then there's the- the research crowd, which wants... Okay, they want to do these crazy things exactly like you're saying, right? Not just deep learning in the straight-up models that used to be there. They want, um, RNNs, and even RNNs are maybe... or they are transformers now.

    24. LF

      Right.

    25. RM

      And, uh, now it needs to combine with RL and GANs and so on. So- so there's definitely that area that... like, the boundary that's shifting and pushing the state of the art. Uh, but I think there's more and more of the past that's w- much more stable, and even stuff that was two, three years old is very, very usable by lots of people. So that makes it... That part makes it a lot easier.

    26. LF

      So I imagine, maybe you can correct me if I'm wrong, one of the biggest use cases is essentially taking something like ResNet-50 and doing some kind of, uh, transfer learning on a very particular problem that you have. I- it's basically probably what majority of the world does... and you want to make that as easy as possible, say?

    27. RM

      Yes. So, so I would say for the hobbyist perspective, that's the most common case, right? In fact, the apps on phones and stuff that you will see, the early ones, that's the most common case. I, I would say there are a couple of reasons for that. One is that everybody talks about that.

    28. LF

      Mm-hmm.

    29. RM

      It looks great on slides.

    30. LF

      Yeah.
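The transfer-learning workflow they discuss — reusing a pretrained backbone like ResNet-50 and training only a small new head — can be sketched in TensorFlow/Keras roughly as below. This is an illustrative sketch, not code from the episode: the 5-class head and input shape are made up, and `weights=None` keeps the sketch offline (in practice you would pass `weights="imagenet"` to get the pretrained features):

```python
import tensorflow as tf

# Backbone without its ImageNet classification head.
# weights=None keeps this sketch offline; pass weights="imagenet" in
# practice to download the pretrained features transfer learning relies on.
base = tf.keras.applications.ResNet50(
    weights=None, include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; only the new head trains

# A small head for a hypothetical 5-class problem.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=3)  # train_ds: your labeled image dataset
```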

  3. 30:00–45:00

    1. RM

      a crazy number of compute devices across the world. And we often used to think of ML and training and all of this as, okay, something you do either in the workstation or the data center or cloud, but we see things running on the phones, we see things running on really tiny chips. I mean, we had some demos at the developer summit. And so the way I've... think about this ecosystem is, how do we help get machine learning on every device that has a compute capability?

    2. LF

      Right.

    3. RM

      And that continues to grow. And, and so, uh, in some ways, this ecosystem has looked at, you know, various aspects of that and grown over time to cover more of those, and we continue to push the boundaries. In some areas, we've built, um, more tooling and things around that to help you. I mean, the first tool we started was TensorBoard, if you wanted to learn just the training piece. Um, TFX or TensorFlow Extended to really do your entire ML pipelines if you're, you know, care about all that production stuff. Uh, but then going to the edge, going to different kinds of things. And it's not just us now. Um, we are at a place where, you know, there are lots of libraries being built on top. So there are some for research, maybe things like TensorFlow Agents or TensorFlow Probability that started as research things or for researchers for focusing on certain kinds of algorithms, but they're also being deployed or used by, you know, production folks. And, uh, some have come from within Google, just teams across Google who wanted to, to build these things. Others have come from just the community, because there are different pieces that different parts of the community care about, and I, I see our goal as enabling even that, right? It's not we, we cannot and won't build every single thing. That just doesn't make sense. But if we can enable others to build the things that they care about, and there's n- a broader community that cares about that, and we can help encourage that, and, um, that, that's great. That really helps the entire ecosystem, not just those. Uh, one of the big things about 2.0 that we're pushing on is, okay, we have these so many different pieces, right?

    4. LF

      Mm-hmm.

    5. RM

      How do we help make all of them work well together? So there are few key pieces there that we're pushing on, uh, one being the core format in there and how we share the models themselves through SavedModel and Mo- TensorFlow Hub and so on, um, and, you know, a few of the pieces that we really put this together.
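The SavedModel sharing flow he mentions can be sketched as follows — a hypothetical toy module with an illustrative path, just to show the export/reload round trip that tools across the ecosystem build on:

```python
import tensorflow as tf

# A toy module standing in for any model worth sharing.
class Scale(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return self.w * x

m = Scale()
tf.saved_model.save(m, "/tmp/scale_savedmodel")  # illustrative path

# Reload elsewhere -- no original Python class needed.
restored = tf.saved_model.load("/tmp/scale_savedmodel")
result = restored(tf.constant(3.0))  # 2.0 * 3.0
```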

    6. LF

      I was very skeptical that that's... You know, when TensorFlow.js came out, it didn't seem... Or deeplearn.js, as it was earlier called.

    7. RM

      Yeah, that was the first.

    8. LF

      It seems like technically very difficult project. As a standalone, it's not as difficult, but as a thing that integrates into the ecosystem, it seems very difficult. So y- I mean, there's a lot of aspects of this you make it look easy, but, uh-

    9. RM

      (laughs)

    10. LF

      ... on the technical side, how many challenges have to be overcome here?

    11. RM

      A lot. (laughs)

    12. LF

      And still have to be overcome.

    13. RM

      Yes, yes.

    14. LF

      That's the question here too.

    15. RM

      There, there are lots of steps to it, right? And we reiterated over the last few years that there's a lot we've learned. I, I, yeah, n- Often when things come together well, things look easy, and that's exactly the point. It should be easy for the end user, but there are lots of things that go behind that. If I think about still, um, challenges ahead, there are... You know, we have a lot more devices coming on board, for example, from the hardware perspective. How do we make it really easy for these vendors to integrate with something like TensorFlow, right? Uh, so there's a lot of compiler stuff that others are working on. There are, uh, things we can do in terms of our APIs and so on that we can do. As we... You know, TensorFlow started as a very monolithic system, and to some extent it still is. There are less, lots of tools around it, but the core is still pretty large and monolithic. One of the key challenges for us to scale that out is how do we break that apart with clearer interfaces. It's, um... You know, in some ways, it's Software Engineering 101, but for a system that's now four years old, I guess, or more, and that's still rapidly evolving, and that we're not slowing down with, it's hard to, you know, change and modify and really break apart.

    16. LF

      Mm-hmm.

    17. RM

      It, it's sort of like, as people say, right, it's like, uh, changing the engine with the car running or trying to fix that.

    18. LF

      That's right.

    19. RM

      That's exactly what we're trying to do.

    20. LF

      So there, there's a challenge here because the downside of so many people, uh, being excited about TensorFlow and be- coming to rely on it in many of their applications is that you're kind of responsible. L- it's the technical debt. You're responsible for previous versions to some degree still working. So when you're trying to innovate, I mean, uh, it's probably easier to just start from scratch every few months. (laughs)

    21. RM

      (laughs) Absolutely. (laughs)

    22. LF

      You know? So do you feel the pain of that? Uh, 2.0 does break some back compatibility, but not too much. It seems like the conversion is pretty straightforward. Uh, do, do you think that's still important given how quickly deep learning is changing? Can you just... (laughs) The things that d- you've learned, can you just start over, or is there pressure to not?

    23. RM

      It, it's a, it's a tricky balance. So i- if it was, um, just a researcher writing a paper who a year later will not look at that code again, sure, it doesn't matter. Uh, there are a lot of production systems that rely on TensorFlow, both at Google and across the world, and people worry about this. I mean, they're- these systems run for a long time.

    24. LF

      Right.

    25. RM

      Uh, so it is important to keep that compatibility and so on. And yes, it does come with a huge cost. There's, uh... We have to think about a lot of things as we do new things and make new changes. Uh, I think the... It, it's a trade-off, right? You can, you might slow certain kinds of things down, but the overall value you're bringing because of that is, is much bigger because it's not just about breaking the person yesterday, it's also about ge- telling the person tomorrow that, "You know what? This is how we do things. We're not going to break you when you come on board-"

    26. LF

      Right.

    27. RM

      "... because there are lots of new people who are also going to come on board."

    28. LF

      Right.

    29. RM

      Um, a... You know, one way I, I like to think about this, and I always push the team to think about it as well, when you want to do new things, you want to start with a clean slate... design with a clean slate in mind, and then we'll figure out how to make sure all the other things work. And, yes, we do make compromises occasionally. But unless you design with the clean slate and not worry about that, you'll never get to a good place.

    30. LF

      Oh, that's brilliant. So even if you're do- you are responsible when in the idea stage, when you're thinking of new-

  4. 45:00–1:00:00

    1. RM

      we've spent a lot of time in making sure we can accept those contributions well, we can help the contributors in, in adding those, putting the right process in place, getting the right kind of community, welcoming them, and so on. Like over the last year, we've really pushed on transparency. That, that's important for an open source project. Uh, people want to know where things are going, and we're like, "Okay, here's a process where you can do that. Here are our RFCs and so on." Uh, so thinking th- through ... There are lots of community asp- aspects that come into that you can really work on. As a small project, it's maybe easy to do, because there's like two developers and, and you can do those. A- as you grow, putting more of these processes in place, thinking about the documentation, thinking about, "What do developers care about? What kind of tools would they want to use?" All of these come into play, I think.

    2. LF

      So one, one of the big things, I think, that feeds the TensorFlow fire is, uh, people building something on TensorFlow. And, uh, you know, some, uh, implement a particular architecture that does something cool and useful, and then put it, that on GitHub. And so it just feeds this, uh, this growth. Do you s- have a sense that with 2.0 and 1.0 that there may be a little bit of a partitioning like there is with Python 2 and, and 3? That there will be a codebase in, in the older versions of TensorFlow that will not be as compatible easily? Or d- are you pretty confident that this kind of, uh, conversion is pretty natural and easy to do?

    3. RM

      So, we're definitely working, uh, hard to make that very easy to do. There is lots of tooling that we talked about at the developer summit this week, and we'll continue to invest in that tooling. It's ... Um, you know, when you think of these significant version changes, that's always a risk.

    4. LF

      Yeah.

    5. RM

      And we, we are really pushing hard to make that transition very, very smooth. I, I think ... So, so at some level people want to move and they see the value in the new thing. They don't want to move just because it's a new thing. I mean some people do, but most people want a, a really good thing. And I think over the next few months, as people start to see the value, we'll definitely see that shift happening. So I'm, I'm pretty excited and confident that we, we'll see people moving. Um, as you said earlier, this field is also moving rapidly, so that'll help because we can do more things, and, you know, all the new things will clearly happen in 2.X, so people will have lots of good reasons to move.

    6. LF

      So what do you think, uh, TensorFlow 3.0 looks like?

    7. RM

      (laughs) .

    8. LF

      Is th- is there a ... Are things happening so crazily that even at th- the end of this year seems impossible to plan for? Or is it possible to plan for the next five years?

    9. RM

      I, I think it's tricky. There are some things that we can expect in terms of, okay, change. Yes, change is gonna happen.

    10. LF

      (laughs) .

    11. RM

      (laughs) . Uh, are, are, are there some going, things going to stick around and some things not gonna stick around? I, I would say the, the basics of deep learning, the, you know, say, convolutional models or the, the basic kind of things, they'll probably be around in some form still in five years. Uh, will RL and GANs stay? Very likely, based on where they are. Will we have new things? Probably, but those are hard to predict and ... Some ... Directionally some things that we can see as ... You know, in, in things that we're starting to do, right, with some of our projects right now, is, uh, just 2.0 combining Eager Execution and, and Graphs where we're starting to make it more like just your natural programming language. You're not trying to program something else. Uh, similarly with Swift for TensorFlow, we are taking that approach. Can you do something ground-up, right? So, so some of those ideas seem like, okay, that's the right direction. In five years, we expect to see more in that area. Um, other things we don't know is, will hardware accelerators be the same? Will we be able to train with, uh, four bits instead of 32 bits? (laughs) Uh-
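The "combining Eager Execution and Graphs" direction he describes became `tf.function` in TensorFlow 2.0: you write ordinary eager Python and opt into a traced graph when you want deployability or speed. A minimal sketch (the function itself is made up for illustration):

```python
import tensorflow as tf

# Plain eager Python: runs op by op, easy to debug.
def square_sum(x, y):
    return tf.reduce_sum(x * x + y * y)

# The same code, traced into a graph for performance/deployment.
fast_square_sum = tf.function(square_sum)

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 4.0])
eager_result = square_sum(x, y)       # eager execution
graph_result = fast_square_sum(x, y)  # graph execution, same answer
```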

    12. LF

      And I, and I think the TPU side of things is exploring that. I mean, TPU is already on version three. It seems that the evolution of TPU and TensorFlow are sort of, uh, they're co-evolving, almost, uh, in terms of both are learning from each other and from the community and, uh, from the applications where the biggest benefit is achieved.

    13. RM

      That's right.

    14. LF

      You've been trying to sort of, with, with Eager, with Keras, to make TensorFlow as accessible and easy to use as possible. What do you think, for beginners, is the biggest thing they struggle with? Have you encountered that, or is it basically what Keras is solving, and that Eager, like we talked about?

    15. RM

      Yeah.

    16. LF

      Uh, struggle with.

    17. RM

      For, for some of them, like you said, right, the beginners want to just be able to take some image model, they don't care if it's Inception or ResNet or something else, and do some training or transfer learning on their kind of model. Being able to make that easy is important.

    18. LF

      Mm-hmm.

    19. RM

      So, I, i- in some ways, if you do that by providing them simple models with, say, in Hub or so on, they don't care about what's inside that box, but they want to be able to use it. So, so we are pushing on, I think, different levels. If you look at just a component that you get which has the layers already smooshed in, the, the beginners probably just want that.

    20. LF

      Mm-hmm.

    21. RM

      Then the next step is, okay, look at building layers with Keras. If you go out to research, then they are probably writing custom layers themselves or doing their own loops. So there's a whole spectrum there.
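That spectrum — composing standard Keras layers at one end, writing your own training loop at the other — can be sketched roughly like this (the tiny model and hyperparameters are illustrative, not from the episode):

```python
import tensorflow as tf

# Middle of the spectrum: composing standard Keras layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Research end: a hand-written training step with GradientTape.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal((16, 4))
y = tf.zeros((16, 1))
first_loss = float(train_step(x, y))  # one custom optimization step
```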

    22. LF

      And then providing the pretrained models seems to really decrease the time from you trying to start. So you, you could basically in a Colab notebook achieve what you need. Uh, so I'm, I'm basically answering my own question because-

    23. RM

      Yep.

    24. LF

      ... I think what TensorFlow delivered on recently is, uh, is, is trivial for beginners. So, I was just wondering if there was, um, other pain points you're trying to ease, but I'm not sure there would be.

    25. RM

      No, tho- those are probably the big ones. I mean, I, I see high schoolers doing a whole-

    26. LF

      Right.

    27. RM

      ... bunch of things now, which is pretty amazing.

    28. LF

      It's, it's both amazing and terrifying. So, uh-

    29. RM

      (laughs) Yes.

    30. LF

      ... uh, in, in a sense that when they grow up, um, it's, uh, some incredible ideas will be coming from them. So there's certainly a technical aspect to your work, but you also have a management aspect to your role with TensorFlow, leading the project, uh, large number of developers and people. So what do you look for in a good team? What do you think? You know, Google has been at the forefront of exploring what it takes to build a good team, and TensorFlow is one of the most cutting-edge technologies in the world. So in this context, what do you think makes for a good team?

  5. 1:00:00–1:10:42

    1. LF

      Or is there still a balance to where, I mean, it's the less deadline... You had the Dev Summit-

    2. RM

      Yeah.

    3. LF

      ... and they, they came together incredibly. Uh, it looked like there was a lot of moving pieces and so on. So that, uh, did that deadline make people rise (laughs) to the occasion, releasing TensorFlow 2.0 Alpha?

    4. RM

      Yeah.

    5. LF

      I'm sure that was done last minute as well.

    6. RM

      (laughs)

    7. LF

      I mean, like, the, uh, the up to the-

    8. RM

      Yeah. Yes.

    9. LF

      Up to the, up to the, the last point.

    10. RM

      Yes, yes. Again, you know, it's one of those things that's, uh, you need to strike the good balance.

    11. LF

      Right.

    12. RM

      There's some value that deadlines bring, that does bring a sense of urgency to get the right things together instead of, you know, getting the perfect thing out. You need something that's good and works well. And the team definitely did a great job in putting that together. So I was very amazed and excited by everything, how that came together. Uh, that said, across the year, we try not to put artificial deadlines. We focus on, uh, key things that are important, figure out what that, how much of it's important, and, and we are developing in the open what, you know, internally and externally, everything's available to everybody so you can pick and look at where things are. Uh, we do releases at a regular cadence so fine if something doesn't necessarily end up with this month, it'll end up in the next release in a month or two, uh, and that's okay, but we want to get... like, keep moving as fast as we can in these different areas, um, because we can iterate and improve on things. Sometimes it's okay to put things out that aren't fully ready. We'll make sure it's clear that, okay, this is experimental, but it's out there if you want to try and give feedback, that's very, very useful. I think that quick cycle and quick iteration is important. That's what we often focus on rather than, "Here's a deadline where you get everything else."

    13. LF

      Is 2.0, is there pressure to make that stable? Or, like for example, WordPress 5.0 just came out with- and there was no pressure to, uh... It was a lot of build-up. They delivered it way too late, but- and they said, "Okay, well, but we're gonna release a lot of updates really quickly to improve it." This, do you see TensorFlow 2.0 in that same kind of way or is there this pressure to once it hits 2.0, once you get to the release candidate and then you get to the final, that that's going to be the- the stable thing?

    14. RM

      So it's going to be stable in, just like 1.X was, where every API that's there is going to remain and work. Uh, it doesn't mean we can't change things under the covers. It doesn't mean we can't add things. So there's still a lot more to, for us to do, and we're going to do even more releases. So in that sense, there's still... I don't think we'll be done in, like, two months when we release this.

    15. LF

      (laughs) I don't know if you can say, but is there, you know, there's not external deadlines for TensorFlow 2.0, but is there internal deadlines, uh, artificial or otherwise, that you try and just set for yourself? Is, or is it whenever it's ready?

    16. RM

      So we want it to be a great product, right? And that's a big important piece for us. TensorFlow is already out there. We have, you know, 41 million downloads for 1.X. So it's not like-

    17. LF

      It's pretty good. Pretty good.

    18. RM

      ... we have to have this... Yeah, yeah, exactly. So it's not like... A lot of the features that we've, you know, really polishing and putting them together are there. We don't have to rush that just because. So in that sense, we want to get it right and really focus on that. Uh, that said, we have said that we are looking to get this out in the next few months, in the next quarter, and we, you know, as far as possible, we'll definitely try to make that happen.

    19. LF

      Yeah, my, my favorite line was, "Spring is a relative concept."

    20. RM

      (laughs)

    21. LF

      I love it.

    22. RM

      (laughs) Yes.

    23. LF

      Spoken like a true developer. So, you know, something I'm really interested in, in your previous line of work, is before TensorFlow, you led a team at Google on search ads.

    24. RM

      Mm-hmm.

    25. LF

      I think, uh, this is like, this is a very interesting topic on- on every level, on a technical level, because at their best, ads connect people to the things they want and need.

    26. RM

      Yep.

    27. LF

      So it's... And, and at their worst, they're just these things that annoy the heck out of you, uh, to the point of ruining the entire user experience of whatever you're actually doing.

    28. RM

      Mm-hmm.

    29. LF

      Uh, so they have a bad rep, I guess. Uh, and s- at the- on the other end, so that's connecting users to the thing they need and want, is a beautiful opportunity for machine learning to shine.

    30. RM

      Right.

Episode duration: 1:10:57


Transcript of episode NERNE4UThHU
