Andrew Ng: Deep Learning, Education, and Real-World AI | Lex Fridman Podcast #73
- 0:00 – 2:23
Introduction
- LFLex Fridman
The following is a conversation with Andrew Ng, one of the most impactful educators, researchers, innovators, and leaders in the artificial intelligence and technology space in general. He co-founded Coursera and Google Brain, launched DeepLearning.AI, Landing AI, and the AI Fund, and was the chief scientist at Baidu. As a Stanford professor and with Coursera and DeepLearning.AI, he has helped educate and inspire millions of students, including me. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend The Ascent of Money as a great book on this history. Debits and credits on ledgers started over 30,000 years ago. The US dollar was created over 200 years ago. And Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to, and just might, redefine the nature of money.
So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now here's my conversation with Andrew Ng.
- 2:23 – 5:05
First few steps in AI
- LFLex Fridman
The courses you taught on machine learning at Stanford and later on Coursera that you co-founded have educated and inspired millions of people. So let me ask you, what people or ideas inspired you to get into computer science and machine learning when you were young? When did you first fall in love with the field, is another way to put it?
- ANAndrew Ng
Growing up in Hong Kong and Singapore, I started learning to code when I was five or six years old. Uh, at that time, I was learning the BASIC programming language and I would take these books and, you know, they'll tell you, "Type this program into your computer." So I'd type that program into my computer. And as a result of all that typing, uh, I would get to play these very simple shoot-'em-up games that, that, you know, I had implemented on my, on my little computer. So I thought it was fascinating as a young kid, uh, that I could write this code that was really just copying code from a book into my computer to then play these cooler video games. Another moment for me was, um, when I was a teenager and my father, who's a doctor, was reading about expert systems and about neural networks. So he got me to read some of these books and, um, I thought it was really cool that you could make a computer start to exhibit intelligence. Then I remember doing an internship while I was in high school, uh, this is in Singapore, where I remember doing a lot of photocopying and, and, uh, being an office assistant. Um, and the highlight of my job was when I got to use the shredder. So the teenager me remembers thinking, "Boy, this is a lot of photocopying. If only we could write software, build a robot, something to automate this, maybe I could do something else." So I think a lot of my work since then, um, has centered on the theme of automation. Even the way I think about machine learning today, we're very good at writing learning algorithms that can automate things that people can do, um, or even launching the first, uh, MOOCs, Massive Open Online Courses, that later led to Coursera. I was trying to automate what could be automatable in how I was teaching on campus.
- LFLex Fridman
Process of education, tried to automate parts of that to make it more sort of t- to have more impact from a single teacher, single educator.
- ANAndrew Ng
Yeah. I, I, I felt, you know, teaching at Stanford, uh, I was teaching machine learning to about 400 students a year at the time, and, um, I found myself filming the exact same video every year, like telling the same jokes-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... in the same room. And I thought, "Why am I doing this? Why don't I just take last year's video and then I can spend my time building a deeper relationship with students?"
- LFLex Fridman
Yes.
- ANAndrew Ng
So that process of thinking through how to do that, that led to the first, first MOOCs that we launched.
- LFLex Fridman
(laughs) And then you have more time to write new jokes.
- 5:05 – 16:07
Early days of online education
- LFLex Fridman
Are there favorite memories from your early days at Stanford teaching thousands of people in person and then millions of people, uh, online?
- ANAndrew Ng
You know, teaching online, what not many people know was that a lot of those videos were shot between the hours of 10:00 PM and 3:00 AM. Um-
- LFLex Fridman
(laughs) Yeah.
- ANAndrew Ng
A lot of times, uh, we, we, w- w- launching the first MOOCs out of Stanford, we'd already announced the course, about 100,000 people had signed up. Uh, we just started to write the code and we had not yet actually filmed the videos. So, you know, a lot of pressure, 100,000 people waiting for us to produce the content. So many Fridays, Saturdays, um, I would go out, have dinner with my friends, uh, and then I would think, "Okay, do I wanna go home now or do I wanna go to the office to film videos?" And ... the thought of being able to help 100,000 people potentially learn machine learning, fortunately, that, um, made me think, "Okay, I'm gonna go to my office," go to my tiny little recording studio. I would adjust my Logitech webcam, adjust my Wacom tablet, make sure my lapel mic was on-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... and then I would start recording, often until 2:00 AM or 3:00 AM. I think I'm fortunate that it doesn't, doesn't show that-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... it was recorded that late at night, but, uh, it was really inspiring, the, the, the thought that we could create content to help so many people learn about machine learning.
- LFLex Fridman
How did that feel? The fact that you're probably somewhat alone, maybe a couple of friends, recording with a Logitech webcam and kind of going home alone at 1:00 or 2:00 A- AM at night and knowing that that's gonna reach sort of thousands of people, eventually millions of people is (laughs) ... What's that feeling like? I mean, is there a feeling of just satisfaction, of pushing through?
- ANAndrew Ng
I think it's humbling and I wasn't thinking about what I was feeling. I think one thing we, uh, that I'm proud to say we got right from the early days was, um, I told my whole team back then that the number one priority is to do what's best for learners, do what's best for students. And so when I went to the recording studio, the only thing on my mind was, "What can I say? How can I design my slides? What do I need to draw, right, to make these concepts as clear as possible for learners?" Um, I think, you know, I- I- I've seen sometimes instructors, it's tempting to, "Hey, let's talk about my work. Maybe if I teach you about my research, someone will cite my papers a couple more times." And I think one of the things we got right launching the first few MOOCs and later building Coursera was putting in place that bedrock principle of let's just do what's best for learners and forget about everything else. And, and I think that that as a guiding principle turns out to be really important to the, to the rise of the MOOC movement.
- LFLex Fridman
And the kind of learner you imagined in your mind is as, as broad as possible, as global as possible. So really try to reach as many people interested in machine learning and AI as possible.
- ANAndrew Ng
I really wanna help anyone that had an interest in machine learning, uh, to break into the field. And, and I- I think sometimes, uh, eventually people ask me, "Hey, why are you spending so much time explaining gradient descent?"
- LFLex Fridman
(laughs) .
- ANAndrew Ng
And then, and my answer was, um, if I look at what I think the learner needs and would benefit from, I felt that having that, um, a good understanding of the foundations, kind of back to the basics, would put them in a better state to then build on a long-term career. So try to consistently make decisions on that principle.
- LFLex Fridman
So one of the things you actually revealed to the narrow AI community at, at the time and, and to the world, is that the amount of people who are actually interested in AI is much larger than we imagined. By you teaching the class and how popular it became-
- ANAndrew Ng
Yes.
- LFLex Fridman
... it showed that, wow, this isn't just a small community of-
- ANAndrew Ng
Yes.
- LFLex Fridman
... sort of people who go to NeurIPS and, and it's, it's much bigger. It's developers, it's people from all over the world, from... (laughs) . I mean, I'm Russian, so-
- ANAndrew Ng
Yes.
- LFLex Fridman
... there's, everybody in Russia is really interested. There's a huge number of programmers who are interested in machine learning. India, China, uh, South America, everywhere. The- there's just millions of people who are interested in machine learning. So how big, do you get a sense, is the number of people that are interested, from your perspective?
- ANAndrew Ng
I think the number has grown over time. I, I think it's one of those things that maybe it feels like it came out of nowhere, but as an insider building it, it took years. It's one of those overnight successes that took years-
- LFLex Fridman
Yeah.
- ANAndrew Ng
... to, to, to get there. My first foray into this type of online e- education was when we were filming my Stanford class and sticking the videos on YouTube and, and some other things. We had uploaded the homeworks and so on, but, you know, basically it was the one-hour, 15-minute videos that we put on YouTube. Um, and then we had four or five other versions of websites that I had built, uh, most of which you would never have heard of because they reached small audiences. But that allowed me to iterate, allowed my team and me to iterate, to learn what are the ideas that work and what doesn't. Uh, for example, one of the features I was really excited about and really proud of was building this website where multiple people could be logged into the website at the same time. So today, if you go to a website, you know, if you're logged in and then I wanna log in, you need to log out if it's the same browser, on the same computer.
- LFLex Fridman
Yeah.
- ANAndrew Ng
But I thought, well, what if two people, say you and me, were watching a video together in front of a computer? What if a website could have you type your name and password, have me type my name and password, and then now the computer knows both of us are watching together and it gives both of us credit for anything we do as a group. Uh, implemented this feature, rolled it out in a high school in San Francisco. Uh, we had about 20-something users, um, uh, worked with a teacher there at Sacred Heart Cathedral Prep. The teacher was great. Um, and guess what? Zero people used this feature.
- LFLex Fridman
(laughs) .
- ANAndrew Ng
Uh, it turns out people studying online, they want to watch the videos by themselves so you can play back, pause at your own speed rather than in groups. So that was one example of a tiny lesson learned, uh, out of many that allowed us to hone into the set of features. Yeah.
- LFLex Fridman
And it sounds like a brilliant feature. So I guess the lesson to take from that is, uh, you (laughs) ... There's, uh, something that looks amazing on paper and then nobody uses it, doesn't actually have the, the impact that you think it might have.
- ANAndrew Ng
Yeah.
- 16:07 – 17:46
Teaching on a whiteboard
- LFLex Fridman
We live in a world where most courses and talks have slides, PowerPoint, Keynote, and yet you famously often still use a marker and a whiteboard. The simplicity of that is, uh, compelling and, for me at least, fun to watch.
- ANAndrew Ng
Thank you.
- LFLex Fridman
So let me ask, why do you like using a marker and whiteboard even on the biggest of stages?
- ANAndrew Ng
I think it depends on the concepts you wanna explain. Uh, for mathematical concepts, it's nice you can build up the equation one piece at a time, uh, and the whiteboard marker or the pen and stylus is a very easy way, you know, to build up an equation, build up a complex concept one piece at a time, uh, while you're talking about it. And sometimes that enhances understandability. Um, the, the downside of writing is that it's slow. And so if you want a long sentence, it's very hard to write that. So I think there are pros and cons, and sometimes I use slides and sometimes I use a, a whiteboard or a stylus.
- LFLex Fridman
The slowness of a whiteboard is also its upside 'cause it forces you to reduce everything to the basics.
- ANAndrew Ng
I see.
- LFLex Fridman
So s- some of, uh, some of your talks that involve the whiteboard, I, I mean it's, there's really not much, you go very slowly and you really focus on the most simple principles, and that's, uh, that's a beautiful... That enforces a kind of a minimalism of ideas that I think is s- surprisingly, at least for me, is, is great f- for education. Like, a great talk, I think, is not one that has a lot of content.
- ANAndrew Ng
I see.
- LFLex Fridman
A great talk is one that just clearly says a few simple ideas. And I think you... (laughs) The whiteboard somehow enforces that.
- 17:46 – 23:17
Pieter Abbeel and early research at Stanford
- LFLex Fridman
Pieter Abbeel, who's now one of the top roboticists and reinforcement learning experts in the world, was your first PhD student. So I, I bring him up just because I kind of imagine this, um ... this must've been an interesting time in your life. Uh, do you have any favorite memories of working with Pieter as, uh, your, your first student in those uncertain times? Especially before deep learning really, um, really sort of, uh, blew up. Any favorite memories from those times?
- ANAndrew Ng
Yeah. Um, I was really fortunate to have had Pieter Abbeel as my first PhD student. Um, and I think even my long-term professional success builds on early foundations or early work that, that, that Pieter was so critical to. So, I was really grateful to him, uh, for working with me. Um, you know, what not a lot of people know is just how hard research was and, and still is. Um, Pieter's PhD thesis was using reinforcement learning to fly helicopters. Uh, and so, you know, actually even today, the website heli.stanford.edu, H-E-L-I.stanford.edu, is still up. You can watch videos of us using reinforcement learning to make a helicopter fly upside down, fly loops for us. It, it's just cool.
- LFLex Fridman
It's one of the most incredible robotics videos ever, so-
- ANAndrew Ng
Oh, you've seen it?
- LFLex Fridman
... people should watch it.
- ANAndrew Ng
Oh, yeah. Thank you.
- LFLex Fridman
It's inspiring. That's from, uh, like 2008, or '07, or '06.
- ANAndrew Ng
Yeah.
- LFLex Fridman
Like that range.
- ANAndrew Ng
Something like that. It was like, yeah, so it was over 10 years old.
- LFLex Fridman
That was really-
- ANAndrew Ng
Um-
- LFLex Fridman
... inspiring to a lot of people, yes.
- ANAndrew Ng
And what not many people see is how hard it was. Uh, so Pieter and, um, Adam Coates and Morgan Quigley and I were working on various versions of the helicopter and a lot of things did not work. Uh, for example, turns out one of the hardest problems we had was when the helicopter's flying around upside down, doing stunts, how do you figure out the position? How do you localize the helicopter? So we wanted to try all sorts of things. Uh, having one GPS unit doesn't work 'cause you're flying upside down, the GPS unit's facing down so you can't see the satellites. So we tried, um, we, we, we experimented with trying to have two GPS units, one facing up, one facing down, so if you flip-
- LFLex Fridman
Mm-hmm.
- ANAndrew Ng
... over. That didn't work 'cause the downward-facing one couldn't synchronize if you're flipping quickly. Um, Morgan Quigley was exploring this crazy complicated configuration of specialized hardware to interpret GPS signals, uh, looking into FPGAs, completely insane. Spent about a year working on that. Um, didn't work. So I remember Pieter, great guy, him and me, you know, sitting down in my office, looking at some of the latest things we had tried that didn't work and saying, you know, "Darn it, like what now?" Because, because we'd tried so many things and it, and it just didn't work. Um, in the end, uh, what we did, and Adam Coates was, was crucial to this, was, uh, put cameras on the ground and used cameras on the ground to localize the helicopter. And that solved the localization problem so that we could then focus on the reinforcement learning and inverse reinforcement learning techniques to then actually make the helicopter fly. Um, and, you know, I'm reminded when, when, when I was doing, um, this work at Stanford, around that time, there was, uh, a lot of reinforcement learning theoretical papers, but not a lot of practical applications. So the, uh, autonomous helicopter work for flying helicopters, uh, was one of the few, you know, practical applications of reinforcement learning at the time, which, which caused it to become pretty well known. Um, I, I, I feel like we, we might have almost come full circle today. There's so much buzz, so much hype, so much excitement-
- LFLex Fridman
Yeah.
- ANAndrew Ng
... about reinforcement learning, but again, we're hunting for more applications of all of these great ideas that, that the community's come up with.
- LFLex Fridman
What was the drive, sort of in the face of the fact that most people are doing theoretical work, what motivated you in the uncertainty and the challenges to get the helicopter, sort of to do the, the applied work? To get th- the actual system to work? Yeah, in the face of fear, uncertainty, sort of, uh, the setbacks that you mentioned for localization?
- ANAndrew Ng
I like stuff that works. I, I, I, I love-
- LFLex Fridman
(laughs) In the physical world, so like it's, it's back to the shredder and the... (laughs)
- ANAndrew Ng
Yeah. You know, I, I, I like theory but when I work on theory myself, and this is personal taste, I'm not saying anyone else should do what I do. But when I work on theory, I personally enjoy it more if I feel that my th- the work I do will influence people, have positive impact or help someone. Um, I remember when, uh, many years ago, I was speaking with a mathematics professor and kind of just said, "Hey, why do you do what you do?" Uh, and, and, and he said, he, he, he actually, you know, he had stars in his eyes when he answered. And this mathematician, um, n- not from Stanford, different university, he said, "I do what I do because it helps me to discover truth and beauty in the universe."
- LFLex Fridman
Yeah.
- ANAndrew Ng
He had stars in his eyes when he said that.
- LFLex Fridman
Yeah.
- ANAndrew Ng
And I thought, "That's great." Um, I don't wanna do that.
- LFLex Fridman
(laughs)
- ANAndrew Ng
I think it's great that someone does that, fully support the people that do it, a lot of respect for people that... But I am more motivated when I can see a line to how the work that my teams and I are doing helps people. Um, the world needs all sorts of people. I'm just one type. I don't think everyone should do things th- the same way as I do. But I, I, I, when I delve into either theory or practice, if I personally have conviction, you know, that here's a path for it to help people, uh, I, I, I find that more satisfying to have that conviction.
- LFLex Fridman
That's, that's your path.
- 23:17 – 32:55
Early days of deep learning
- ANAndrew Ng
Yeah.
- LFLex Fridman
You were a proponent of deep learning before it gained widespread acceptance. What did you see in this field that gave you confidence? What, what was your thinking process like in that first decade of the, I don't know what that's called, 2000s, the aughts?
- ANAndrew Ng
Yeah. I can tell you the thing we got wrong and the thing we got right. The thing we really got wrong was the importance of, uh, uh, the early importance of unsupervised learning. So, uh, early days of Google Brain, we put a lot of effort into unsupervised learning rather than supervised learning. And there was this argument, actually I think it was around, um, 2005, uh, after, uh, you know, NeurIPS, at that time called NIPS but now NeurIPS, had ended ... and, uh, Geoff Hinton and I were sitting in the cafeteria outside, you know, the conference. We had lunch, we were just chatting, and Geoff pulled out this napkin and he started sketching this argument on the, on the napkin. Well, it was very compelling so I'll, I'll repeat it. Um, the human brain has about, uh, 100 trillion, so that's 10 to the 14, synaptic connections. Uh, you will live for about 10 to the 9 seconds. Uh, that's 30 years. You actually live for two, two, two times 10 to the 9, maybe three times 10 to the 9 seconds. So just let's say 10 to the 9.
- LFLex Fridman
Mm-hmm.
- ANAndrew Ng
So if each synaptic connection, each weight in your brain's neural network, has just a one-bit parameter, that's 10 to the 14 bits you need to learn in up to 10 to the 9 seconds-
- LFLex Fridman
Mm-hmm.
- ANAndrew Ng
... of your life. So via this simple argument, which has a lot of problems, it's very simplified-
- LFLex Fridman
Mm-hmm.
- ANAndrew Ng
... that's 10 to the 5 bits per second you need to learn in your life. And, um, I have a one-year-old daughter. Uh, I am not pointing out 10 to the 5 bits per second of labels to her. So, and, and I think I'm a, I'm a very loving parent, but I'm just not gonna do that, right?
- LFLex Fridman
(laughs) Yeah.
- ANAndrew Ng
Um, so from this, you know, very crude, definitely problematic argument, there's just no way that most of what we know is through supervised learning. The way you can get so many bits of information is from sucking in images, audio, just experiences in the world. Um, and so that argument, uh, and, and there are a lot of known flaws to this argument, you know, we shouldn't go, go into, really convinced me that there's a lot of power to unsupervised learning. Um, so that was the part that we actually maybe, maybe got wrong. I, I still think supervised learning is really important, but we, uh, but, but in the early days, you know, 10, 15 years ago, a lot of us thought that was the path forward.
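Hinton's napkin arithmetic, as Ng retells it, can be checked in a few lines. This is just the back-of-the-envelope calculation from the conversation, using the deliberately crude figures quoted above:

```python
# Rough figures quoted in the conversation (deliberately crude):
synapses = 10 ** 14          # ~100 trillion synaptic connections
bits_per_synapse = 1         # assume each weight stores one bit
lifetime_seconds = 10 ** 9   # ~30 years, rounded down

# Bits of information the brain would need to acquire per second of life.
total_bits = synapses * bits_per_synapse
bits_per_second = total_bits // lifetime_seconds

print(bits_per_second)  # 100000, i.e. 10^5 bits per second
```

At 10 to the 5 bits per second, explicit labels alone clearly cannot account for what is learned, which is the point of the argument.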
- LFLex Fridman
Oh, so you're, you're saying that that, that perhaps was the wrong intuition for the time.
- ANAndrew Ng
For the time. That, that, that was the part we got wrong. The part we got right was the importance of scale. So, uh, Adam Coates, uh, another, uh, wonderful person, fortunate to have worked with him, um, he was in my group at Stanford at the time, and Adam had run these experiments at Stanford showing that the bigger we train a, you know, learning algorithm, the better its performance. And it was based on that... Uh, uh, uh, there was a graph that Adam generated, you know, uh, where the x-axis, y-axis, lines going up and to the right. So the bigger you ma- make this thing, the better its performance. Accuracy is the vertical axis.
- LFLex Fridman
Mm-hmm.
- ANAndrew Ng
So it's really based on that chart that Adam generated that gave me the conviction that you could scale these models way bigger than what we could on the few CPUs which is what we had at Stanford, that we would get even better results. And it was really based on that one figure that Adam generated-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... that gave me the conviction, uh, to go with Sebastian Thrun to pitch, you know, starting, starting a project at, at Google, which became the Google Brain project.
- LFLex Fridman
Google Brain, you co-founding Google Brain, and there the intuition was scale will bring performance for the system, so we should chase a larger and larger scale.
- ANAndrew Ng
Correct.
- LFLex Fridman
And I think people don't r- don't realize how, how groundbreaking of a... it's simple, but it's a groundbreaking idea, that bigger data sets will, will result in better performance.
- ANAndrew Ng
It was controver- it was controversial at the time. Uh, some of my well-meaning friends, you know, senior people in the machine learning community I won't name, but who's-
- LFLex Fridman
Okay.
- ANAndrew Ng
... peop- still people, some, some of whom we, we know. Uh, my well-meaning friends came and were trying to give me friendly advice like, "Hey, Andrew, why are you doing this? This is crazy. The innovation is in the neural network architecture. Look at these architectures we're building. You just wanna go for scale? Like, this is a bad career move." So, so my, my well-meaning friends, you know, were trying to... some of them were trying to talk me out of it. Um, uh, but I find that if you wanna make a breakthrough, you sometimes have to have conviction and do something before it's popular, since that lets you have a bigger impact.
- LFLex Fridman
Le- let me ask you just then a small tangent on that topic. I find myself, uh, arguing with people saying that greater scale, especially in the context of active learning, so s- very carefully selecting the data set, but growing the scale of the data set, is going to lead to even further breakthroughs in deep learning. And there's currently s- pushback at that idea, that larger data sets are no longer the... so you wanna increase the efficiency of learn- you ma- you wanna make better learning mechanisms. And I personally believe that just bigger data sets will still, with the same me- learning methods we have now, will result in better performance. What's your intuition at this time on those... I, on the... this dual side is doing... need to come up with better architectures for learning, or can we just get bigger, better data sets that will improve performance?
- ANAndrew Ng
I think both are important, and it's also problem dependent. So-
- LFLex Fridman
Right.
- ANAndrew Ng
... for a few data sets we may be approaching, you know, a Bayes error rate or approaching or surpassing human level performance and, and then there's that theoretical ceiling that we will never surpass, the Bayes error rate. Um, but then I think there are plenty of problems where, where we're still quite far from either human level performance or from the Bayes error rate and, uh, bigger data sets with neural networks w- without further algorithm innovation will be sufficient to take us further. Um, but on the flip side, if we look at the recent breakthroughs using, you know, transformer networks for language models, uh, it was a combination of novel architecture, uh, but also scale had a lot to do with it. If we look at what happened with, you know, GPT-2 and BERT, I think scale was a large part of the story.
- LFLex Fridman
Yeah, that's, that's not often talked about is the, the scale of the data set it was trained on and the quality of the data set because there's some, uh... So it was like Reddit threads that had... uh, they were upvoted highly, so there's already some weak supervision on a very large data set that people don't often talk about, right?
- ANAndrew Ng
I find that today we have pr- uh, maturing processes for managing code, things like Git, right? Version control. Uh, took us a long time to, to evolve the good processes. I, I, I remember, I remember when my friends and I were emailing each other C++ files in email, you know?
- LFLex Fridman
Yeah.
- 32:55 – 33:23
Quick preview: deeplearning.ai, landing.ai, and AI fund
- LFLex Fridman
You're now running three efforts, the AI Fund, Landing AI, and DeepLearning.AI. As you've said, the AI Fund is involved in creating new companies from scratch, Landing AI is involved in helping already established companies do AI, and DeepLearning.AI is for education of everyone else, or of individuals interested in getting into the field and excelling in it. So let's perhaps talk about each of these areas. First,
- 33:23 – 45:55
deeplearning.ai: how to get started in deep learning
- LFLex Fridman
DeepLearning.AI. How, the basic question, how does a person interested in deep learning get started in the field?
- ANAndrew Ng
DeepLearning.AI is working to create courses to help people break into AI. So, uh, my machine learning course that I taught through Stanford remains one of the most popular courses on Coursera.
- LFLex Fridman
To this day, it's probably one of the courses, sort of if I ask somebody, "How did you get into machine learning?" Or, "How did you fall in love with machine learning?" Or, "What got you interested?" They, it always goes back to R- (laughs) Andrew Ng at some point.
- ANAndrew Ng
Yeah, sure, sure.
- LFLex Fridman
So you've influenced, the amount of people you've influenced is ridiculous. So for that-
- ANAndrew Ng
Mm-hmm.
- LFLex Fridman
... I, I'm sure I speak for a lot of people to say a big thank you. (laughs)
- ANAndrew Ng
No, yeah, thank you. Y- you know, I, I was once, once reading a news article, um, uh, I think it was Tech Review, and I'm gonna mess up the statistic, but I remember reading an article that said, um, something like, "One-third of all programmers are self-taught." I may have the number one-third wrong. Maybe it was two-thirds. But when I read that article, I thought, "This doesn't make sense. Everyone is self-taught." So-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... 'cause you, you teach yourself. I don't-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... teach people. I just-
- LFLex Fridman
Right. That's well put.
- ANAndrew Ng
(laughs) Um-
- LFLex Fridman
So yeah, so how, how does one get started in deep learning, wh- and where does DeepLearning.AI fit into that?
- ANAndrew Ng
So the deep learning specialization, uh, offered by DeepLearning.AI is, uh, is, is, uh, uh, I think, uh, it was Coursera's top specialization. Uh, it might still be. So it's a very popular way for people to take that specialization, to learn about everything from neural networks to, uh, how to tune a neural network, to what is a convnet, to what is an RNN or sequence model, or, or what is an attention model? And so the deep learning specialization, um, steps everyone through those algorithms so you deeply understand them and can implement them and use them, you know, for, for whatever application.
- LFLex Fridman
From the very beginning? So what would you say are the prerequisites for somebody to take the deep learning specialization in terms of maybe math or programming background?
- ANAndrew Ng
Yeah. Need to understand basic programming since there are programming exercises in Python. Uh, and the math prereq is quite basic. So no calculus is needed. If you know calculus, that's great, you get better intuitions. But deliberately try to teach that specialization without requiring calculus. So I think, um, high school math would be sufficient. Uh, if you know how to multiply two matrices, I think, I think that, that, that's, that, that's great. Uh...
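The "multiply two matrices" prerequisite he mentions amounts to roughly the following. An illustrative plain-Python sketch, not material from the specialization:

```python
def matmul(A, B):
    """Multiply two matrices, each given as a list of rows."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must match"
    # Entry (i, j) is the dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

In practice a library routine (e.g. NumPy's `@` operator) does this for you, but being comfortable with what it computes is about the level of linear algebra being described.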
- LFLex Fridman
So little basic linear algebra is great.
- ANAndrew Ng
Basic linear algebra, even very basic linear algebra, and some programming. I think people that have done the machine learning course will find the deep learning specialization a bit easier. But it's also possible to jump into the deep learning specialization directly. It will be a little bit harder, since we tend to go faster over concepts like how gradient descent works and what an objective function is, which are covered more slowly in the machine learning course.
- LFLex Fridman
Could you briefly mention some of the key concepts in deep learning that students should learn that you envision them learning in the first few months, in the first year or so?
- ANAndrew Ng
So if you take the deep learning specialization, you learn the foundations of what a neural network is, how you build up a neural network from a single logistic unit to a stack of layers to different activation functions. You then learn how to train these networks. One thing I'm very proud of in that specialization is we go through a lot of practical know-how of how to actually make these things work. What are the differences between different optimization algorithms? What do you do if the algorithm overfits, and how do you tell if the algorithm is overfitting? When do you collect more data? When should you not bother to collect more data? I find that even today, unfortunately, there are engineers that will spend six months trying to pursue a particular direction, such as collecting more data, because we heard more data is valuable. But sometimes you could run some tests and could have figured out six months earlier that for this particular problem, collecting more data isn't gonna cut it, so just don't spend six months collecting more data; spend your time modifying the architecture or trying something else. So we go through a lot of the practical know-how, so that when you take the deep learning specialization, you have those skills to be very efficient in how you build these networks.
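The quick test Ng alludes to is essentially a bias/variance check: compare training error and dev error before committing months to data collection. Here's a minimal sketch of that decision rule; the function name, thresholds, and error values are illustrative assumptions, not from the conversation.

```python
def diagnose(train_err, dev_err, target_err):
    """Rough bias/variance diagnostic: decide where to spend effort next.

    All arguments are error rates in [0, 1]. The rule here is a
    simplified illustration, not a prescription.
    """
    bias = train_err - target_err    # gap between training error and the goal
    variance = dev_err - train_err   # gap between dev error and training error

    if bias > variance:
        # Underfitting dominates: collecting more data won't help much.
        return "try a bigger model or train longer"
    else:
        # Overfitting dominates: more data or regularization can help.
        return "collect more data or add regularization"

# A model at 1% training error but 10% dev error is overfitting,
# so this is a case where more data is actually worth collecting:
print(diagnose(train_err=0.01, dev_err=0.10, target_err=0.005))
```

Running a check like this on day one is the kind of test that can save the six months Ng describes.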
- LFLex Fridman
So dive right in to play with the network, to train it, to do inference on a particular data set, to build an intuition about it, without building it up too big, to where you spend, like you said, six months building up your big project without building any intuition of a small aspect of the data that could already tell you everything you need to know about that data.
- ANAndrew Ng
Yes, and also the systematic frameworks of thinking for how to go about building practical machine learning systems. Maybe, to make an analogy: when we learn to code, we have to learn the syntax of some programming language, be it Python or C++ or Octave or whatever. But an equally important, or maybe even more important, part of coding is understanding how to string together these lines of code into coherent things. When should you put something in a function call and when should you not? How do you think about abstraction? Those frameworks are what make a programmer efficient, even more than understanding the syntax. I remember when I was an undergrad at Carnegie Mellon, one of my friends would debug their code by first trying to compile it. It was C++ code. And they wanted to get rid of syntax errors as quickly as possible. So how do you do that? Well, they would delete every single line of code with a syntax error.
- LFLex Fridman
(laughs)
- ANAndrew Ng
So really efficient for getting rid of syntax errors, but a horrible debugging strategy.
- LFLex Fridman
Right. Yes.
- ANAndrew Ng
So we learn how to debug, and I think in machine learning, the way you debug a machine learning program is very different from the way you, you know, do binary search or use a debugger and trace through the code in traditional software engineering. It's an evolving discipline, but I find that the people that are really good at debugging machine learning algorithms are easily 10x, maybe 100x, faster at getting something to work. So-
- LFLex Fridman
And the basic process of debugging is... so the bug in this case is: why isn't this thing learning, improving? Sort of going into the questions of overfitting and all those kinds of things.
- ANAndrew Ng
Yeah.
- 45:55 – 49:40
Unsupervised learning
- LFLex Fridman
What is the most beautiful, surprising, or inspiring idea in deep learning to you? Something that captivated your imagination. Is it the performance that can be achieved with scale, or is there some other idea?
- ANAndrew Ng
I think that if my only job was being an academic researcher-
- LFLex Fridman
Mm-hmm.
- ANAndrew Ng
... and I had an unlimited budget and, you know, didn't have to worry about short-term impact and only focused on long-term impact, I'd probably spend all my time doing research on unsupervised learning. I still think unsupervised learning is a beautiful idea. At both this past NeurIPS and ICML, I was attending workshops and listening to various talks about self-supervised learning, which is one vertical segment, maybe, of unsupervised learning that I'm excited about. Maybe just to summarize the idea. I guess you know the idea, but I'll describe it briefly.
- LFLex Fridman
No, please.
- ANAndrew Ng
So here's an example of self-supervised learning. Let's say we grab a lot of unlabeled images off the internet, so we have infinite amounts of this type of data. I'm gonna take each image and rotate it by a random multiple of 90 degrees, and then I'm going to train a supervised neural network to predict the original orientation: has this been rotated 90 degrees, 180 degrees, 270 degrees, or zero degrees? So you can generate an infinite amount of labeled data, because you rotated the image, so you know the ground-truth label. And various researchers have found that by taking unlabeled data, making up labeled data sets, and training a large neural network on these tasks, you can then take the hidden-layer representation and transfer it to a different task very powerfully. Learning word embeddings, where we take a sentence, delete a word, and predict the missing word, which is one of the ways we learn word embeddings, is another example. And I think there's now this portfolio of techniques for generating these made-up tasks. Another one, called Jigsaw, would be: take an image, cut it up into a three-by-three grid, so like a nine-piece puzzle, jumble up the nine pieces, and have a neural network predict which of the nine-factorial possible permutations it came from. So many groups, including OpenAI, Pieter Abbeel has been doing some work on this too, Facebook, Google Brain, I think DeepMind. Oh, actually, Aäron van den Oord has great work on the CPC objective. So many teams are doing exciting work, and I think this is a way to generate infinite labeled data, and I find this a very exciting piece of unsupervised learning.
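The rotation pretext task Ng describes is simple enough to sketch: the labeled data really is free, because we choose the rotation ourselves. A minimal illustration of the data-generation step, where the function name and toy image shapes are assumptions, and the actual network training is omitted:

```python
import numpy as np

def make_rotation_task(images, rng):
    """Create a labeled pretext task from unlabeled images.

    Each image is rotated by a random multiple of 90 degrees; the label
    is how many quarter-turns were applied (0, 1, 2, or 3). Because we
    applied the rotation ourselves, the ground-truth label is free.
    """
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

rng = np.random.default_rng(0)
unlabeled = rng.random((8, 32, 32))   # stand-in for images scraped off the internet
x, y = make_rotation_task(unlabeled, rng)
# A supervised network trained to predict y from x learns visual features
# whose hidden-layer representations can transfer to other tasks.
print(x.shape, y.shape)
```

Since the pipeline never needs human annotation, the same unlabeled pool can be relabeled endlessly, which is the "infinite labeled data" Ng is pointing at.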
- LFLex Fridman
So long term, you think that's going to unlock a lot of power in machine learning systems, this kind of unsupervised learning?
- ANAndrew Ng
I don't think this is the whole enchilada.
- LFLex Fridman
Right.
- ANAndrew Ng
I think this is just a piece of it, and I think this one piece, self-supervised learning, is starting to get traction. We're very close to it being useful. Well, word embeddings are already really useful.
- LFLex Fridman
Right.
- ANAndrew Ng
I think we're getting closer and closer to this having a significant real-world impact, maybe in computer vision and video. But I think this concept, and I think there'll be other concepts around it, you know... other unsupervised learning things that I've worked on and been excited about. I was really excited about sparse coding and ICA, slow feature analysis. I think all of these are ideas that various of us were working on about a decade ago, before we all got distracted-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... by how well supervised learning was doing.
- LFLex Fridman
How supervised learning worked, yeah.
- ANAndrew Ng
Brought up also-
- LFLex Fridman
So we would return to the fundamentals of representation learning that really started this movement of deep learning.
- ANAndrew Ng
I think there's a lot more work that one could explore around this theme of ideas, and other ideas, to come up with better algorithms.
- 49:40 – 56:12
deeplearning.ai (continued)
- LFLex Fridman
So if we could return to maybe talking quickly about the specifics of DeepLearning.AI, the deep learning specialization perhaps. How long does it take to complete the course, would you say?
- ANAndrew Ng
The official length of the deep learning specialization is, I think, 16 weeks, so about four months. But you go at your own pace, so if you subscribe to the deep learning specialization, there are people that finish it in less than a month by working and studying more intensely. So it really depends on the individual. When we created the deep learning specialization, we wanted to make it very accessible and very affordable. And with Coursera and DeepLearning.AI's education mission, one of the things that's really important to me is that if there's someone for whom paying anything is a financial hardship, they can just apply for financial aid and get it for free.
- LFLex Fridman
If you were to recommend a daily schedule for people learning, whether it's through the DeepLearning.AI specialization or just learning in the world of deep learning, what would you recommend? How do they go about it day to day? Sort of specific advice about learning, about their journey in the world of deep learning, machine learning.
- ANAndrew Ng
I think getting the habit of learning is key, and that means regularity. So for example, we send out our weekly newsletter, The Batch, every Wednesday, so people know it's coming and can spend a little bit of time on Wednesday catching up on the latest news through The Batch. And for myself, I've picked up a habit of spending some time every Saturday and every Sunday reading or studying. So I don't wake up on a Saturday and have to make a decision: do I feel like reading or studying today or not? It's just what I do, and the fact that it's a habit makes it easier. So I think if someone can get into that habit, it's like, you know, just like we brush our teeth every morning. I don't think about it. If I thought about it, it would be a little bit annoying to have to spend two minutes doing that-
- LFLex Fridman
Yeah.
- ANAndrew Ng
... but it's a habit that takes no cognitive load, and this would be so much harder if we had to make a decision every morning. And actually that's the reason I wear the same thing every day as well. It's just one less-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... decision. I just get up and wear my blue shirt. But I think if you can get that habit, that consistency of studying, then it actually feels easier.
- LFLex Fridman
So yeah, and it's kind of amazing. In my own life, I play guitar every day; I force myself to play at least five minutes.
- ANAndrew Ng
Mm-hmm.
- LFLex Fridman
It's a ridiculously short period of time, but because I've gotten into that habit, it's incredible what you can accomplish in a period of a year or two years. You can become exceptionally good at certain aspects of a thing just by doing it every day for a very short period of time. It's kind of a miracle that-
- ANAndrew Ng
Mm-hmm.
- LFLex Fridman
... that's how it works. It adds up over time.
- ANAndrew Ng
Yeah. And I think it's often not about the bursts of effort and the all-nighters, because you can only do that a limited number of times. It's the sustained effort over a long time. I think, you know, reading two research papers is a nice thing to do, but the power is not in reading two research papers; it's reading two research papers a week for a year. Then you've read 100 papers, and you actually learn a lot when you read 100 papers.
- LFLex Fridman
So regularity, and making learning a habit. Do you have other general study tips, particularly for deep learning, that people should follow in their process of learning? Are there some kinds of recommendations or tips you have as they learn?
- ANAndrew Ng
One thing I still do when I'm trying to study something really deeply is take handwritten notes. It varies. I know there are a lot of people that take the deep learning courses during a commute or something, where it may be more awkward to take notes, so I know it may not work for everyone. But when I'm taking courses on Coursera, and I still take some every now and then (the most recent one I took was a course on clinical trials, because I was interested in that), I get out my little Moleskine notebook, and sitting at my desk, I just take down notes of what the instructor is saying. And we know that that act of taking notes, preferably handwritten notes, increases retention.
- LFLex Fridman
So as you're sort of watching the video, just kind of pausing maybe and then taking the basic insights down on paper?
- ANAndrew Ng
Yeah. So actually there have been a few studies; if you search online you'll find some of them. Taking handwritten notes, because handwriting is slower, as we were saying just now-
- LFLex Fridman
(laughs) Right.
- ANAndrew Ng
... causes you to recode the knowledge in your own words more, and that process of recoding promotes long-term retention. This is as opposed to typing, which is fine. Typing is better than nothing, and taking a class without taking notes is better than not taking the class at all. But comparing handwritten notes and typing: most people can type faster than they can handwrite notes. And so when people type, they're more likely to just transcribe verbatim what they heard, and that reduces the amount of recoding, which actually results in less long-term retention.
- LFLex Fridman
I don't know what the psychological effect there is, but it's so true. There's something fundamentally different about handwriting. I wonder what that is. I wonder if it's as simple as the time it takes to write being slower.
- ANAndrew Ng
Yeah. And because you can't write as many words, you have to take whatever they said and summarize it into fewer words, and that summarization process requires deeper processing of the meaning, which then results in better retention.
- LFLex Fridman
That's fascinating.
- ANAndrew Ng
Oh, and I've spent, I think because of Coursera, so much time studying pedagogy. It's actually one of my passions. I really love learning how to more efficiently help others learn. One of the things I do, both in creating videos and when we write The Batch, is I try to think: is one minute spent with us going to be a more efficient learning experience than one minute spent anywhere else? And we really try to make it time-efficient for the learners, because everyone's busy. So when we're editing, I often tell my teams: every word needs to fight for its life, and if you can delete a word, let's just-
- LFLex Fridman
Beautiful.
- ANAndrew Ng
... delete it and not waste the learner's time.
- LFLex Fridman
It's so amazing that you think that way, 'cause there are millions of people impacted by your teaching, and sort of that one minute-
- ANAndrew Ng
Oh, yes.
- LFLex Fridman
... spent has a ripple effect, right?
- ANAndrew Ng
(laughs) Oh, 100%.
- 56:12 – 58:56
Career in deep learning
- LFLex Fridman
How does one make a career out of an interest in deep learning? Do you have advice for people? We just talked about the early steps, but if you want to make it an entire life's journey, or at least a journey of a decade or two, how do you do it?
- ANAndrew Ng
So the most important thing is to get started. Uh-
- LFLex Fridman
Right. (laughs) Of course.
- ANAndrew Ng
And I think in the early parts of a career, coursework, like the deep learning specialization, is a very efficient way to master this material. Because, you know, instructors, be it me or someone else, or Laurence Moroney, who teaches the TensorFlow specialization, or other things we're working on, spend effort to try to make it time-efficient for you to learn a new concept. So coursework is actually a very efficient way for people to learn concepts in the beginning parts of breaking into new fields. In fact, one thing I see at Stanford: some of my PhD students want to jump into research right away, and I actually tend to say, "Look, in your first couple of years as a PhD student, spend time taking courses, because it lays the foundation." It's fine if you're less productive in your first couple of years; you'll be better off in the long term. Beyond a certain point, there's material that doesn't exist in courses, because it's too cutting-edge, the course hasn't been created yet, or there's some practical experience that we're not yet that good at teaching in a course. And I think after exhausting the efficient coursework, most people need to go on to, ideally, work on projects, and then maybe also continue their learning by reading blog posts and research papers and things like that. Doing projects is really important, and again, I think it's important to start small and just do something. Today you read about deep learning and it feels like, oh, all these people are doing such exciting things. What if I'm not building a neural network that changes the world? Then what's the point? Well, the point is, sometimes building that tiny neural network, be it on MNIST, or upgrading to Fashion-MNIST, or whatever, doing your own fun hobby project, that's how you gain the skills to do bigger and bigger projects.
I find this to be true at the individual level and also at the organizational level. For a company to become good at machine learning, sometimes the right thing to do is not to tackle the giant project, but instead to do the small project that lets the organization learn, and then build up from there. This is true both for individuals and for companies.
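The "tiny neural network" starting point Ng recommends can be as small as a single logistic unit trained with gradient descent, which is also where the deep learning specialization begins. Here's a minimal sketch on synthetic data, so it runs without downloading MNIST; the dataset, learning rate, and iteration count are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a first hobby dataset (no downloads needed):
# a 2-D point gets label 1 if x0 + x1 > 1, else 0.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

# A single logistic unit trained with batch gradient descent.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log loss w.r.t. w
    b -= lr * np.mean(p - y)                # gradient of the log loss w.r.t. b

acc = float(np.mean((p > 0.5) == (y == 1.0)))
print(f"training accuracy: {acc:.2f}")
```

Twenty lines like this, fully understood, is the kind of small project that builds the intuition for bigger ones.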
- LFLex Fridman
So taking the first step and then taking small steps is the key. Should students pursue a PhD, do you think? You can do so much; that's one of the fascinating things in machine learning: you can have so much impact without ever getting a PhD. So what are your thoughts?
- 58:56 – 1:03:28
Should you get a PhD?
- LFLex Fridman
Should people go to grad school? Should people get a PhD?
- ANAndrew Ng
I think there are multiple good options, of which doing a PhD could be one. I think that if someone's admitted to a top PhD program, you know, at MIT, Stanford, top schools, that's a very good experience. Or if someone gets a job at a top organization, on a top AI team, I think that's also a very good experience. There are some things you still need a PhD to do: if someone's aspiration is to be a professor at a top academic university, you just need a PhD for that. But if your goal is to start a company, build a company, do great technical work, a PhD is a good experience, but I would look at the different options available to someone: where are the places you can get a job, where are the places you can get into a PhD program, and kind of weigh the pros and cons of those.
- LFLex Fridman
So just to linger on that a little bit longer, what final dreams and goals do you think people should have? What options should they explore? You can work in industry for a large company, like Google, Facebook, Baidu, all these large companies that already have huge teams of machine learning engineers. You can also work within industry in more research-oriented groups, like Google Research, Google Brain. Then you can also, like we said, become a professor in academia. And what else? Oh, you can still build your own company. (laughs)
- ANAndrew Ng
(laughs)
- LFLex Fridman
You can do a startup. Is there anything that stands out between those options, or are they all beautiful, different journeys that people should consider?
- ANAndrew Ng
I think the thing that affects your experience is less whether you're at this company versus that company, or academia versus industry. The thing that affects your experience most is who the people are that you interact with on a daily basis. Even if you look at some of the large companies, the experience of individuals on different teams is very different. What matters most is not the logo above the door when you walk into the giant building every day; what matters most is who the 10 people, the 30 people, are that you interact with every day. So I actually tend to advise people: if you get a job offer from a company, ask who your manager is, who your peers are, who you're actually going to talk to. We're all social creatures; we tend to become more like the people around us, and if you work with great people, you will learn faster. Or if you get a job at a great company or a great university, maybe the logo is great, but you're actually stuck on some team doing work that doesn't excite you, and that's actually a really bad experience. This is true both for universities and for large companies. For small companies, you can figure out who you'd be working with quite quickly, and I tend to advise people: if a company refuses to tell you who you will work with, some of them say, "Oh, join us, we have a rotation system, we'll figure it out," I think that's a worrying answer, because it means you may not actually get to team up with great peers and great people to work with.
- LFLex Fridman
It's actually really profound advice that we sometimes don't consider rigorously or carefully enough: the people around you really matter. Especially when you accomplish great things, it seems the great things are accomplished because of the people around you. So it's not about whether you learn this thing or that thing, or, like you said, the logo that hangs up top. It's the people. That's fascinating-
- ANAndrew Ng
I see.
- LFLex Fridman
... and it's such a hard search process. (laughs) Just like finding the right friends, or somebody to get married to, that kind of thing. It's a very hard search; it's a people-search problem.
- ANAndrew Ng
Yeah. I think when someone interviews at a university or a research lab or a large corporation, it's good to insist on asking: who are the people? Who is my manager? And if you refuse to tell me, I'm gonna think, well, maybe that's because you don't have a good answer, and it may not be someone I'd like.
- LFLex Fridman
And if you don't particularly connect, if something feels off with the people, then don't stick with it. That's a really important signal to consider.
- ANAndrew Ng
Yeah, yeah. And actually, in my Stanford class, CS230, as well as at an ACM talk, I gave, I think, an hour-long talk on career advice, including on the job search process and some of these things, so you can find those videos online.
- LFLex Fridman
Awesome. And I'll point them-
- ANAndrew Ng
Career advice.
- LFLex Fridman
I'll point people to them. Beautiful.
- 1:03:28 – 1:11:14
AI fund - building startups
- LFLex Fridman
So the AI Fund helps AI startups get off the ground, or perhaps you can elaborate on all the fun things it's involved with. What's your advice on how one builds a successful AI startup?
- ANAndrew Ng
You know, in Silicon Valley, a lot of startup failures come from building a product that no one wanted. So when-
- LFLex Fridman
(laughs)
- ANAndrew Ng
... you know, cool technology, but who's gonna use it? So I tend to be very outcome-driven and customer-obsessed. Ultimately, we don't get to vote on whether we succeed or fail; it's only the customer. They're the only one that gets a thumbs-up or thumbs-down vote in the long term. In the short term, various people get various votes, but in the long term, that's what really matters.
- LFLex Fridman
So as you build a startup, you have to constantly ask the question, (laughs) will the customer give a thumbs-up on this?
- ANAndrew Ng
I think so. I think startups that are very customer-focused, customer-obsessed, that deeply understand the customer and are oriented to serve the customer, are more likely to succeed, with the proviso that I think all of us should only do things that we think create social good and move the world forward. So I personally don't want to build addictive digital products just to sell a lot of ads; there are things that could be lucrative that I won't do. But if we can find ways to serve people in meaningful ways, I think those can be great things to do, either in an academic setting, a corporate setting, or a startup setting.
- LFLex Fridman
So can you give me the idea of why you started the AI Fund?
- ANAndrew Ng
I remember when I was leading the AI group at Baidu, I had two parts to my job. One was to build an AI engine to support the existing businesses, and that just ran, just performed by itself. The second part of my job at the time was to try to systematically initiate new lines of business using the company's AI capabilities. So, you know, the self-driving car team came out of my group, and the smart speaker team, similar to what Amazon Echo or Alexa is in the US, but we actually announced it before Amazon did, so Baidu wasn't following Amazon. That came out of my group, and I found that to be actually the most fun part of my job. So what I wanted to do was build AI Fund as a startup studio to systematically create new startups from scratch. With all the things we can now do with AI, I think the ability to build new teams to go after this rich space of opportunities is a very important mechanism to get done the projects that I think will move the world forward. I've been fortunate to build a few teams that had a meaningful positive impact, and I felt that we might be able to do this in a more systematic, repeatable way. A startup studio is a relatively new concept; there are maybe dozens of startup studios right now, but I feel like all of us, many teams, are still trying to figure out how to systematically build companies with a high success rate. Even a lot of my venture capital friends seem to be more and more building companies rather than investing in companies. And I find it a fascinating thing to do: to figure out the mechanisms by which we could systematically build successful teams, successful businesses, in areas that we find meaningful.
- LFLex Fridman
So a startup studio is a place and a mechanism for startups to go from zero to success, to try to develop a blueprint.
- ANAndrew Ng
It's actually a place for us to build startups from scratch. So we often bring in founders and work with them, or maybe even have existing ideas that we match founders with. And then this hopefully launches into successful companies.
- LFLex Fridman
So how close are you to figuring out a way to automate the process of starting from scratch and building a successful AI startup?
- ANAndrew Ng
Yeah. We've been constantly improving and iterating on our processes, on how we do that. So things like: how many customer calls do we need to make in order to get customer validation? How do we make sure this technology can be built? Quite a lot of our businesses need cutting-edge machine learning algorithms, the kind of algorithms that were developed in the last one or two years, and even if something works in a research paper, it turns out taking it to production is really hard. There are a lot of issues in making these things work in real life that are not widely addressed in academia. So how do you validate that this is actually doable? How do you build a team and get the specialized domain knowledge, be it in education or healthcare or whatever sector we're focusing on? So I think we've been getting much better at giving the entrepreneurs a high success rate, but I think we're still... I think the whole world is still-
- LFLex Fridman
Yes.
- ANAndrew Ng
... in the early phases of figuring this out.
- LFLex Fridman
But do you think there are some aspects of that process that are transferable from one startup to another, to another, to another?
- ANAndrew Ng
Yeah, very much so. You know, starting a company is, for most entrepreneurs, a really lonely thing, and I've seen so many entrepreneurs not know how to make certain decisions. Like, how do you do B2B sales? If you don't know that, it's really hard. Or how do you market this efficiently, other than buying ads, which is really expensive? Are there more efficient tactics for that? Or, for a machine learning project, basic decisions can change the course of whether the machine learning product works or not. There are so many hundreds of decisions that entrepreneurs need to make, and making a mistake in a couple of key decisions can have a huge impact on the fate of the company. So I think a startup studio provides a support structure that makes starting a company much less of a lonely experience. And also, when founders face these key decisions, like trying to hire your first VP of engineering: what are good selection criteria? How do you source? Should I hire this person or not? By having an ecosystem around the entrepreneurs, the founders, I think we help them at the key moments, and hopefully make the experience significantly more enjoyable with a higher success rate.
- LFLex Fridman
So they have somebody to brainstorm with in these very difficult decision points?
- ANAndrew Ng
And also to help them recognize what they may not even realize is a key decision point.
- LFLex Fridman
(laughs) Right. That's the first and probably the most important part, yeah.
- ANAndrew Ng
Actually, can I say one other thing? You know, I think building companies is one thing, but I feel like it's really important that we build companies that move the world forward. For example, within the AI Fund team, there was once an idea for a new company that, if it had succeeded, would have resulted in people watching a lot more videos of a certain narrow vertical type. I looked at it: the business case was fine, the revenue case was fine, but I looked at it and just said, "I don't want to do this." I don't actually want to have a lot more people watch this type of video. It wasn't educational. Is it educational? Maybe.
- LFLex Fridman
Yeah.
- ANAndrew Ng
And so I killed the idea on the basis that I didn't think it would actually help people. So whether you're building companies, working in enterprises, or doing personal projects, I think it's up to each of us to figure out what difference we want to make in the world.
- 1:11:14 – 1:20:44
Landing.ai - growing AI efforts in established companies
- LFLex Fridman
With Landing.ai, you help already established companies grow their AI and machine learning efforts. How does a large company integrate machine learning into their efforts?
- ANAndrew Ng
AI is a general purpose technology, and I think it will transform every industry. Our community has already transformed the software internet sector to a large extent. Most software internet companies, outside the top five or six, already have reasonable machine learning capabilities or are getting there. There's still room for improvement. But when I look outside the software internet sector, everything from manufacturing, agriculture, healthcare, logistics, transportation, there are so many opportunities that very few people are working on. So I think the next wave for AI is for us to also transform all of those other industries. There was a McKinsey study estimating $13 trillion of global economic growth; US GDP is $19 trillion, so $13 trillion is a big number. Or PwC estimated $16 trillion. Whatever the number is, it's large. But the interesting thing to me was that a lot of that impact will be outside the software internet sector, so we need more teams to work with these companies to help them adopt AI. I think this is one of the things that will help drive global economic growth and make humanity more powerful.
- LFLex Fridman
And like you said, the impact is there. So what are the best industries, the biggest industries where AI can help perhaps outside the software tech sector?
- ANAndrew Ng
Frankly, I think it's all of them.
- LFLex Fridman
(laughs)
- ANAndrew Ng
Some of the ones I'm spending a lot of time on are manufacturing-
- LFLex Fridman
Manufacturing.
- ANAndrew Ng
... agriculture, and looking into healthcare. For example, in manufacturing, we do a lot of work in visual inspection, where today there are people standing around using the human eye to check if this plastic part or this smartphone has a scratch or a dent or something in it. We can use a camera to take a picture, then use an algorithm, deep learning and other things, to check whether it's defective, and thus help factories improve yield, quality, and throughput. It turns out the practical problems we run into are very different from the ones you might read about in most research papers. The data sets are really small, so we face small data problems. And the factories keep changing the environment, so the model works well on your test set, but guess what? Something changes in the factory. The lights go on or off. Recently there was a factory in which a bird flew through and pooped on something, and that changed things. So increasing our algorithms' robustness to all the changes that happen in a factory... I find that we run into a lot of practical problems that are not as widely discussed in academia. And it's really fun being on the cutting edge, solving these problems before many people are even aware that there is a problem there.
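The lighting-change problem described here is commonly attacked with data augmentation during training. As a rough illustration only (not Landing AI's actual pipeline; the function name and jitter ranges are invented), here is a minimal brightness/contrast jitter in NumPy that simulates lighting shifts on inspection images with pixel values in [0, 1]:

```python
# Illustrative sketch, not a production augmentation pipeline:
# randomly perturb brightness and contrast so a visual-inspection
# model sees lighting conditions beyond the original training set.
import numpy as np

def lighting_jitter(image, rng, brightness=0.2, contrast=0.2):
    """Apply a random additive brightness shift and multiplicative
    contrast scale, keeping pixel values clipped to [0, 1]."""
    b = rng.uniform(-brightness, brightness)      # additive shift
    c = rng.uniform(1 - contrast, 1 + contrast)   # multiplicative scale
    return np.clip(image * c + b, 0.0, 1.0)

rng = np.random.default_rng(0)
img = np.full((32, 32), 0.5)      # stand-in for an inspection photo
aug = lighting_jitter(img, rng)
print(aug.shape, aug.min() >= 0.0, aug.max() <= 1.0)
```

In practice a library such as torchvision or albumentations would provide richer transforms; the point is simply that cheap synthetic lighting variation can buy robustness when real factory data is scarce.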
- LFLex Fridman
And that's such a fascinating space. You're absolutely right. But what is the first step that a company should take? Because it's a scary leap into this new world, going from the human eye inspecting to digitizing that process, having a camera, having an algorithm. What's the first step? What's the early journey that you recommend, that you see these companies taking?
- ANAndrew Ng
I published a document called the AI Transformation Playbook that's online, and taught briefly in the AI For Everyone course on Coursera, about the long-term journey companies should take. But the first step is actually to start small. I've seen a lot more companies fail by starting too big than by starting too small. Take even Google. Most people don't realize how hard it was and how controversial it was in the early days. When I started Google Brain, it was controversial. People thought deep learning, neural nets: tried it, didn't work. Why would you want to do deep learning? So my first internal customer within Google was the Google Speech team, which is not the most lucrative project in Google, not the most important. It's not web search or advertising. But by starting small, my team helped the speech team build a more accurate speech recognition system. And this caused their peers, other teams, to start to have more faith in deep learning. My second internal customer was the Google Maps team, where we used computer vision to read house numbers from Street View images to more accurately locate houses within Google Maps. So it improved the quality of the geodata. And it was only after those two successes that I started a more serious conversation with the Google Ads team.
- LFLex Fridman
So there's a ripple effect: you showed that it works in these cases, and then it propagates through the entire company, the sense that this thing has a lot of value and use for us.
- ANAndrew Ng
I think the early small-scale projects help the teams gain faith, but also help the teams learn what these technologies do. I still remember our first GPU server; it was a server under some guy's desk. And that taught us early important lessons about how you have multiple users share a set of GPUs, which was really not obvious at the time. But those early lessons were important. We learned a lot from that first GPU server that later helped the teams think through how to scale up to much larger deployments.
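The GPU-sharing lesson mentioned here is at heart a scheduling problem. Purely as a hypothetical sketch (nothing below reflects Google's actual system; the job names and allocator are invented), a naive round-robin assignment of jobs to a fixed GPU pool looks like this:

```python
# Toy round-robin allocator: spread submitted jobs across a fixed
# number of GPUs. Real schedulers also track memory, priorities,
# and preemption; this only illustrates the basic fairness idea.
from collections import defaultdict
from itertools import cycle

def assign_jobs(jobs, num_gpus):
    """Assign each job to a GPU in round-robin order.
    Returns a dict mapping gpu_id -> list of jobs."""
    assignment = defaultdict(list)
    gpu_ids = cycle(range(num_gpus))
    for job in jobs:
        assignment[next(gpu_ids)].append(job)
    return dict(assignment)

print(assign_jobs(["train_a", "train_b", "eval_c"], num_gpus=2))
# {0: ['train_a', 'eval_c'], 1: ['train_b']}
```

Modern cluster managers (Kubernetes device plugins, Slurm's GRES) solve the same problem with far more machinery, which is exactly the kind of infrastructure lesson an under-the-desk server teaches early.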
- LFLex Fridman
Are there concrete challenges that companies face that, uh, that you see as important for them to solve?
- ANAndrew Ng
I think building and deploying machine learning systems is hard. There's a huge gulf between something that works in a Jupyter notebook on your laptop versus something-
- LFLex Fridman
Right.
- ANAndrew Ng
... that runs in a production deployment setting in a factory or an agricultural plant or whatever. I see a lot of people get something to work on their laptop and say, "Wow, look what I've done." And that's great; that's a very important first step. But a lot of teams underestimate the rest of the steps needed. For example, I've heard this exact same conversation between a lot of machine learning people and business people. The machine learning person says, "Look, my algorithm does well on the test set. And it's a clean test set; I didn't peek." And the business person says, "Thank you very much, but your algorithm sucks. It doesn't work."
- LFLex Fridman
Yeah.
- ANAndrew Ng
And the machine learning person says, "No, wait, I did well on the test set." And I think there is a gulf between what it takes to do well on a test set sitting on your hard drive versus what it takes to work well in a deployment setting. Some common problems: robustness and generalization. You deploy something in the factory; maybe they chop down a tree outside the factory, so the tree no longer covers the window and the lighting is different. So the data changes.
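That gulf between test-set accuracy and deployment behavior can be shown with a toy example. Everything below is invented for illustration (the thresholding "detector", the synthetic images, and the lighting shift are my own assumptions, not anything from the episode):

```python
# Toy demonstration of distribution shift: a defect "detector"
# calibrated on clean images degrades when a lighting change
# brightens every pixel at deployment time.
import numpy as np

def detect_defect(image, threshold=0.7):
    """Flag a defect if the brightest pixel exceeds the threshold."""
    return image.max() > threshold

rng = np.random.default_rng(42)
clean = rng.uniform(0.0, 0.6, size=(100, 8, 8))  # 100 synthetic images
clean[::2] += 0.35          # even-indexed images contain a bright "defect"
labels = [i % 2 == 0 for i in range(100)]

test_acc = np.mean([detect_defect(im) == y for im, y in zip(clean, labels)])

shifted = clean + 0.2       # the lights changed in the factory
deploy_acc = np.mean([detect_defect(im) == y for im, y in zip(shifted, labels)])

print(test_acc, deploy_acc)  # accuracy is high on the clean set, lower after the shift
```

The fixed threshold is the weak point: once the whole pixel distribution moves, defect-free images start tripping it, which is exactly why robustness work targets the gap between the archived test set and the live data.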
Episode duration: 1:29:09