Lex Fridman Podcast

Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Lex Fridman Podcast #59

Lex Fridman and Sebastian Thrun on AI, self-driving cars, flying taxis, and education.

Lex Fridman (host) · Sebastian Thrun (guest)
Dec 21, 2019 · 1h 18m


  1. 0:00–15:00


    1. LF

      The following is a conversation with Sebastian Thrun. He's one of the greatest roboticists, computer scientists, and educators of our time. He led the development of the autonomous vehicles at Stanford that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge. He then led the Google self-driving car program which launched the self-driving car revolution. He taught the popular Stanford course on Artificial Intelligence in 2011, which was one of the first massive open online courses, or MOOCs as they're commonly called. That experience led him to co-found Udacity, an online education platform. If you haven't taken courses on it yet, I highly recommend it. Their self-driving car program, for example, is excellent. He's also the CEO of Kitty Hawk, a company working on building flying cars, or more technically, EVTOLs, which stands for electric vertical takeoff and landing aircraft. He has launched several revolutions and inspired millions of people, but also, as many know, he's just a really nice guy. It was an honor and a pleasure to talk with him. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow it on Spotify, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. If you leave a review on Apple Podcasts or YouTube or Twitter, consider mentioning ideas, people, topics you find interesting. It helps guide the future of this podcast. But in general, I just love comments with kindness and thoughtfulness in them. This podcast is a side project for me, as many people know, but I still put a lot of effort into it, so the positive words of support from an amazing community, from you, really help. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. 
I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation that you can skip to, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, which, again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Sebastian Thrun. You've mentioned that The Matrix may be your favorite movie. So let's start with a crazy philosophical question. Do you think we're living in a simulation? And in general, do you find the thought experiment interesting?

    2. ST

      Define simulation, I would say maybe we are, maybe we are not, but it's completely irrelevant to the way we should act.

    3. LF

      Right. Putting aside for a moment the fact that it might not have any impact on how we should act as human beings, for people studying theoretical physics, these kinds of questions might be kind of interesting, looking at the universe as an information processing system.

    4. ST

      The universe is an information processing system.

    5. LF

      It is.

    6. ST

      It's a huge physical, biological, chemical computer, there's no question. Um, but I live here and now. I care about people, I care about us.

    7. LF

      What do you think it's trying to compute?

    8. ST

      I don't think there's an intention. I think that just the- the world evolves the way it evolves, and it's- it's beautiful, it's unpredictable, and I'm really, really grateful to be alive.

    9. LF

      Spoken like a true human.

    10. ST

      Which last time I checked, I was. (laughs)

    11. LF

      Or that, in fact, this whole conversation is just a Turing test to see if indeed you are. You've also said that one of the first programs you've written was on a, wait for it, TI-57 calculator. (laughs)

    12. ST

      Yeah.

    13. LF

      Maybe that's early '80s? We don't wanna date calculators or anything.

    14. ST

      It was early '80s, correct.

    15. LF

      Yeah. So if you were to place yourself back into that time, into the mindset you were in, could you have predicted the evolution of computing, AI, the internet, technology in- in the decades that followed?

    16. ST

      I was super fascinated by Silicon Valley, which I'd seen on television once and thought, "My God, this is so cool. They built DRAMs there and CPUs. How cool is that?" And as a college student a few years later, I decided to really study intelligence and study human beings, and found that even back then in the '80s and '90s, artificial intelligence is what fascinated me the most. What was missing is that back in the day, the computers were really small. The brains you could build were not any bigger than a cockroach's, and cockroaches aren't very smart. So we weren't at the scale yet where we are today.

    17. LF

      Did you dream at that time to achieve the kind of scale we have today, or did that seem possible?

    18. ST

      I always wanted to make robots smart, and I felt it was super cool to build an artificial human, and the best way to build an artificial human was to build a robot, because that's kind of the closest we could do. Unfortunately, we aren't there yet. The robots today are still very brittle, but it's fascinating to study intelligence from a constructive perspective, where you build something.

    19. LF

      To understand, you build. What do you think it takes to build an intelligent system and an intelligent robot?

    20. ST

      I think the biggest innovation that we've seen is machine learning, and it's the idea that the computers can basically teach themselves. Let's give an example. I'd say, um, everybody pretty much knows how to walk, and we learn how to walk in the first year or two of our lives.

    21. LF

      Mm-hmm.

    22. ST

      But no scientist has ever been able to write down the rules of human gait. We don't understand it. We have it in our brains somehow. We can practice it, we understand it, but we can't articulate it. We can't pass it on by language. And that, to me, is kind of a deficiency of today's computer programming. When you program a computer, they're so insanely dumb that you have to give them rules for every contingency. Very unlike the way people learn, from data and experience, computers are being instructed. And because it's so hard to get this instruction set right, we pay software engineers $200,000 a year.

    23. LF

      Yeah.

    24. ST

      Now, the most recent innovation, which has actually been in the making for, like, 30, 40 years, is the idea that computers can find their own rules. So they can learn from falling down and getting up, the same way children learn from falling down and getting up. And that revolution has led to a capability that's completely unmatched. Today's computers can watch experts do their jobs, whether you're a doctor or a lawyer, pick up the regularities, learn those rules, and then become as good as the best experts.
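      [Editor's note: the idea Thrun describes here, a machine inferring its own rule from labeled examples instead of a human writing the rule down, can be sketched in a few lines. Everything below (the toy step-length data, the stability labels, the decision-stump learner) is an illustrative stand-in, not something from the conversation or from any real robot.]

```python
# A minimal sketch of a computer "finding its own rule": a decision stump
# searches labeled examples for the threshold that best separates the two
# outcomes, instead of a human hand-coding that threshold. Toy data only.

def fit_stump(xs, ys):
    """Return the threshold on x that best separates label 0 from label 1."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        # Predict label 1 whenever x > t; count mistakes against the labels.
        err = sum((x > t) != bool(y) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical "falling down and getting up" data:
# step length in meters, label 1 = fell over, 0 = stayed upright.
steps = [0.3, 0.4, 0.5, 0.6, 0.9, 1.0, 1.1, 1.2]
fell  = [0,   0,   0,   0,   1,   1,   1,   1]

threshold = fit_stump(steps, fell)
print(threshold)  # → 0.6, the learned rule: steps longer than this fall
```

The point of the sketch is that no one wrote "0.6" anywhere; the rule emerged from the examples, which is the shift from instruction to learning that the conversation is describing.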

    25. LF

      So the dream of expert systems in the '80s, for example, had at its core the idea that humans could boil down their expertise on a sheet of paper, to sort of be able to explain to machines how to do something explicitly. So what's the role of human expertise in this whole picture? Do you think most of the intelligence will come from machines learning from experience, without human expertise input?

    26. ST

      So the question for me is much more, how do you express expertise? You can express expertise by writing a book. You can express expertise by showing someone what you're doing. You can express expertise by applying it, by many different ways. And I think expert systems were our best attempt in AI to capture expertise in rules, where someone sat down and said, "Here are the rules of human gait. Here's when you put your big toe forward and your heel backward, and here's how you stop stumbling." And as we now know, the set of rules, the set of language that we can command, is incredibly limited. The majority of the human brain doesn't deal with language; it deals with subconscious, numerical, perceptual things that we aren't even self-aware of. Now, when an AI system watches an expert do their job and practice their job, it can pick up things that people can't even put into writing, into books or rules. And that's where the real power is. We now have AI systems that, for example, look over the shoulders of highly paid human doctors, like dermatologists or radiologists, and they can somehow pick up those skills that no one can express in words.

    27. LF

      So you were a key person in launching three revolutions: online education, autonomous vehicles, and flying cars, or eVTOLs. So high level, and I apologize for all the philosophical questions. (laughs)

    28. ST

      There's no apology necessary.

    29. LF

      (laughs) How do you choose what problems to try and solve?

    30. ST

      Um-

  2. 15:00–30:00


    1. ST

      get. What we did really, really well was time management. We were done with everything a month before the race, and we froze the entire software a month before the race. And it turned out, looking at other teams, every other team complained that if they had just one more week, they would have won.

    2. LF

      Mm-hmm.

    3. ST

      And we decided, (laughs) we're not gonna fall into that mistake. We're gonna be early, and we had an entire month to shake out the system, and we actually found two or three minor bugs in the last month that we had to fix. And we were completely prepared when the race occurred.

    4. LF

      Okay, so first of all, that's such an incredibly rare achievement, in terms of being able to be done on time, or ahead of time. How do you do that in your future work? What advice do you have in general? Because it seems to be so rare, especially in highly innovative projects like this. People work to the last second.

    5. ST

      Well, the nice thing about the DARPA Grand Challenge is that the problem was incredibly well-defined. We were able, for a while, to drive the old DARPA Grand Challenge course, which had been used the year before.

    6. LF

      Yes.

    7. ST

      And then, for some reason, we were kicked out of the region, so we had to go to a different desert, the Sonoran Desert, and were able to drive desert trails-

    8. LF

      Mm-hmm.

    9. ST

      ... just of the same type. So there was never any debate about what's actually the problem. We didn't sit down and say, "Hey, should we build a car or a plane?"

    10. LF

      Mm-hmm.

    11. ST

      We had to build a car. That made it very, very easy. Then I studied my own life and the lives of others and realized that the typical mistake people make is that there is this kind of crazy bug left that they haven't found yet, and they regret it, and the bug would have been trivial to fix, they just hadn't fixed it yet. And I didn't want to fall into that trap, so I built a testing team. We had a testing team that built a testing booklet of 160 pages of tests we had to go through just to make sure we shook out the system appropriately.

    12. LF

      Wow.

    13. ST

      And the testing team was with us all the time and dictated to us, "Today, we do railroad crossings. Tomorrow, we practice the start of the event." And in all of these, we thought, "Oh, my God, this is long-solved, trivial," and then we tested it out. "Oh, my God, it doesn't do a railroad crossing. Why not? Oh, my God, it mistakes the rails for metal barriers."

    14. LF

      Yes.

    15. ST

      "Shit, we have to fix this."

    16. LF

      Yes.

    17. ST

      So it was really a continuous focus on improving the weakest part of the system, and as long as you focus on improving the weakest part of the system, you eventually build a really great system.

    18. LF

      Let me just pause on that. To me, as an engineer, it's just super exciting that you were thinking like that, especially at that stage. That's brilliant, that testing was such a core part of it. And maybe to linger on the point of leadership: I think it's one of the first times you were really a leader, and you've led many very successful teams since then. What does it take to be a good leader?

    19. ST

      I would say, most of all... take credit- (laughs)

    20. LF

      (laughs)

    21. ST

      ... uh, for the work of others.

    22. LF

      Right.

    23. ST

      That's very convenient, it turns out, because I can't do all these things myself.

    24. LF

      (laughs)

    25. ST

      I'm an engineer at heart, so I care about engineering. I don't know what's the chicken and what's the egg there, but as a kid, I loved computers because you could tell them to do something and they actually did it.

    26. LF

      Hm.

    27. ST

      It was very cool. You could, like, in the middle of the night, wake up at 1:00 in the morning and switch on your computer, and what you told it to do yesterday, it would still do. That was really cool. Unfortunately, that didn't quite work with people. You go to people and tell them what to do, and they don't do it.

    28. LF

      Mm-hmm.

    29. ST

      ... and they hate you for it. Or they do it today, and then you go a day later and they've stopped doing it. So then the question really became, how can you put yourself in the brains of people, as opposed to computers? And it turns out computers are super dumb. They're so dumb. If people were as dumb as computers, I wouldn't want to work with them.

    30. LF

      Mm-hmm.

  3. 30:00–45:00


    1. ST

      people outside the system, it's a different topic, but the system itself is a good system. If I had one wish, I would say it'd be really great if there was more debate about what the great big problems are in society, and focus on those, and most of them are interdisciplinary. Unfortunately, it's very easy to fall into an intra-disciplinary viewpoint, where your problem is dictated by what your closest colleagues believe the problem is. It's very hard to break out and say, "Well, there's an entire new field of problems." So to give you an example: prior to me working on self-driving cars, I was a roboticist and a machine learning expert, and I wrote books on robotics, something called probabilistic robotics. It's a very methods-driven kind of viewpoint of the world. I built robots that acted in museums as tour guides, that led children around. They did something that, at the time, was moderately challenging. When I started working on cars, several colleagues told me, "Sebastian, you're destroying your career, because in our field of robotics, cars are looked at as a gimmick. They're not expressive enough. They can only push the throttle and the brakes. There's no dexterity. There's no complexity. It's just too simple." And no one came to me and said, "Wow, if you solve that problem, you can save a million lives." Right? Among all the robotic problems that I've seen in my life, I would say the self-driving car, transportation, is the one that has the most hope for society. So how come the robotics community wasn't all over the place? It was because we focused on methods, on solutions, and not on problems. Like, if you go around today and ask your grandmother, "What bugs you? What really makes you upset?", I challenge any academic to do this, and then realize how far your research is probably away from that today.

    2. LF

      At the very least, that's a good thing for academics to deliberate on.

    3. ST

      The other thing that's really nice in Silicon Valley is that Silicon Valley is full of smart people outside academia, right? There's the Larry Pages and Mark Zuckerbergs of the world, who are every bit as smart as, or smarter than, the best academics I've met in my life. And what they do is at a different level. They build the systems, they build the customer-facing systems. They build things that people can use without technical education, and they are inspired by research. They're inspired by scientists. They hire the best PhDs from the best universities for a reason. So I think this kind of vertical integration between the real product, the real impact, and the real thought, the real ideas, is actually working surprisingly well in Silicon Valley. It did not work as well in other places in this nation. When I worked at Carnegie Mellon, we had the world's finest computer science university, but there weren't those people in Pittsburgh who would be able to take these very fine computer science ideas and turn them into massively impactful products. That symbiosis seemed to exist pretty much only in Silicon Valley, and maybe a bit in Boston and Austin.

    4. LF

      Yeah. Well, Stanford, that's-

    5. ST

      And New York.

    6. LF

      ... that's really interesting. So if we look a little bit further on from the DARPA Grand Challenge and the launch of the Google self-driving car, what do you see as the state and the challenges of autonomous vehicles as they are now, in actually achieving that huge scale and having a huge impact on society?

    7. ST

      I'm extremely proud of what has been accomplished. And again, I'm taking a lot of credit for the work of others.

    8. LF

      (laughs)

    9. ST

      And I'm actually very optimistic, and people have been kind of worrying, "Is it too fast? Is it too slow? Why is it not there yet?" And so on. It is actually quite an interesting, hard problem, in that to build a self-driving car that manages 90% of the problems encountered in everyday driving is easy. We can literally do this over a weekend. To do 99% might take a month. Then there's 1% left. That 1% would mean that you still have a fatal accident every week, very unacceptable. So now you work on this 1%, and the 99% of the remaining 1% is actually still relatively easy, but now you're down to like a hundredth of 1%, and it's still completely unacceptable in terms of safety. The variety of things you encounter is just enormous, and that gives me enormous respect for human beings, that we're able to deal with the couch on the highway, right? Or the deer in the headlights, or the blown tire that we've never been trained for, and all of a sudden have to handle in an emergency situation, and often do very, very successfully. It's amazing from that perspective how safe driving actually is, given how many millions of miles we drive every year in this country. We are now at a point where I believe the technology is there, and I've seen it. I've seen it in Waymo, I've seen it in Aptiv, I've seen it in Cruise, in a number of companies, in Voyage, where vehicles are now driving around and are basically flawlessly able to drive people around in limited scenarios. In fact, you can go to Vegas today and summon a Lyft, and if you've got the right setting in your app, you'll be picked up by a driverless car. Now, there are still safety drivers in there, but that's a fantastic way to kind of learn what the limits of the technology are today. And there are still some glitches, but the glitches have become very, very rare. I think the next step is gonna be to down-cost it, to harden it.
The equipment, the sensors, are not quite at automotive-grade standard yet. And then to really build the business models, to really kind of go somewhere and make the business case, and the business case is hard work. It's not just, "Oh my God, we have this capability, people are just gonna buy it." You have to make it affordable. You have to find the social acceptance of people. None of the teams has yet been able, or gutsy enough, to drive around without a person inside the car. And that's the next magical hurdle: to be able to send these vehicles around completely empty in traffic. I mean, I wait every day for the news that Waymo has just done this.
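      [Editor's note: the 90%/99%/1% arithmetic above can be made concrete with a back-of-the-envelope calculation. The numbers below, events encountered per mile and fleet miles per week, are round assumptions made up for illustration; nothing here is a figure from the conversation.]

```python
# Why "99% of situations handled" is nowhere near enough for a car: the
# residual failure count scales with exposure. Both constants are invented
# round numbers for illustration only.

def unhandled_per_week(fraction_handled, events_per_mile, miles_per_week):
    """Expected number of driving situations the system fails to handle."""
    return (1 - fraction_handled) * events_per_mile * miles_per_week

EVENTS_PER_MILE = 10       # assumed decisions/hazards encountered per mile
MILES_PER_WEEK = 100_000   # assumed fleet mileage per week

for handled in (0.90, 0.99, 0.9999):
    print(handled, unhandled_per_week(handled, EVENTS_PER_MILE, MILES_PER_WEEK))
```

Even at 99.99% handled, the sketch leaves on the order of a hundred unhandled situations per week under these assumptions, which is why each extra "nine" of reliability is the hard part.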

    10. LF

      (laughs) So, you know, interesting, you mentioned gutsy. Let me ask a maybe unanswerable, maybe edgy question: in terms of how much risk is required, some guts, in terms of leadership style, it would be good to contrast approaches. And I don't think anyone knows what's right, but if we compare Tesla and Waymo, for example, Elon Musk and the Waymo team, there are slight differences in approach. So on the Elon side, there's more, I don't know what the right word to use is, but aggression in terms of innovation. And on Waymo's side, there's more of a cautious, safety-focused approach to the problem. What do you think it takes? Which leadership at which moment is right? Which approach is right?

    11. ST

      Look, I don't sit on either of those teams, so I'm unable to even verify whether what somebody says is correct.

    12. LF

      Right.

    13. ST

      At the end of the day, every innovator in that space will face a fundamental dilemma. And I would say, you could put aerospace titans into the same bucket.

    14. LF

      Yes.

    15. ST

      Which is, you have to balance public safety with your drive to innovate. And this country in particular, the United States, has a 100-plus-year history of doing this very successfully. Air travel is, what, a hundred times as safe per mile as ground travel, as cars, and there's a reason for it: people have found ways to be very methodical about ensuring public safety, whilst still being able to make progress on important aspects, for example, airline noise and fuel consumption. So I think those practices are proven and they actually work. We live in a world safer than ever before. And yes, there will always be the chance that something goes wrong. There's always the possibility that someone makes a mistake or there's an unexpected failure. We can never guarantee 100% absolute safety, other than by just not doing it. But I'm very proud of the history of the United States. I mean, we've dealt with much more dangerous technology, like nuclear energy, and kept that safe too. We have nuclear weapons, and we keep those safe. So we have methods and procedures that really balance these two things very, very successfully.

    16. LF

      You've mentioned a lot of great autonomous vehicle companies that are taking sort of the Level 4, Level 5 approach: they jump to full autonomy with a safety driver, and take that kind of approach, also through simulation and so on. There's also the approach that Tesla Autopilot is taking, which is incrementally taking a Level 2 vehicle and using machine learning, learning from the driving of human beings, and trying to creep up, trying to incrementally improve the system until it's able to achieve Level 4 autonomy, so perfect autonomy in certain kinds of geographical regions. What are your thoughts on these contrasting approaches?

    17. ST

      Well, so first of all, I'm a very proud Tesla owner, and I literally use the Autopilot every day, and it has literally kept me safe. It is a beautiful technology, specifically for highway driving when I'm slightly tired.

    18. LF

      Yeah.

    19. ST

      Because then it turns me into a much safer driver, and I'm 100% confident that's the case. In terms of the right approach, I think the biggest change I've seen since I ran the Waymo team is this thing called deep learning. Deep learning was not a hot topic when I started Waymo, or Google self-driving cars. It was there; in fact, we started Google Brain at the same time in Google X. So-

    20. LF

      Mm-hmm.

    21. ST

      ... I invested in deep learning, but people didn't talk about it. It wasn't a hot topic, and now it is. There's been a shift of emphasis from a more geometric perspective, where you use geometric sensors that give you a full 3D view and you do geometric reasoning about "oh, this box over here might be a car," towards a more human-like, "oh, let's just learn about it, this looks like the thing I've seen 10,000 times before, so maybe it's the same thing," machine learning perspective. And that has really put, I think, all these approaches on steroids. At Udacity, we teach a course in self-driving cars. In fact, I think we've graduated over 20,000 or so people with self-driving car skills, so every self-driving car team in the world now uses our engineers.

    22. LF

      (laughs)

    23. ST

      And in this course, the very first homework assignment is to do lane finding in images. Lane finding in images, for a layman, means this: you put a camera into your car, or you open your eyes, and you want to know where the lane is, right? So you can stay inside the lane with your car. Humans can do this super easily. You just look and you know where the lane is, just intuitively. For machines, for a long time, it was super hard, because people would write these kinds of crazy rules: if there are white lane markers, and here's what white really means, and this is not quite white enough, so it's not white; or maybe the sun is shining, so when the sun shines, this is white; and this is a straight line, or maybe it's not quite a straight line because the road is curved; and do we know that there should be six feet between lane markings, or 12 feet, whatever it is? And now what the students do is take machine learning. So instead of writing these crazy rules for what a lane marker is, they just say, "Hey, let's take an hour of driving and label it," and tell the vehicle by hand, "This is actually the lane, and these are examples," and have the machine find its own rules for what lane markings are. And within 24 hours, every student who's never done any programming before in this space can write a perfect lane finder, as good as the best commercial lane finders. And that's completely amazing to me. We've seen progress using machine learning that completely dwarfs anything that I saw 10 years ago.
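      [Editor's note: the contrast Thrun draws, hand-coding what a lane marker looks like versus learning it from hand-labeled pixels, can be sketched with a toy classifier. The RGB values and the nearest-centroid method below are illustrative stand-ins; the actual Udacity assignment is far richer than this.]

```python
# A toy version of learned lane finding: instead of hand-writing rules for
# what "white lane paint" means, remember the average color of hand-labeled
# lane and road pixels, then classify new pixels by which mean is closer.
# All pixel values are synthetic, made up for illustration.

def centroid(pixels):
    """Mean RGB color of a list of (r, g, b) pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def fit(lane_pixels, road_pixels):
    """'Training' is just remembering each labeled class's mean color."""
    return centroid(lane_pixels), centroid(road_pixels)

def is_lane(pixel, model):
    """Label a pixel by whichever class mean it is closer to."""
    lane_mean, road_mean = model
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(pixel, lane_mean) < dist(pixel, road_mean)

# Hand-labeled examples: bright-ish paint vs dark asphalt (synthetic).
lane = [(250, 250, 240), (235, 238, 230), (245, 240, 220)]
road = [(60, 60, 65), (75, 70, 72), (50, 55, 58)]

model = fit(lane, road)
print(is_lane((240, 240, 235), model))  # → True
print(is_lane((70, 68, 66), model))     # → False
```

Nobody wrote down "what white really means" here; the decision boundary came from the labeled examples, which is exactly the trade the students make.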

    24. LF

      Yeah. And just as a side note, the self-driving car Nanodegree, the fact that you launched that many years ago now, maybe four years ago?

    25. ST

      Three years ago.

    26. LF

      Three years ago. That's incredible. That's a great example of system-level thinking, sort of just taking an entire course that teaches you how to solve the entire problem. I definitely recommend people

    27. ST

      existing universities will be very slow to move, because they're departmentalized and there's no-

    28. LF

      Right.

    29. ST

      ... department for self-driving cars. So-

    30. LF

      Right.

  4. 45:00–1:00:00


    1. ST

      to be at least $10 million, right?

    2. LF

      (laughs)

    3. ST

      So think about this: you get to have a skill, and you team up and build a company, and your worth now is $10 million. I mean, that's kind of cool. What other thing could you do in life to be worth $10 million within a year?

    4. LF

      Yeah, amazing. But to come back for a moment onto deep learning and its application in autonomous vehicles, you know, what are your thoughts on Elon Musk's statement, provocative statement perhaps, that lidar is a crutch? So this geometric way of thinking about the world may be holding us back, and what we should instead be doing in this particular space of autonomous vehicles is using the camera as the primary sensor, and using computer vision and machine learning as the primary way to-

    5. ST

      I have two comments.

    6. LF

      ... (crosstalk)

    7. ST

      I think, first of all, we all know that people can drive cars without lidars in their heads because we only have eyes.

    8. LF

      Yes.

    9. ST

      And we mostly just use eyes for driving. Maybe we use some other perception about our bodies, accelerations, occasionally our ears, certainly not our noses. (laughs) So the existence proof is there that eyes must be sufficient. In fact, if someone put a camera out and then gave us the camera image with no latency, we would be able to drive a car that way, the same way. So a camera is also sufficient. Secondly, I really love the idea that in the Western world, we have many, many different people trying different hypotheses. It's almost like an anthill, right? If an anthill tries to forage for food, you can sit there as two ants and agree on what the perfect path is, and then every single ant marches to where the most likely location of food is; or we can just spread out. And I promise you, the spread-out solution will be better, because-

    10. LF

      Mm-hmm.

    11. ST

      ... if the discussing, philosophical, intellectual ants get it wrong and they all move in the wrong direction, they're gonna waste the day, and then they're gonna discuss again for another week. Whereas if all these ants go in random directions, someone's gonna succeed, and they're gonna come back and claim victory and get the Nobel Prize, or whatever the ant equivalent is, and then they all march in the same direction.

    12. LF

      (laughs)

    13. ST

      And that's what's great about society, that's what's great about Western society. We're not clan-based, we're not centrally based, we don't have a Soviet Union-style central government that tells us where to forage. We just forage. We start a C corp, (laughs) we get investor money, go out and try it out. And who knows who's gonna win?

    14. LF

      (laughs) I like it. When you look at the long-term vision of autonomous vehicles, do you see machine learning as fundamentally being able to solve most of the problems? So, learning from experience?

    15. ST

      I'd say we should be very clear about what machine learning is and is not, because I think there's a lot of confusion. What it is today is a technology that can go through large databases of repetitive patterns and find those patterns. As an example, we did a study at Stanford two years ago where we applied machine learning to detecting skin cancer in images. And we harvested, or built, a data set of 129,000 skin photographs, all of which had been biopsied to establish what the actual situation was. Those included melanomas and carcinomas, and also included rashes and other skin conditions, lesions. And then we had a network find those patterns, and it was, by and large, able to then detect skin cancer with an iPhone as accurately as the best board-certified, Stanford-level dermatologists. We proved that. Now, this thing was great at this one thing, finding skin cancer, but it couldn't drive a car. So the difference to human intelligence is that we do all these many, many things, and we can often learn from a very small data set of experiences, whereas machines still need very large data sets, and things have to be very repetitive. Now, that's still super impactful, because almost everything we do is repetitive, so that's gonna really transform human labor. But it's not this almighty general intelligence. We're really far away from a system that will exhibit general intelligence. To that end, I actually regret the naming a little bit, because artificial intelligence, if you believe Hollywood, is immediately mixed into the idea of human suppression and machine superiority. I don't think that we're gonna see this in my lifetime. I don't think human suppression is a good idea. I don't see it coming. I don't see the technology being there. What I see instead is a very pointed, focused pattern recognition technology that's able to extract patterns from large data sets.
      And in doing so, it can be super impactful. Let's take the impact of artificial intelligence on human work. We all know that it takes something like 10,000 hours to become an expert. If you're gonna be a doctor, or a lawyer, or even a really good driver, it takes a certain amount of time to become an expert. Machines are now able, and have been shown, to observe people becoming experts, observe the experts, and then extract those rules from the experts in some interesting way, and that could go from law to sales to driving cars to diagnosing cancer, and then give that capability to people who are completely new in their job. And that's been done. It's been done commercially in many, many instantiations. So that means we can use machine learning to make people experts on their very first day of work. Think about the impact: if your doctor is still in their first 10,000 hours, you have a doctor who's not quite an expert yet. Who would not want a doctor who is the world's best expert? And now we can leverage machines to really eradicate the error in decision-making, the error from lack of expertise, for human doctors. That could save your life.
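The pattern-finding idea described here can be caricatured in a few lines. The sketch below is purely illustrative: a 1-nearest-neighbor lookup over fabricated "lesion feature" numbers. The actual Stanford study fine-tuned a deep convolutional network on biopsy-labeled photographs, which this toy does not attempt to reproduce.

```python
# Toy illustration of "pattern recognition over a labeled database":
# classify a new sample by its closest labeled example.
# All feature values and labels below are made up for illustration.
from math import dist

labeled = [
    ((0.9, 0.8), "melanoma"),
    ((0.2, 0.1), "benign"),
    ((0.8, 0.9), "melanoma"),
    ((0.1, 0.3), "benign"),
]

def classify(features):
    """Return the label of the nearest labeled example (1-NN)."""
    return min(labeled, key=lambda ex: dist(ex[0], features))[1]

print(classify((0.85, 0.75)))  # melanoma
print(classify((0.15, 0.2)))   # benign
```

The point of the toy mirrors the point of the conversation: the system only interpolates within the repetitive patterns of its database; it has no capability outside it.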

    16. LF

      If we can linger on that for a little bit: in which way do you hope machines in the medical field could help assist doctors? You mentioned sort of accelerating the learning curve, that people who start a job, who are in their first 10,000 hours, can be assisted by a machine. How do you envision that assistance looking?

    17. ST

      So we built this app for an iPhone that can detect, classify, and diagnose skin cancer.

    18. LF

      Right.

    19. ST

      And we proved two years ago that it does pretty much as well as or better than the best human doctors. So let me tell you a story. There's a friend of mine, let's call him Ben. Ben is a very famous venture capitalist. He goes to his doctor, and the doctor looks at a mole and says, "Hey, that mole is probably harmless." And for some very funny reason, the doctor pulls out his phone with our app, he's a collaborator in our study, and the app says, "No, no, no, no, this is a melanoma." And for background, melanomas... I think skin cancer is the most common cancer in this country. Melanomas can go from stage zero to stage four within less than a year. Stage zero means you can basically cut it out yourself with a kitchen knife and be safe, and stage four means your chances of living five more years are less than 20%. So it's a very, very serious condition. So this doctor, who had taken out the iPhone, looked at it and was a little bit puzzled, and said, "You know what, just to be safe, let's cut it out and biopsy it." That's the technical term for, "Let's get an in-depth diagnostic that is more than just looking at it." And it came back as cancerous, as a melanoma, and it was then removed. And my friend Ben, I was hiking with him, and we were talking about AI, and I told him, "I do this work on skin cancer." And he said, "Oh, funny, my doctor just had an iPhone that found my cancer."

    20. LF

      (laughs)

    21. ST

      I went, "Wow." (laughs) I was completely intrigued. I didn't even know about this. So here's a person, I mean, this is a real human life, right?

    22. LF

      Yes.

    23. ST

      Like, who doesn't know somebody who has been affected by cancer? Cancer is the number-two cause of death. Cancer is the kind of disease that is mean in the following way: most cancers can actually be cured relatively easily if we catch them early. And the reason why we don't tend to catch them early is because they have no symptoms. Like, your very first symptom of a gallbladder cancer or a pancreatic cancer might be a headache, and when you finally go to your doctor because of these headaches or your back pain and you're being imaged, it's usually stage four plus, and that's the time when your chances of being cured may have dropped to a single-digit percentage. So if you could leverage AI to inspect your body on a regular basis without even a doctor in the room, maybe when you take a shower or what have you, I know that sounds creepy, but then we might be able to save millions and millions of lives.

    24. LF

      Mm-hmm. You've mentioned there's a concern that people have about near-term impacts of AI in terms of job loss. So you've mentioned being able to assist doctors, being able to assist people in their jobs. Do you have a worry of people losing their jobs or the economy being affected by the improvements in AI?

    25. ST

      Yeah, anybody concerned about job losses, please come to udacity.com. We teach contemporary tech skills, and-

    26. LF

      (laughs)

    27. ST

      ... we have a kind of implicit job promise. (laughs) When we measure, we see way over 50% of our graduates in new jobs, and they're very satisfied with them. And it costs almost nothing, like $1,500 max or something like that.

    28. LF

      And so there's a cool new programming degree, with the US government helping give scholarships that educate people in this kind of situation.

    29. ST

      Yeah, we've been working with the US government on the idea of basically rebuilding the American dream. So Udacity has just dedicated 100,000 scholarships for citizens of America, for various levels of courses that eventually will get you a job. And those courses are all somewhat related to the tech sector, because the tech sector is kind of the hottest sector right now. And they range from entry-level digital marketing to very advanced self-driving car engineering. And we're doing this with the White House because we think it's bipartisan. If you wanna really make America great, being able to be part of the solution and live the American dream requires us to be proactive about our education and our skill set. It's just the way it is today, and it's always been this way. We always had this American dream to send our kids to college, and now the American dream has to be to send ourselves to college.

    30. LF

      (laughs)

  5. 1:00:001:15:00

    And I know many…

    1. ST

      the workplace, I think we're gonna be super successful.

    2. LF

      And I know many fellow roboticists and computer scientists that I will insist take this course.

    3. ST

      (laughs) Not to be named here.

    4. LF

      (laughs) Not to be named. Many, many years ago, in 1903, the Wright brothers flew at Kitty Hawk for the first time, and you've launched a company of the same name, Kitty Hawk, with the dream of building flying cars, eVTOLs. So at the big-picture level, what are the big challenges of making this thing that actually inspired generations of people about what the future looks like? What does it take? What are the biggest challenges?

    5. ST

      So flying cars have always been a dream. Every boy, every girl wants to fly, let's be honest.

    6. LF

      Yes.

    7. ST

      And let's go back in our history, when we were dreaming of flying. I think, honestly, my single most remembered childhood dream has been a dream where I was sitting on a pillow and I could fly. I was, like, five years old.

    8. LF

      (laughs)

    9. ST

      I remember, like, maybe three dreams from my childhood, but that's the one I remember most vividly. And then Peter Thiel famously said, "They promised us flying cars, and they gave us 140 characters"-

    10. LF

      (laughs)

    11. ST

      ... pointing at Twitter, which at the time limited message size to 140 characters. So we're coming back now to really go for the super impactful stuff, like flying cars. And to be precise, they're not really cars. They don't have wheels. They're actually much closer to a helicopter than anything else. They take off vertically and they fly horizontally. But they have important differences. One difference is that they are much quieter. We just released a vehicle called Project Heaviside that can fly over you as low as a helicopter, and you basically can't hear it. It's like 38 decibels. If you were inside a library, you might be able to hear it, but anywhere outdoors, your ambient noise is higher. Secondly, they're much more affordable, much more affordable than helicopters. And the reason is, helicopters are expensive for many reasons. There are lots of single points of failure in a helicopter. There's a bolt between the blades that's called the Jesus bolt, and the reason why it's called the Jesus bolt is that if this bolt breaks, you will die. There is no second solution in helicopter flight. Whereas we have a distributed mechanism. When you go from gasoline to electric, you can now have many, many small motors as opposed to one big motor. And that means if you lose one of those motors, it's not a big deal. Heaviside has eight of them; if it loses one, with seven left it can still take off just like before and land just like before. We are now also moving into a technology that doesn't require a commercial pilot, because on some level, flight is actually easier than ground transportation. In self-driving cars, the world is full of children and bicycles and other cars and mailboxes and curbs and shrubs and what have you, all these things you have to avoid.
      When you go above the buildings and tree lines, there's nothing there. I mean, you can do the test right now: look outside and count the number of things you see flying. I'd be shocked if you could see more than two things. It's probably just zero. In the Bay Area, the most I've ever seen was six. And maybe it's 15 or 20, but not 10,000. So the sky is very ample and very empty and very free. So the vision is, can we build a socially acceptable mass transit solution for daily transportation that is affordable? And we have an existence proof. Heaviside can fly 100 miles in range with still 30% electric reserves. It can fly up to, like, 180 miles an hour. We know that that solution, at scale, would make your ground transportation ten times as fast as a car, based on US census statistics, which means you would take your 300 hours of yearly commute down to 30 hours and get 270 hours back. Who wouldn't want... I mean, who doesn't hate traffic?
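The commute arithmetic quoted here checks out with a couple of lines. This is simply the conversation's numbers (300 hours of yearly commute, a tenfold door-to-door speedup) plugged into a formula, not a model of real travel times.

```python
# Back-of-the-envelope check of the commute claim:
# a 10x speedup turns 300 hours/year into 30, returning 270 hours.

def hours_saved(yearly_commute_hours: float, speedup: float) -> float:
    """Hours returned per year if travel becomes `speedup` times faster."""
    return yearly_commute_hours - yearly_commute_hours / speedup

print(hours_saved(300, 10))  # 270.0
```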

    12. LF

      (laughs)

    13. ST

      Like, I hate... Show me the person that doesn't hate traffic. I hate traffic. Every time I'm in traffic, I hate it. And if we could free the world from traffic-

    14. LF

      (laughs)

    15. ST

      ... we have technology. We can free the world from traffic.

    16. LF

      Yeah.

    17. ST

      We have the technology. It's there. We have an existence proof. There's... It's not a technological problem anymore.

    18. LF

      Do you think there is a future where tens of thousands, maybe hundreds of thousands, of both delivery drones and flying cars of this kind, eVTOLs, fill the sky?

    19. ST

      I absolutely believe this. Obviously, societal acceptance is a major question, and of course safety. On safety, I believe we're gonna exceed ground transportation safety, as has already happened for commercial aviation. And in terms of acceptance, I think one of the key things is noise. That's why we are focusing relentlessly on noise, and we built perhaps the quietest electric VTOL vehicle ever built. The nice thing about the sky is that it's three-dimensional. Any mathematician will immediately recognize the difference between the 1D of, like, a regular highway and the 3D of the sky. But to make it clear for the layman: say you want to add 100 vertical lanes to Highway 101 in San Francisco because you believe building 100 vertical lanes is the right solution. Imagine how much it would cost to stack 100 lanes physically onto 101. That would be prohibitive; that would consume the world's GDP for an entire year, just for one highway. (laughs) It's amazingly expensive, okay? In the sky, it would just be a recompilation of a piece of software, because all these lanes are virtual. That means any vehicle that is in conflict with another vehicle would just go to a different altitude-

    20. LF

      (laughs)

    21. ST

      ... and then the conflict is gone. And if you don't believe this, that's exactly how commercial aviation works. When you fly from New York to San Francisco and another plane flies from San Francisco to New York, they're at different altitudes, so they don't hit each other. It's a solved problem for the jet space, and it will be a solved problem for the urban space. There are companies like Google's Wing and Amazon working on very innovative solutions for airspace management. They use exactly the same principles as we use today to route today's jets. There's nothing hard about this.
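The altitude-separation idea is essentially aviation's hemispheric cruising-altitude rule generalized to many virtual lanes. A toy sketch of that idea follows; the lane spacing, lane count, and altitudes are made-up illustrative values, not any real airspace standard.

```python
# Toy sketch of altitude-based deconfliction: each route heading maps to
# its own virtual altitude "lane," so opposing traffic never shares a level.
# Spacing and lane count are fabricated for illustration.

LANE_SPACING_FT = 500      # vertical gap between virtual lanes
BASE_ALTITUDE_FT = 1000    # lowest lane

def assign_altitude(heading_deg: float, num_lanes: int = 8) -> int:
    """Map a compass heading to one of `num_lanes` altitudes, so opposite
    headings (e.g. eastbound vs. westbound) land in different lanes."""
    lane = int(heading_deg % 360) * num_lanes // 360
    return BASE_ALTITUDE_FT + lane * LANE_SPACING_FT

eastbound = assign_altitude(90)    # e.g. the SF -> NY direction
westbound = assign_altitude(270)   # the NY -> SF direction
assert eastbound != westbound      # conflicting directions, different levels
```

Because the lanes are purely software, "adding a lane" is a change to this mapping rather than a construction project, which is the contrast being drawn with stacking physical highway lanes.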

    22. LF

      Do you envision autonomy being a key part of it, so that the flying vehicles are either semi-autonomous or fully autonomous?

    23. ST

      100% autonomous. You don't want idiots like me-

    24. LF

      (laughs)

    25. ST

      ... flying the sky, I promise you. And if you have 10,000... (laughs) Watch the movie The Fifth Element to get a feel for what would happen (laughs) if it's not autonomous.

    26. LF

      And a centralized... That's a really interesting idea, a centralized sort of management system for lanes and so on. So, actually being able to have something similar to what we have in current commercial aviation, but scaled up to many, many more vehicles. That's a really interesting optimization problem.

    27. ST

      It is very... Mathematically, it's very, very straightforward. Like, the gap we leave between jets is gargantuan.

    28. LF

      Yes.

    29. ST

      And part of the reason is there aren't that many jets, so it just feels like a good solution. Today, when you get vectored by air traffic control, someone talks to you, right? An ATC controller might have up to maybe 20 planes on the same frequency, and they talk to you, and you have to talk back. And it feels right, because there aren't more than 20 planes around anyhow, so you can talk to everybody. But if there are 20,000 things around, you can't talk to everybody anymore. So we have to do something digital, like text messaging. We do have solutions. We have, what, four or five billion smartphones in the world now, right?

    30. LF

      Yes.

  6. 1:15:001:18:28

    (laughs) …

    1. ST

      what we can accomplish, what we can do. We live in a world that is so incredibly, vastly changing every day. Almost everything that we cherish, from your smartphone to your flushing toilet, to all these basic inventions, the new clothes you're wearing, your watch, your plane, penicillin, anesthesia for surgery, has been invented in the last 150 years. So in the last 150 years, something magical happened. And I would trace it back to Gutenberg and the printing press, which was able to disseminate information more efficiently than before, so that all of a sudden we were able to invent agriculture and nitrogen fertilization, which made agriculture so much more potent that we didn't have to work on the farms anymore, and we could start reading and writing, and we could become all these wonderful things we are today, from airline pilot to massage therapist to software engineer. It's just amazing. Living in this time is such a blessing. We should sometimes really think about this, right? Steven Pinker, who is a very famous author and philosopher whom I really adore, wrote a great book called Enlightenment Now, and that's maybe the one book I would recommend. And he asks the question: if there was only a single article written in the 20th century, only one article, what would it be? What's the most important innovation, or the most important thing that happened? And he would say this article would credit a guy named Carl Bosch. And I challenge anybody: have you ever heard of the name Carl Bosch?

    2. LF

      (laughs)

    3. ST

      I hadn't, okay?

    4. LF

      No.

    5. ST

      There's a Bosch corporation in Germany, but it's not associated with Carl Bosch. So I looked it up. Carl Bosch invented nitrogen fertilization. And in doing so, together with an older invention, irrigation, he was able to increase the yield per unit of agricultural land by a factor of 26. That's a 2,500% increase in the fertility of land. And that, so Steven Pinker argues, has saved over two billion lives to date, two billion people who would be dead if this man hadn't done what he had done, okay? Think about that impact and what that means to society. That's the way I look at the world. I mean, it's just so amazing to be alive and to be part of this. And I'm so glad I lived after Carl Bosch and not before. (laughs)

    6. LF

      I don't think there's a better way to end it, Sebastian. It's an honor to talk to you, to have had the chance to learn from you. Thank you so much for talking today.

    7. ST

      Thanks for coming on, Lex. A real pleasure.

    8. LF

      Thank you for listening to this conversation with Sebastian Thrun. And thank you to our presenting sponsor, Cash App. Download it, use code LEXPODCAST, you'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoyed this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from Sebastian Thrun. "It's important to celebrate your failures as much as your successes. If you celebrate your failures really well, if you say, 'Wow, I failed. I tried. I was wrong, but I learned something,' then you realize you have no fear. And when your fear goes away, you can move the world." Thank you for listening, and hope to see you next time.

Episode duration: 1:18:34
Transcript of episode ZPPAOakITeQ
