Lex Fridman Podcast

Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp | Lex Fridman Podcast #383

Mark Zuckerberg is CEO of Meta.

SPONSORS: Please support this podcast by checking out our sponsors:
- Numerai: https://numer.ai/lex
- Shopify: https://shopify.com/lex to get $1 per month trial
- BetterHelp: https://betterhelp.com/lex to get 10% off

EPISODE LINKS:
Mark's Facebook: https://facebook.com/zuck
Mark's Instagram: https://instagram.com/zuck
Meta AI: https://ai.facebook.com/
Meta Quest: https://www.meta.com/quest/

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
0:28 - Jiu-jitsu competition
17:51 - AI and open source movement
30:22 - Next AI model release
42:37 - Future of AI at Meta
1:03:15 - Bots
1:18:42 - Censorship
1:33:23 - Meta's new social network
1:40:10 - Elon Musk
1:44:15 - Layoffs and firing
1:51:45 - Hiring
1:57:37 - Meta Quest 3
2:04:34 - Apple Vision Pro
2:10:50 - AI existential risk
2:17:13 - Power
2:20:44 - AGI timeline
2:28:07 - Murph challenge
2:33:22 - Embodied AGI
2:36:29 - Faith

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host) · Mark Zuckerberg (guest)
Jun 8, 2023 · 2h 41m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 – 0:28

    Introduction

    1. LF

      The following is a conversation with Mark Zuckerberg, his second time on this podcast. He's the CEO of Meta that owns Facebook, Instagram and WhatsApp, all services used by billions of people to connect with each other. We talk about his vision for the future of Meta, and the future of AI in our human world. This is the Lex Fridman Podcast, and now, dear friends, here's Mark Zuckerberg.

  2. 0:28 – 17:51

    Jiu-jitsu competition

    1. LF

      So you competed in your first jujitsu tournament and, me as a fellow jujitsu practitioner and competitor, I think that's really inspiring given, uh, all the things you have going on. So, I gotta ask, what was that experience like?

    2. MZ

      Oh, it was fun.

    3. LF

      Fun?

    4. MZ

      I know- yeah, I mean, I'm- well, look, I'm a, I'm a pretty competitive person.

    5. LF

      Yeah? (laughs)

    6. MZ

      Um, doing sports that basically require your full attention, I, I think is really important to my, like, mental health and, and the way I just stay focused at doing everything that I'm doing. So like, I decided to, to get into martial arts and it's, um, it's awesome. I got like a ton of my friends into it, we all train together, um, and we have like a mini academy in my garage.

    7. LF

      Mm-hmm.

    8. MZ

      Um, and I guess, um, you know, one of my friends was like, "Hey, uh, we should go do a tournament." I was like, "Okay. Yeah, let's do it. I'm not gonna shy away from a challenge like that." So, yeah, it was- but it was, it was awesome. It was-

    9. LF

      Y-

    10. MZ

      ... it was just a lot of fun.

    11. LF

      You weren't scared? There was no fear?

    12. MZ

      I don't know. I, I was, I was pretty sure that I'd, that I'd do okay.

    13. LF

      I like the confidence.

    14. MZ

      (laughs)

    15. LF

      Um, well, so for people who don't know, jujitsu is a martial art where you're trying to break your opponent's limbs or choke them, uh, to sleep, uh, and do so with grace and, uh, elegance and efficiency and all that kind of stuff. It's a, uh, it's a kind of art form, I think, that you can do for your whole life and it's a, basically a game, a sport of human chess you could think of.

    16. MZ

      Yeah.

    17. LF

      There's a lot of strategy, there's a lot of sort of interesting human dynamics of using leverage and all that kind of stuff. And, uh, it's kind of incredible what you could do. You can, you could do things like a small opponent could defeat a much larger opponent, and you get to understand like the way the mechanics of the human body works because of that. But you certainly can't be distracted.

    18. MZ

      No.

    19. LF

      (laughs)

    20. MZ

      You g- it's, it's a 100% focus sport.

    21. LF

      Yeah.

    22. MZ

      To, to compete, I, I, you know, I needed to get around the fact that I didn't want it to be like this, this big thing. So I basically just I, I rolled up with a hat.

    23. LF

      Mm-hmm.

    24. MZ

      And sunglasses and I was wearing a COVID mask.

    25. LF

      Mm-hmm.

    26. MZ

      And I registered under my first and middle name, so Mark Elliot.

    27. LF

      Mm-hmm.

    28. MZ

      And, um, and it wasn't until I actually like pulled all that stuff off right before I got on the mat that I think people knew it was me. So, it was, it was pretty low-key.

    29. LF

      But you're still a public figure.

    30. MZ

      Yeah, I mean, I didn't wanna lose. (laughs)

  3. 17:51 – 30:22

    AI and open source movement

    2. LF

      Well, speaking of which, let me ask you about AI. It seems like this year, for the entirety of the human civilization, is an interesting year for the development of artificial intelligence. A lot of interesting stuff is happening. So Meta is a big part of that. Uh, Meta's developed LLaMA, which is a 65 billion parameter model. Uh, there's a lot of interesting questions we can ask here, one of which has to do with open source. But first, can you tell the story of developing this model and, uh, making the complicated decision of how to release it?

    3. MZ

      Yeah, sure. I think you're right, first of all, that in the last year there have been a bunch of advances on scaling up these large transformer models. So there's the language equivalent of it with large language models, and there's sort of the image generation equivalent with these large diffusion models. There's a lot of fundamental research that's gone into this, and Meta has taken the approach of being quite open and academic in our development of AI. Part of this is we wanna have the best people in the world researching this, and a lot of the best people wanna know that they're gonna be able to share their work. So that's part of the deal that we have: if you're one of the top AI researchers in the world, you can come here, you can get access to kind of industry-scale infrastructure, and part of our ethos is that we wanna share what's invented broadly. We do that with a lot of the different AI tools that we create. And LLaMA is the language model that our research team made, and we did a limited open source release for it, right? Which was intended for researchers to be able to use, but responsibility and getting safety right on these is very important. So for the first one, there were a bunch of questions around whether we should be releasing this commercially, so we kinda punted on that for V1 of LLaMA and just released it for research. Now obviously, by releasing it for research, it's out there, but companies know that they're not supposed to kinda put it into commercial releases.
And we're working on the follow-up models for this and thinking through how exactly this should work for the follow-on, now that we've had time to work on a lot more of the safety and the pieces around that. But overall, I just kinda think that it would be good if there were a lot of different folks who had the ability to build state-of-the-art technology here, and not just a small number of big companies. To train one of these state-of-the-art AI models just takes, you know, hundreds of millions of dollars of infrastructure, right? So there are not that many organizations in the world that can do that at the biggest scale today. Now, it gets more efficient every day, so I do think that will be available to more folks over time. But I just think there's all this innovation out there that people can create, and I just think that we'll also learn a lot by seeing what the whole community of students and hackers and startups and different folks build with this. That's kinda been how we've approached this. And it's also how we've done a lot of our infrastructure. I mean, we took our whole data center design and our server design, and we built this Open Compute Project where we just made that public. Part of the theory was, like, "All right, if we make it so that more people can use this server design, then that'll enable more innovation. It'll also make the server design more efficient, and that'll make our business more efficient too." So that's worked, and we've just done this with a lot of our infrastructure.

    4. LF

      So, for people who don't know, you did the limited release, I think in February of this year, of LLaMA, and it got quote-unquote leaked, meaning, like, it escaped the limited release aspect. But that was, you know, something you probably anticipated, given that it was released to researchers.

    5. MZ

      We shared it with the researchers, so.

    6. LF

      Right, so it's-

    7. MZ

      Yeah.

    8. LF

      It's just trying to make sure that there's, like, a slow release.

    9. MZ

      Yeah.

    10. LF

      Uh, but from there, I just would love to get your comment on what happened next, which is, like, there's a very vibrant open source community that just builds stuff on top of it. There's, uh, llama.cpp, basically stuff that makes it more efficient to run on smaller computers.

    11. MZ

      Yep.

    12. LF

      Uh, there's combining with, uh, reinforcement learning with human feedback, so some of the different interesting fine-tuning mechanisms. There's then also, like, fine-tuning on GPT-3 generations. There's a lot of, uh, GPT4All, Alpaca, uh, Colossal-AI, all these kinds of models that just kinda spring up and run on top of it. What-

    13. MZ

      Yeah.

    14. LF

      Like, what, what do you think about that?

    15. MZ

      No, I think it's been really neat to see. I mean, there's been folks who are getting it to run on local devices, right? So if you're an individual who just, you know, wants to experiment with this at home, you probably don't have a large budget to get access to, like, a large amount of cloud compute, so getting it to run on your local laptop is pretty good, right, and pretty relevant. And then there are things like llama.cpp, which reimplemented it more efficiently, so even now when we run our own versions of it, we can do it on way less compute, and it's just way more efficient, saves a lot of money for everyone who uses this. So that is good. Um, I do think it's worth calling out that because this was a relatively early release, LLaMA isn't quite as on the frontier as, for example, the biggest OpenAI models or the biggest Google models. I mean, you mentioned that the largest LLaMA model that we released had 65 billion parameters, and no one knows, I guess outside of OpenAI, exactly what the specs are for GPT-4, but my understanding is it's, like, ten times bigger. And I think Google's PaLM model also has about ten times as many parameters. Now, the LLaMA models are very efficient, so they perform well for something that's around 65 billion parameters. So for me, that was also part of this, because there's this whole debate around, you know, is it good for everyone in the world to have access to the most frontier AI models? And I think as the AI models start approaching something that's like a superhuman intelligence, that's a bigger question that we'll have to grapple with. But right now, I mean, these are still very basic tools.
They're powerful in the sense that, you know, a lot of open source software, like databases or web servers, can enable a lot of pretty important things. But I don't think anyone looks at the current generation of LLaMA and thinks it's anywhere near a superintelligence. So I think that a bunch of those questions around, like, is it good to kind of get it out there, I think, at this stage, surely. You want more researchers working on it, for all the reasons that open source software has a lot of advantages, and we talked about efficiency before. But another one is just that open source software tends to be more secure, because you have more people looking at it openly and scrutinizing it and finding holes in it. And that makes it more safe. So I think at this point it's generally agreed upon that open source software is generally more secure and safer than things that are kinda developed in a silo, where people try to get security through obscurity. So I think that for the scale of what we're seeing now with AI, we're more likely to get to good alignment and a good understanding of kinda what needs to happen to make this work well by having it be open source. And that's something that I think is quite good to have out there and happening publicly at this point.

    16. LF

      Meta released a lot of models as open source, so, uh, the Massively Multilingual Speech model, the image-

    17. MZ

      Yeah, that was neat.

    18. LF

      ... PaLM model. That's, I mean, I'll ask you questions about those, but the point is, you've open sourced quite a lot. You've been spearheading the open source movement, and that's really positive, inspiring to see, from one angle, from the research angle. Of course, there's folks who are really terrified about the existential threat of artificial intelligence, and those folks will say that, you know, you have to be careful about the open sourcing step. So where do you see the future of open source here as part of Meta? The tension here is: do you want to release the magic sauce? That's one tension. And the other one is: do you want to put a powerful tool in the hands of bad actors, even though it probably has a huge amount of positive impact also?

    19. MZ

      Yeah, I mean, again, I think for the stage that we're at in the development of AI, I don't think anyone looks at the current state of things and thinks that this is superintelligence. And the models that we're talking about, the LLaMA models here, are generally an order of magnitude smaller than what OpenAI or Google are doing. So I think that, at least for the stage that we're at now, the equity is balanced strongly, in my view, towards doing this more openly. I think if you got something that was closer to superintelligence, then you'd have to discuss that more and think through that a lot more. And we haven't made a decision yet as to what we would do if we were in that position, but I think there's a good chance that we're pretty far off from that position. So I'm certainly not saying that the position that we're taking on this now applies to every single thing that we would ever do. And, you know, certainly inside the company, we probably do more open source work than most of the other big tech companies, but we also don't open source everything. A lot of the core app code for WhatsApp or Instagram or something, I mean, we're not open sourcing that. It's not a general enough piece of software that it would be useful for a lot of people to do different things. Whereas the software that we do open source, whether it's, like, an open source server design or basically things like Memcached, right? It was probably our earliest project that I worked on. It was probably one of the last things that I coded-

    20. LF

      (laughs)

    21. MZ

      ... and, and led directly for the company.

    22. LF

      Yeah.

    23. MZ

      Um, but basically, this, like, caching tool for quick data retrieval. These are things that are just broadly useful across, like, anything that you want to build. And I think that some of the language models now have that feel, as well as some of the other things that we're building, like the translation tool that you just referenced.

    24. LF

      So, text-to-speech and speech-to-text, you've expanded it from around a hundred languages to more than 1,100 languages.

    25. MZ

      Yeah.

    26. LF

      And you can identify more than... The model can identify more than 4,000 spoken languages, which is 40 times more than any known previous technology. To me, that's really, really, really exciting in terms of connecting the world, breaking down barriers that language creates.

    27. MZ

      Yeah, I think being able to translate between all of these different languages in real time. And this has been a kind of common sci-fi idea, you know, whether it's, I don't know, an earbud or glasses or something that can help translate in real time between all these different languages. And that's one that I think technology is basically delivering now. So yeah, I think that's pretty exciting.

  4. 30:22 – 42:37

    Next AI model release

    2. LF

      Uh, you mentioned the next version of LLaMA. What can you say about the next version of LLaMA?

    3. MZ

      Uh-

    4. LF

      What can you say about, like, what you're working on in terms of release, in terms of the vision for that?

    5. MZ

      Well, a lot of what we're doing is taking the first version, which was primarily, you know, this research version...

    6. LF

      Mm-hmm.

    7. MZ

      And trying to now build a version that has all of the latest state-of-the-art safety precautions built in. And we're using some more data to train it from across our services, but a lot of the work that we're doing internally is really just focused on making sure that this is as aligned and responsible as possible. We were talking about kind of the open source infrastructure, but the main thing that we focus on building here is a lot of product experiences to help people connect and express themselves. So, you know, I've talked about a bunch of this stuff, but you'll have an assistant that you can talk to in WhatsApp. I think in the future every creator will have kind of an AI agent that can act on their behalf that their fans can talk to. I wanna get to the point where every small business basically has an AI agent that people can talk to, you know, to do commerce and customer support and things like that. So there are gonna be all these different things, and LLaMA, or the language model underlying this, is basically gonna be the engine that powers that. The reason to open source it, as we did with the first version, is that it basically unlocks a lot of innovation in the ecosystem, will make our products better as well, and also gives us a lot of valuable feedback on security and safety, which is important for making this good.
Yeah, I mean, the work that we're doing to advance the infrastructure, it's basically, at this point, taking it beyond a research project into something which is ready to be kind of core infrastructure, not only for our own products but, you know, hopefully for a lot of other things out there too.

    8. LF

      Do you think the LLaMA or the language model underlying that version two will be open sourced? D- d- do you, do you have internal debate around that? The pros and cons and so on?

    9. MZ

      This is, I mean, we were talking about the debates that we have internally and I think, um... I think the question is how to do it.

    10. LF

      Mm-hmm.

    11. MZ

      Right? I mean, we did the research license for v1, and I think the big thing that we're thinking about is basically, like, what's the right way?

    12. LF

      So there was a leak that happened, I don't know if you can comment on it for v1.

    13. MZ

      You know, we released it as a research project, um, for researchers to be able to use-

    14. LF

      Mm-hmm.

    15. MZ

      But in doing so, we put it out there. So we were very clear that anyone who uses the code and the weights doesn't have a commercial license to put it into products, and we've generally seen people respect that, right? It's like you don't have any reputable companies that are basically trying to put this into their commercial products. But yeah, by sharing it with so many researchers, it's, you know-

    16. LF

      Yeah.

    17. MZ

      It did leave the building.

    18. LF

      But, uh, what have you learned from that process that you might be able to apply to v2 about how to release it safely, effectively, uh, if-

    19. MZ

      Yeah.

    20. LF

      ... if you release it?

    21. MZ

      Yeah, well, I mean, I think a lot of the feedback, like I said, is just around, you know, different things around, how do you fine-tune models to make them more aligned and safer? And you see all the different data recipes that... You mentioned a lot of different projects that are based on this. I mean, there's one at Berkeley, there's... You know, it's just, like, all over, and people have tried a lot of different things, and we've tried a bunch of stuff internally. So we're making progress here, but also we're able to learn from some of the best ideas in the community, and I think we wanna just continue pushing that forward, but-

    22. LF

      So, like-

    23. MZ

      But I don't have any news to announce-

    24. LF

      Oh, right. Oh, right.

    25. MZ

      ... on, on this, if that, if that's, if that's what you're, you're asking.

    26. LF

      All right.

    27. MZ

      I mean, this is, uh, a thing that we're still kind of, you know, actively working through, the right way to move forward here.

    28. LF

      The details of the secret sauce are still being developed, I see. Uh, can you comment on... What do you think of, uh, the thing that worked for GPT which is the reinforcement learning with human feedback? So, doing this alignment process, do you find it interesting? And as part of that, let me ask 'cause I talked to Yann LeCun before talking to you today. He asked me to ask, or suggested that I ask, do you think LLM fine-tuning will need to be crowd-sourced Wikipedia style? So crowdsourcing. So this kind of idea of how to integrate the human in the fine-tuning of these foundation models.

    29. MZ

      Yeah, I think that's a really interesting idea that I've talked to Yann about a bunch. And you were talking about how do you basically train these models to be as safe and aligned and responsible as possible, and, you know, different groups out there who are doing development test different data recipes in fine-tuning. But this idea that you just mentioned is that, at the end of the day, instead of having kind of one group fine-tune some stuff and then another group, you know, produce a different fine-tuning recipe and then-

    30. LF

      Mm-hmm.

  5. 42:37 – 1:03:15

    Future of AI at Meta

    2. LF

      So I don't know if you know but, uh, you know, what is it? Over three billion people use WhatsApp, Facebook, and Instagram. Uh, so any kind of AI fueled products that go into that, like we're talking about, anything with LLMs will have a tremendous amount of impact. Do, do you have ideas and thoughts about possible products that might-

    3. MZ

      Yeah.

    4. LF

      ... start being integrated into, uh, into these platforms used by so many people?

    5. MZ

      Yeah. I think there are three main categories of things that we're working on. The first, that I think is probably the most interesting, is, you know, there's this notion of, like, you're gonna have an assistant or an agent who you can talk to. And I think probably the biggest thing that's different about my view of how this plays out from what I see with OpenAI and Google and others is, you know, everyone else is building, like, the one singular AI, right? It's like, okay, you talk to ChatGPT, or you talk to Bard, or you talk to Bing. And my view is that there are going to be a lot of different AIs that people are gonna want to engage with, just like you wanna use a number of different apps for different things, and you have relationships with different people in your life who fill different emotional roles for you. So I think there are gonna be a lot of these, and I think you don't just want, like, a singular AI. And that, I think, is probably the biggest distinction in terms of how I think about this. And for a bunch of these things, I think you'll want an assistant. I mean, I mentioned a couple of these before. I think, like, every creator who you interact with will ultimately want some kind of AI that can proxy them and be something that their fans can interact with or that allows them to-

    6. LF

      Mm-hmm.

    7. MZ

      ... interact with their fans. But this is, like, the common creator problem: everyone's trying to build a community and engage with people, and they want tools to be able to amplify themselves more and be able to do that. But you only have 24 hours in a day. So I think having the ability to basically, like, bottle up your personality, or, you know, give your fans information about when you're performing a concert or something like that, that I think is gonna be something that's super valuable. But it's not just that. You know, again, it's not this idea that people are gonna want just one singular AI. I think you're gonna wanna interact with a lot of different entities. And then I think there's the business version of this, too, which we've touched on a couple of times, which is, you know, I think every business in the world is gonna want basically an AI. It's like you have your page on Instagram or Facebook or WhatsApp or whatever, and you wanna point people to an AI that people can interact with-

    8. LF

      Mm-hmm.

    9. MZ

      ... but you wanna know that that AI is only gonna sell your products. You don't want it, you know-

    10. LF

      (laughs)

    11. MZ

      ... recommending your competitor's stuff, right?

    12. LF

      Yeah.

    13. MZ

      So it's not like there can be just, you know, one singular AI that can answer all the questions for a person, because, like, that AI might not actually be aligned with you as a business to-

    14. LF

      Mm-hmm.

    15. MZ

      ... to really just do the best job providing support for your product. So I think that there's gonna be a clear need, in the market and in people's lives, for there to be a bunch of these.

    16. LF

      Part of that is figuring out the research, the technology that enables the personalization that you're talking about. So not one centralized god-like LLM, but one just, uh, a huge diversity of them that's-

    17. MZ

      Yeah.

    18. LF

      ... fine-tuned to particular needs, particular styles, particular businesses, particular brands, all that kinda stuff. And so-

    19. MZ

      And also enabling, just enabling people to create them really easily for the-

    20. LF

      Yeah.

    21. MZ

      ... you know, for your own business, or if you're a creator, to be able to help you engage with your fans. So yeah, I think that there's a clear kind of interesting product direction here that I think is fairly unique from what any of the other big companies are taking. It also aligns well with this sort of open source approach, because again, we sort of believe in this more community-oriented, more democratic approach to building out the products and technology around this. We don't think that there's gonna be the one true thing. We think that there should be kind of a lot of development. So that part of things, I think, is gonna be really interesting. And we could probably spend a lot of time talking about that and the kind of implications of that approach being different from what others are taking. But then there's a bunch of other simpler things that I think we're also gonna do, just going back to your question around how this finds its way into what we build. There are gonna be a lot of simpler things around, okay, you post photos on Instagram and Facebook and WhatsApp and Messenger, and, like, you want the photos to look as good as possible. So, like, having an AI where you can just take a photo and then just tell it, like, "Okay, I wanna edit this thing," or, "Describe this." I think we're gonna have tools that are just way better than what we've historically had on this. And that's more on the image and media generation side than the large language model side, but it all kind of plays off of advances in the same space. So there are a lot of tools that I think are just gonna get built into every one of our products.
I think every single thing that we do is gonna basically get evolved in this direction, right? It's like, in the future, if you're advertising on our services, do you need to make your own kind of ad creative? No, you'll just tell us, "Okay, I'm a dog walker and I'm willing to walk people's dogs. Help me find the right people and, like, create the ad unit that will perform the best." You give an objective to the system and it just kind of, like, connects you with the right people.

    22. LF

      Well, that's a super powerful idea of generating the language, almost like, uh, rigorous A/B testing for you-

    23. MZ

      Yeah.

    24. LF

      ... uh, that works-

    25. MZ

      Yeah.

    26. LF

      ... to find the, the best customer for your thing. I mean, to me, advertisement, when done well, just finds a good match between a human being and a thing that will make that human being happy. (laughs)

    27. MZ

      Yeah, totally.

    28. LF

      And do that ef- as efficiently as possible.

    29. MZ

      When it's done well, people actually like it.

    30. LF

      Yeah.

  6. 1:03:151:18:42

    Bots

    1. MZ

    2. LF

      Do you worry about some of the concerns of bots being present on social networks? More and more human-like bots that are not necessarily trying to do a good thing, or they might be explicitly trying to do a bad thing, like phishing scams-

    3. MZ

      Yeah.

    4. LF

      ... like social engineering, all that kinda stuff, which has always been a- a very difficult problem-

    5. MZ

      Yeah.

    6. LF

      ... for social networks, but now it's becoming almost a more and more difficult problem.

    7. MZ

      Well, I think there- there's a few different parts of- of this. So one is, there are all these harms that we need to basically fight against and prevent. And- and that's been, you know, a lot of our focus over the last, you know, five or seven years, is basically ramping up very sophisticated AI systems, not generative AI systems, more kind of classical AI systems, to be able to, um, you know, categorize and, um, classify and identify, okay, this- this post looks like it's, um, promoting terrorism. This one is, you know, like, exploiting children. This one is, um, looks like it might be trying to incite violence. This one's an i- an intellectual pro- uh, property violation. So there's- there's like, it's like 18 different categories of- of violating kind of harmful content that we've had to build specific systems to be able to track and-

    8. LF

      Yeah.

    9. MZ

      ... um, I think it's certainly the case that advances in generative AI will test those. Um, but at least so far, it's been the case, and- and I'm optimistic that it will continue to be the case, that we will be able to bring more computing power to bear to have even stronger AIs that can help defend against those things. So, um, we've- we've had to deal with some adversarial issues before, right? It's, I mean, for- for some things like hate speech, it's like people aren't generally getting a lot more sophisticated. Like, the average person who, let's say, you know, if someone's saying some kind of racist thing, right? It's like, they're not necessarily getting more sophisticated at being racist, right? It just, it's okay, so that, the system can just find. But then there's other adversaries who actually are very sophisticated, like nation-states doing things, and, you know, we find, you know, whether it's Russia or, you know, d- just different countries that are basically standing up these networks of, um, of bots or- or, um, you know, inauthentic accounts is what w- is what we call them, 'cause they're not necessarily bots. They're- some of them could actually be real people who are kinda masquerading as other- as other people, um, but they're acting in a- in a coordinated way. And some of that behavior has gotten very sophisticated and it's very adversarial, so they, you know, each iteration, every time we find something and stop them, um, they kind of evolve their behavior. They don't just pack up their bags and go home and say, "Okay, we're not gonna try." You know, at some point, they might decide doing it on Meta's services is not worth it. They'll go do it on someone else if it's easier to do it in another place but, um, but we have a fair amount of experience dealing with even those kind of adversarial attacks where they just keep on getting better and better. 
And I- I do think that as long as we can keep on putting more compute power against it and- and- and if we're kinda one of the leaders in developing some of these AI models, I'm- I'm quite optimistic that we're gonna be able to keep on, um, pushing against the kind of normal categories of harm that you talk about: fraud, scams, spam, um, IP violations, things like that.

    10. LF

      What about, like, creating narratives and controversy? To me, it's kind of amazing how a small collection of...

    11. MZ

      Yeah.

    12. LF

      ... uh, what did you say? Inauthentic accounts, so it could, it could be bots, but it could be-

    13. MZ

      Yeah, I mean, we have sort of this funny name for it, but we call it coordinated inauthentic behavior.

    14. LF

      Yeah. It's, it's kind of incredible how a small collection of folks can create narratives, create stories-

    15. MZ

      Yeah.

    16. LF

      ... uh, e- especially if they're viral, so if they... especially if they have an element that can, uh, catalyze the virality of the narrative.

    17. MZ

      Yeah, and I think there the question is you have to be, I think, very specific about what is bad about it, right? Because I think a set of people coming together or organically bouncing ideas off of each other and a narrative comes out of that is not necessarily a bad thing by itself if it's, if it's kind of authentic and organic.

    18. LF

      Mm-hmm.

    19. MZ

      That's like a lot of what happens and how culture gets created and how art gets created and a lot of good stuff, so that's why we've kind of focused on this sense of coordinated inauthentic behavior. So it's like if you have a network of, you know, whether it's bots, some, some people masquerading as different accounts, um, but you have kind of someone pulling the strings behind it, um, and trying to kind of act as if this is a more organic set of behavior, but really it's not, it's just like one coordinated thing, that seems problematic to me, right? I mean, I, I don't think people should be able to have coordinated networks and not disclose it as such.

    20. LF

      Mm-hmm.

    21. MZ

      Um, but that again, you know, we've been able to deploy pretty sophisticated AI and, you know, counter-terrorism groups and things like that to be able to identify a fair number of these, um, coordinated inauthentic networks of, of accounts and, and take them down. Um, and we continue to do that, and I think we're, we're... we've... you know, it's one thing that if you'd told me 20 years ago, it's like, "All right, you're starting this website to help people connect at a college and, you know, in the future you're gonna be, you know, part of your organization is gonna be a counter-terrorism organization with AI to, to find coordinated inauthentic n-" I would have thought that was pretty wild. But, um, but, but it's, um-

    22. LF

      There's a l-

    23. MZ

      But no, I think that that's, that's part of where we are. But, but look, I, I think that these questions that you're pushing on now, um... This is actually where I'd guess most of the challenge around AI will be for the foreseeable future. I think that there's a lot of debate around things like, is this going to create existential risk to humanity?

    24. LF

      Mm-hmm.

    25. MZ

      And I think that those are very hard things to disprove one way or another. My, my own intuition is that the point at which we become close to super intelligent, uh, is, uh, super intelligence is, um, I, I... it's, it's just really unclear to me that the current technology is gonna, gonna get there without another set of, of significant advances, but that doesn't mean that there's no danger. I think the danger is basically amplifying the kind of known set of, of harms that people or, or sets of accounts can do and we just need to make sure that we really focus on, um, on, on, on basically doing that as well as possible. So that, that's a, that's definitely a big focus for me.

    26. LF

      Well, you can basically use large language models as an assistant of how to cause harm on social networks. You can ask it a question, um, "You know, Meta has very impressive coordinated inauthentic account, uh, fighting capabilities. How do I do the coordinated inauthentic account, uh, creation where Meta doesn't detect it?" Like literally ask that question (laughs) and ba- and basically there's this kinda-

    27. MZ

      Yeah.

    28. LF

      ... um, part of it, I mean, that's what OpenAI showed, that they're concerned with those questions. Uh, perhaps you can comment on your approach to it, how to do a kind of moderation on the output of those models, that it can't be used to help you coordinate harm in all the full definition of what the harm means.

    29. MZ

      Yeah, and that's a lot of the fine tuning and the, the alignment training that we do is basically, you know, when we, when we ship AIs across the... our products, a lot of what we're trying to make sure is that, you know, if... you can't ask it to help you commit a crime, right? It's, um, uh... So I think training it to kind of understand that and... It's not that... It's not like any of these systems are ever gonna be 100% perfect, but, you know, just making it so that this isn't a, an easier way to go about doing something bad than the next best alternative, right? I mean, people still have Google, right? They... you know, you still have search engines, so-

    30. LF

      Mm-hmm.

Episode duration: 2:41:58

Transcript of episode Ff4fRgnuFgQ
