Lenny's Podcast

Keith Coleman & Jay Baxter: How bridging finds neutral truth

Community Notes uses bridging-based scoring that rewards agreement between users who normally disagree; only about 7% of proposed notes ever ship, and Meta has now adopted the algorithm.

Lenny Rachitsky (host) · Keith Coleman (guest) · Jay Baxter (guest)
Feb 27, 2025 · 1h 47m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–6:56

    Introduction to Community Notes

    1. LR

      (instrumental music plays) The work that you guys do has had such a tremendous impact on the way the world works. I want to start with just giving people a brief understanding of what is Community Notes.

    2. KC

      Someone on X can see a post. If they think it's misleading, they can propose a note that they think other people might find informative. Other people can then rate that note.

    3. JB

      We actually look for agreement from people who have disagreed in the past. And, and what we see is when people actually have that sort of surprising agreement, that's what makes the notes so neutral and, and accurate and well-written really overall.

    4. LR

      There's many people that are very polarized. How do you deal with people that are like super anti-vax, super Jan 6th?

    5. KC

      One philosophical thing that's important is that we want all of humanity to participate. And sometimes people are surprised by that. We have all of humanity. We then have the data to understand what notes will be helpful to actual humanity. Every post is eligible for notes. We shouldn't exempt Elon, we shouldn't exempt government figures, we should... Like, everyone, even advertisers can get notes.

    6. JB

      There have been external studies, you know, run by people totally independent of us who have found that if you take a post with or without a Community Note, that actually people's agreement with the core claims in the post does change if they see it with the note versus without.

    7. LR

      Is there anything else along the lines of just working for Elon within an org Elon runs that might surprise people?

    8. KC

      If I were to start a company, that company, it would be even leaner than I would have made it before. I've been amazed with just how much the team is able to accomplish with a small group, and I think because of a small group.

    9. LR

      (instrumental music plays) Today my guests are Keith Coleman, product lead for Community Notes, and Jay Baxter, founding ML engineer and researcher for Community Notes. This conversation may be my newest favorite podcast episode so far. Community Notes is one of the most impactful and clever and also underappreciated products in the world right now. If you ever use X/Twitter and you see a note underneath a tweet correcting the misinformation in that tweet, that is Community Notes. I've never heard a deep dive into the story behind the product and the team that built it, and I'm excited to bring you just that. We get into the surprising origin story of the product, how the algorithm actually works, how the algorithm emerged out of an internal contest within Twitter, the principles behind Community Notes and why staying true to them has been so key to its success, also how it survived four different leaders including Elon and Jack, and why it's now a big part of the solution to solving misinformation on the internet, including recently being adopted by Meta as their main fact-checking tool. This is an incredibly special episode, and I'm so excited to bring it to you. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become a subscriber of my newsletter, you now get a year free of Notion and Superhuman and Granola and Linear and Perplexity Pro. Check that out at lennysnewsletter.com. With that, I bring you Keith Coleman and Jay Baxter. This episode is brought to you by WorkOS. If you're building a SaaS app, at some point your customers will start asking for enterprise features like SAML authentication and SCIM provisioning. That's where WorkOS comes in, making it fast and painless to add enterprise features to your app. Their APIs are easy to understand so that you can ship quickly and get back to building other features. 
Today, hundreds of companies are already powered by WorkOS, including ones you probably know, like Vercel, Webflow, and Loom. WorkOS also recently acquired Warrant, the fine-grain authorization service. Warrant's product is based on a groundbreaking authorization system called Zanzibar, which was originally designed for Google to power Google Docs and YouTube. This enables fast authorization checks at enormous scale while maintaining a flexible model that can be adapted to even the most complex use cases. If you're currently looking to build role-based access control or other enterprise features like single sign-on, SCIM, or user management, you should consider WorkOS. It's a drop-in replacement for Auth0 and supports up to one million monthly active users for free. Check it out at workos.com to learn more. That's workos.com. This episode is brought to you by Productboard, the leading product management platform for the enterprise. For over 10 years, Productboard has helped customer-centric organizations like Zoom, Salesforce, and Autodesk build the right products faster. And as an end-to-end platform, Productboard seamlessly supports all stages of the product development life cycle, from gathering customer insights, to planning a roadmap, to aligning stakeholders, to earning customer buy-in, all with a single source of truth. And now, product leaders can get even more visibility into customer needs with Productboard Pulse, a new voice of customer solution. Built-in intelligence helps you analyze trends across all of your feedback, and then dive deeper by asking AI your follow-up questions. See how Productboard can help your team deliver higher impact products that solve real customer needs and advance your business goals. For a special offer and free 15-day trial, visit productboard.com/lenny. That's productboard.com/L-E-N-N-Y. Keith and Jay, thank you so much for being here, and welcome to the podcast.

    10. KC

      It's great to be here.

    11. LR

      Thanks so much.

    12. JB

      Thanks for having us on.

    13. LR

      It's so my pleasure. I'm so thrilled to be having this conversation. The work that you guys do has had such a tremendous impact on the way the world works. So many product teams are always talking about driving impact, "I want to drive impact." Like, you guys have actually built things that have changed the world in meaningful ways and continue to do that. And I've never really heard the backstory of how Community Notes came to be and how it works and all these things. So, I'm really appreciative of you guys making time to chat.

    14. KC

      Yeah. First, you know, thanks for saying that. That's, that's why we built this thing, uh, is to help people. And it, it's great to hear, and it's great to see people enjoying it and finding it useful.

    15. LR

I want to start with just giving people, uh, a brief understanding of what is Community Notes. I think a lot of people may have kind of heard about it, kind of maybe see it on X. As they scroll through they see these notes, but they're like, "I don't actually know what this is." So can you just kind of briefly describe what is Community Notes?

    16. KC

      Community Notes is a way for the people, like the public, to add context to posts that might be misleading. The basic way it works is that, uh, someone on X can see a post. If they think it's misleading, they can propose a note that they think other people might find informative. Other people can then rate that note, and if the note is found helpful by people who normally disagree with each other, indicating that it's probably accurate, it's probably really neutrally-worded, it's probably informative, then it will show to everyone on X. And the goal is just to get people more information about what they're seeing so they can make better decisions in their lives.

  2. 6:56–13:33

    How the “bridging-based” algorithm works

    2. LR

      Amazing. And I think, like, hearing this, it's like absurd that this works. I think when people originally heard this idea, like, "No way this is gonna work." And so just to dive a little bit deeper, can you give us a sense, a- a deeper understanding of how it actually works? Because I think it's the algorithm that you guys designed that is so clever that allowed this to work. So talk a little bit about that algorithm.

    3. JB

      Yeah, so, so I think a key misunderstanding a lot of people have, if they haven't really dived into details, is they kind of just think that maybe someone can write a note and it appears immediately, or, or we're just taking a majority rules vote, uh, of who thinks the note's good. I think both of those approaches would probably lead to biased or inaccurate notes. I think the key thing, uh, really that we do is we actually look for agreement from people who have disagreed in the past. Uh, and, and what we see is when people actually have that sort of surprising agreement, that's, that's what makes the notes, uh, so neutral and, and accurate and well-written really overall, um, is just that people who are very polarized, um, overall, uh, often can't find agreement when things aren't accurate, right? I, I, I think it also provides some good anti-manipulation properties, I think. People are often... You know, if you said, I think like back in 2020 before we started building anything here, whether this could work at all, I think a room of ML engineers would say, "Oh, you have to keep it closed source. You know, people are gonna be manipulating this all the time. You have to use ground truth labels from fact-checkers. There's no way that you could, like, bootstrap the system without external labels." Uh, but it turns out that you can do that, um, with, with this kind of bridging-based agreement algorithm is what we call it.
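The bridging idea Jay describes maps onto the matrix-factorization approach mentioned later in the conversation: each rating is modeled as a baseline plus user and note intercepts plus a viewpoint-alignment term, and a note's helpfulness is its intercept, i.e. the support left over after viewpoint agreement is explained away. Below is a minimal toy sketch of that idea; the data, dimensions, and hyperparameters are invented for illustration and are not the production model.

```python
import numpy as np

# Toy "bridging" score: model each rating as
#   rating ≈ mu + user_intercept + note_intercept + user_factor · note_factor
# A note's helpfulness is its intercept: the part of its ratings NOT
# explained by viewpoint alignment (the factor term).
# Hypothetical data: 4 raters (2 per "side"), 2 notes. Note 0 is rated
# helpful (1) by both sides; note 1 only by one side.
ratings = [  # (user, note, rating in {0, 1})
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),
]
n_users, n_notes, dim = 4, 2, 1

rng = np.random.default_rng(0)
mu = 0.0
iu = np.zeros(n_users)       # user intercepts
inote = np.zeros(n_notes)    # note intercepts = helpfulness scores
fu = rng.normal(0, 0.1, (n_users, dim))  # user viewpoint factors
fn = rng.normal(0, 0.1, (n_notes, dim))  # note viewpoint factors

lr, lam = 0.05, 0.03  # learning rate, L2 regularization
for _ in range(4000):
    for u, n, r in ratings:
        pred = mu + iu[u] + inote[n] + fu[u] @ fn[n]
        err = r - pred
        mu += lr * err
        iu[u] += lr * (err - lam * iu[u])
        inote[n] += lr * (err - lam * inote[n])
        fu_old = fu[u].copy()
        fu[u] += lr * (err * fn[n] - lam * fu[u])
        fn[n] += lr * (err * fu_old - lam * fn[n])

# The bridging note (helpful across the divide) earns a higher intercept
# than the one-sided note, whose support the factor term explains away.
print(inote)  # expect inote[0] > inote[1]
```

A plain majority vote would score both notes similarly (note 1 still has 50% support); the factor term is what lets the model discount support that tracks a viewpoint divide.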

    4. LR

Okay, so just to summarize and make it super clear is basically people... Someone writes a note, misinformation is found. What's like a good example just as we talk about this, like a classic example?

    5. KC

      A really, really classic example is an AI-generated image or an out-of-context image, like, "Look what's happening here," but it's actually from like five years ago in a different country and a different topic or something.

    6. LR

      Oh man, I've seen those so many times where it's like, "Look what's happening in San Francisco," and like, "No, this is a, a whole different city and that's not real."

    7. KC

      Totally. Yeah.

    8. LR

      Yeah, okay. Okay, so someone posts this AI image. Uh, someone writes a note, and this is actually five years ago in a different city, and this algorithm helps understand if this is a real... if this is true, if this note is true, and it's just people, regular people doing this.

    9. JB

      Yep, yep. Uh, regular people, uh, who have signed up to be Community Notes contributors. So, you know, there are a few checks, like you do have to have a verified phone number, for instance, um, but yeah at the end of the day these are regular people, uh, not necessarily professional fact-checkers or anything like that.

    10. KC

      And, you know, that was like... that was really important to us too. Like there was a question at the beginning, to the point Jay was making of like, "Well, did anyone think this was gonna work?" Obviously, it was kind of a crazy idea. We didn't know if regular people were gonna be able to do this task and certainly, other... you know, people had concerns about whether they would do it, do it effectively. Initially, some people inside the company were suggesting like, "Hey, why don't you have journalists or, you know, some select group be the first participants?" But very specifically, we're like, "No, that's like... we're trying to move away from the idea of curated editorial decisions being made around this. This is supposed to be open to everyone." So it is very... We very intentionally try to allow all humans in. There... People are randomly selected, um, and that's important to it, you know, feeling fair, feeling open, feeling trustable.

    11. LR

      Yeah, and again, it's just like this sounds like the holy grail of understanding what is true and it actually works and works so well that Meta recently, as you all know, uh, decided to, uh, adopt this exact system for them in- instead of having tens of thousands of fact-checkers reviewing things.

    12. JB

      One distinction that I would make which may- maybe can come off as nitpicky but I think is important is, uh, Community Notes adds additional context. It's not fact-checking necessarily, right? So, so there are cases where the post could be true but maybe it's just misleading because there's, there's no context, uh, or there's missing context, uh, and, you know, we cover those cases, and I think that's kind of an important distinction. We also... We just have the philosophy that users should be able to make up their own minds, right? Like here's the... here's extra context, take it or leave it, right?

    13. LR

      Yeah, what I think about, you shared this with me, this example of a, uh, a picture with a, with a cat and somebody's Community Note was just, "That's a dog," or is it the other way around? Or, "That's a cat."

    14. JB

Yeah, yeah. "Palestinian boy shares his bread with a dog," was the post, and it's a picture of this cat, right? Uh, so like obviously this particular note is not super necessary 'cause it just says, "That's a cat," and links to Wikipedia for cat. Uh, it's kind of a good example that, uh, that like the system is... This is not something a, a professional fact-checker or whatever, right, would think needs fact-checking, but it's proof that the system is really run by the users at the end of the day, uh, and, and adds some comic relief I guess. Uh, and you know, it's... The note is correct.

    15. LR

      And it could... You know, it's important. When does a post-... get triggered to even be considered for a community note? Is there, like, a threshold or is just you can write a community note on anything and people decide what they want

    16. KC

      So, every post is eligible for notes. Um, it's... And that was, again, another really important principle. It's like, it... We shouldn't exempt, uh, Elon, we shouldn't exempt government figures, we should... You know, like, everyone, even advertisers can get notes. So, any posts on the platform can get a note. And if you look in practice, um, you'll see notes appearing on world leaders, on Elon, on ads, on media organizations and on obviously, like, just regular people using social media. But, yeah, the idea is really that it's an even playing field. For a note to be proposed, the person proposing has to have earned the ability to write notes. So there is, there is that, that aspect where you have to, like, earn in to be able to do this. And the way you earn that ability is, is through your ratings by demonstrating the ability to help identify notes that are found helpful to a broad range of people. So basically, like, if you have an ability to, to sort of see and know, recognize what's helpful to a lot of people, then you, you have the ability to start proposing notes.
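The earn-in rule Keith describes, where contributors start rating-only and unlock writing by demonstrating their ratings help identify broadly helpful notes, can be sketched roughly as a running score. The +1/-1 scoring and the unlock threshold below are assumptions for illustration, not the production formula.

```python
# Hypothetical earn-in rule: contributors unlock note writing by building
# a track record of ratings that agreed with the eventual bridged outcome.
def rating_impact(rating_history):
    """rating_history: list of (my_vote, final_note_status) pairs,
    each value being 'helpful' or 'not helpful'."""
    score = 0
    for vote, status in rating_history:
        if vote == "helpful" and status == "helpful":
            score += 1   # helped surface a note that proved broadly helpful
        elif vote == "helpful" and status == "not helpful":
            score -= 1   # endorsed a note the bridged pool rejected
    return score

def can_write_notes(rating_history, unlock_at=5):
    return rating_impact(rating_history) >= unlock_at

history = [("helpful", "helpful")] * 6 + [("helpful", "not helpful")]
print(can_write_notes(history))  # True: net impact of 5 clears the bar
```

This also captures why, as Lenny found, earning the ability "takes some effort": a new contributor has no track record, so the score starts at zero.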

    17. LR

      I actually signed up to be a, an, uh... What do you, what do you call these people? Note, note, note takers?

    18. KC

      Contributors.

    19. LR

      Note con-... Oh, yeah, contributors.

    20. KC

      Yeah, yeah.

    21. LR

      Yes, I've been waiting. I haven't achieved-

    22. KC

      Nice.

    23. LR

      ... uh, can't write notes yet. Um-

    24. KC

      Yeah, it's not super easy. It takes, it takes some effort.

  3. 13:33–17:24

    The impact and scale of Community Notes

    2. LR

      Mm-hmm. Are there stats you can share about the scale of community notes at this point, especially things that might surprise people?

    3. KC

      Yeah. Um, I mean, the service is growing rapidly, so there are hundreds of notes per day. And to put that into context, I saw some stats recently from someone at UC Berkeley saying there, there were something like 10 fact checks, traditional fact checks a day. So, in contrast, there's hundreds of notes a day, um, that are getting shown. They span a huge range of topics from obviously politics, news, um, out to entertainment, sports, gaming, just whatever's going on that day. In addition to there being hundred of the-... Hundreds of these individual notes, they can also be matched to multiple posts. So, if someone writes a note on an image or a video, like let's say it's AI generated or something like that, that note will automatically be matched to all posts that contain the same image. So you can have a single note matching to thousands of posts. And over the, let's say the last, the last year, 2024, we had something like 95,000 notes that were seen about 30 billion times. That's more than double the prior years. Prior year was something like 37K notes seen 14 billion times. So, that rate is increasing dramatically. I mean, think about it, like 30 billion views. That's a lot of information that is getting out there that might not have been out there otherwise, which is pretty cool. And the part of the reason it is expanding like that is the contributor base is expanding. Um, there's something like 950,000 contributors around the world. That's, you know, nearing a million people making this happen, um, which is amazing.
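The note-to-media matching Keith mentions, where one note on an image automatically covers every post carrying that image, can be pictured as a lookup keyed on the media itself. The sketch below uses a plain content hash as the join key; the production system presumably uses a more robust perceptual match so re-encoded copies still hit, and all names here are invented.

```python
import hashlib

# Minimal sketch of note-to-media matching, assuming a content hash as the
# join key. A plain sha256 only catches byte-identical copies; a real
# system would likely use perceptual hashing to survive re-encoding.
notes_by_media = {}  # media hash -> note text

def media_key(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def attach_note(image_bytes: bytes, note: str) -> None:
    notes_by_media[media_key(image_bytes)] = note

def notes_for_post(image_bytes: bytes):
    # Every post carrying the same media picks up the note automatically,
    # which is how a single note can match thousands of posts.
    return notes_by_media.get(media_key(image_bytes))

fake = b"ai-generated-flood-photo"
attach_note(fake, "This image is AI-generated; the event did not occur.")
print(notes_for_post(fake))      # the same media in a new post gets the note
print(notes_for_post(b"other"))  # unrelated media: None
```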

    4. LR

      Wow. And I'm one of those, right? Like, I count as a contributor.

    5. KC

      Yeah, yeah.

    6. LR

      Okay, okay.

    7. KC

      Yeah, if you're signed up as a contributor, you count.

    8. LR

      Okay. Okay, cool.

    9. KC

      And then there's more people on the wait list too. Uh, so there, there's plenty of headroom for more growth. Reg- regarding the, the, like, the matching, uh, the... On, on vid-... Media and URLs, um, I think that's a huge way to get extra coverage. Also, I, I do think, uh, we've been very careful to make sure that those matches are, uh, precise, 'cause I think one thing that people love about community notes compared to other types of fact checking is that actually the notes are custom written for the particular claim you're seeing, right? So, so often a fact check warning would just say something like, you know, "Get the facts here," and then there's a link to some generic page about voting, uh, like, uh, information, uh, which is, you know, so, so not helpful to have the- the information behind a click, so... So pulling the context up, you know, so that you have zero clicks that you need to make and, and keeping it specific is so important.

    10. LR

I, uh... One feature I love that I imagine you guys thought deeply about is, if I liked the post in the past, I get notified later if a community note shows up, so that I'm not, like, remembering this false information.

    11. KC

      Yeah, I mean, we, we try to make notes as fast as they, uh, as we can, so we want them to appear instantly if possible. But inevitably, there's gonna be a time gap between when a post goes live and when people figure out what's going on and when they get the note out there, and so we send those notifications to try to close that gap. Um, and yeah, we get, we, we get a lot of love for that. We see s-... People take screenshots and share them. They're excited about it. Um, and it's also a pretty cool example of something you can do on the internet in the social media world that was difficult in kind of like a print or standard news world, um, where you would see maybe a correction, like, the next day in a corner of a paper, but it was hard to read. Here, you're getting a ping about it if you've, if you've engaged with the post and the note shows up.

    12. LR

Uh, one user feedback, uh, point is, I'd love the push to just tell me, "Here's what you got wrong," 'cause I, I find that I actually have to go into it and, like, read it, and I feel like the push could just be like, "Here's information. Here's more context of this thing you like."

    13. KC

      Agreed.

    14. LR

      (laughs)

    15. KC

      We'll go take a look at that.

    16. LR

      There we go. There we go.

    17. KC

      Yeah, thanks.

    18. LR

      Live user feedback.

  4. 17:24–21:32

    Understanding the note publishing threshold

    2. KC

      Nice.

    3. LR

Okay, I want to get into the origin story of this whole thing, but two more questions 'cause we're on this thread. One is, what- what's the, kind of the threshold for a note to show up on a post? Is that information you can share? Just how does that work?

    4. KC

      So, just because of the details of the way the algorithm works, it uses this machine learning algorithm, you know, called matrix factorization where, you know, we fit it with gradient descent and whatnot. The threshold is, you know, it's, it's, it's 0.4, uh, on this, you know, made up scale. I mean-

    5. LR

      0.4. Great. (laughs) There we go.

    6. KC

      (laughs) Uh, I mean, in practice what it means is, you know, basically, uh, uh, uh, majority of people, uh, if there is a polarized divide relevant to the notes, you know, obviously some notes are not about politics or something polarizing, but if there is then a majority, a, a sizable majority of people on both sides would, would generally, uh, find the note helpful.

    7. JB

      Uh, and then there are these, uh, there are other rules that come into play beyond that main one. So, you know, even if it's above that threshold, um, it might get filtered out if, um... There, there's a separate algorithm that's looking at agreement between people's incorrect tags. So like maybe, maybe people found the note helpful but incorrect, right? Like, it happens. Uh, and in those cases it doesn't matter if it's above the helpfulness threshold.
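Putting these two turns together, the publish decision can be sketched as a score threshold plus a veto. The 0.4 threshold is from the conversation; the status names and the incorrect-tag cutoff below are illustrative assumptions, not the production values.

```python
# Hedged sketch of the publish decision: a note ships only if its bridging
# score clears the (unitless) 0.4 threshold AND it is not vetoed by a
# separate check on "incorrect" tags. Cutoffs and names are illustrative.
HELPFULNESS_THRESHOLD = 0.4

def note_status(bridging_score: float, incorrect_tag_score: float,
                incorrect_tag_limit: float = 0.25) -> str:
    if bridging_score < HELPFULNESS_THRESHOLD:
        return "NEEDS_MORE_RATINGS"
    if incorrect_tag_score >= incorrect_tag_limit:
        # Raters across the divide agree the note is inaccurate, so it is
        # held back even though it cleared the main threshold.
        return "FILTERED_INCORRECT"
    return "CURRENTLY_RATED_HELPFUL"

print(note_status(0.55, 0.05))  # clears both checks
print(note_status(0.55, 0.40))  # "helpful but incorrect" -> filtered
print(note_status(0.25, 0.00))  # below 0.4 -> not shown
```

The asymmetry is deliberate: a high incorrect-tag score can block a note, but a low one cannot rescue a note that missed the main threshold.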

    8. LR

      So is this 0.4, uh, this is probably the wrong way to think about it, but is it 40% of people that normally disagree-

    9. JB

      No.

    10. LR

      ... agree? Okay, it's not that. (laughs)

    11. JB

      It means nothing like that.

    12. LR

      (laughs) Okay.

    13. JB

      It's, it's just like on some arbitrary scale. Um, yeah.

    14. LR

      Okay, okay. Okay.

    15. JB

      Yeah.

    16. KC

      Yeah. If we changed random other things about the algorithm, that number would also have to change-

    17. JB

      (laughs)

    18. KC

      ... to an equally seemingly arbitrary number.

    19. JB

      Yeah.

    20. KC

      That we, we arrived at, at some, like nu- some numbers like that by gauging user feedback. So we, we could share a lot of notes with people, get feedback on which ones were helpful, and there was sort of just a line emerged about, uh, indicating where, you know, where the, where things go from, like, questionable to pretty clearly helpful.

    21. JB

      Yeah. And, and it is set right now, by the way, to be really conservative, I think. Uh, we, we just are pretty particular about quality, and we really want note quality to be really high. I think, uh, I think Keith and I both believe that we will live or die based on the quality, (laughs) uh, of the notes at the end of the day. So, so we'd rather not show a note that may be good but we didn't have enough signal on, um, than the other way around.

    22. LR

That makes so much sense. Like, I've never seen a community note that is wrong, and breaking that promise is a big deal. So I completely get why you guys are super conservative there. Uh, okay. Two more questions along these lines, 'cause I'm just curious. These weren't on my list of questions (laughs) to ask, but I feel like people wonder this. How many notes are written versus end up showing up and triggering on a, on a...

    23. KC

We probably show about 8% of notes that get proposed. Um, I think that's it. It's been between, let's say, 7% and 10% or 11%, something like that, over time. The number can vary a little bit. Um, and as Jay said, there are un- undoubtedly, and you can see it, there's, there's clearly more good notes than we show. But the goal is to hold a really high bar. Like, we wanna show a note when it's gonna be helpful, when it's not gonna appear, you know, biased and undermine trust in the system. Like, we want these to be neutral, informative, helpful. And, um, you know, as Jay was saying, like, we view the worst possible mistake as showing a bad note, 'cause that's gonna undermine trust. And the trust is, is, is why people like the product. So, so yeah. We, the bar is there. And, you know, like I said, there's, there's clearly some, um, some in that remaining, let's call it 90%, that are good, and then there's a lot that are just, like, not that great, and there's some that are bad. And if you write one of these ones that, uh, are bad, with bad being defined as people who normally disagree find the note not helpful. So it's like the inverse of the ones we show. If you write one that people, you know, people who normally disagree find not helpful, you actually will ultimately lose your ability to write and have to earn it back. So that, that range, that other 90% is a mix. Sometimes people look at the number, they're like, "Oh, why don't you show more?" It's like, well, you probably actually don't really want us showing most of those. It's, the, the, the goal here is that the system is able to filter for the good ones.

    24. LR

      That makes sense.

  5. 21:32–26:26

    Challenges and philosophies

    1. LR

      Okay. One other question is, there's many people that are very polarized, like, very disagreeable with a lot of things. How do they filter into this algorithm? How do you deal with people that are, like, super anti-vax, super Jan 6th, like all these very extreme potential views?

    2. JB

      If people really are so polarized, uh, that there, there isn't agreement, uh, among people that typically disagree, you know, it's possible that this is one of those notes that might be correct but, but just wouldn't be useful contex- or wouldn't be, you know, helpful to show, uh, as, as context. Maybe, maybe it's about a claim that people have, you know, really entrenched opinions about and they've read hundreds of things about it already, uh, or right, like, probably, probably this is just, uh, not gonna improve people's understanding. It's just not gonna be a helpful user experience. So it might not be the worst thing, uh, in those cases to not show the note. People a few years ago were pretty pessimistic that maybe fact-checking never changes people's, uh, you know, understandings about what's true. Actually, there have been external, um, studies, you know, run by people totally independent of us, uh, who have found that if you take a community note, uh, or a post with or without a community note, that actually people's agreement with the core claims in the post does change if they see it with the note versus without. So we are having an impact, uh, on this thing that people previously thought was maybe not so easy to do. Um, and, um, so it, it's nice to focus on the cases where there is the bridging agreement. I would also say there is this reputation component to the algorithm as well. So if you consistently rate notes in a way that is counter to the, the bridging base consensus, then we'll stop counting your ratings, right? So, uh, you know, if, if you're the kind of person who constantly rates bad notes as helpful, um, uh, we, we do filter you out. So, so there's a difference between those types of people versus just the, the good but polarized ones.
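The reputation mechanism Jay describes, where ratings that consistently run counter to the bridged consensus stop counting, can be sketched roughly like this; the agreement metric and the 0.6 cutoff are invented for illustration.

```python
# Sketch of the rating-reputation filter: a rater whose votes consistently
# run counter to the final bridged outcome stops being counted.
def rater_weight(votes, outcomes, cutoff=0.6):
    """votes/outcomes: parallel lists of 'helpful'/'not helpful' labels;
    returns 1.0 if this rater's ratings still count, else 0.0."""
    if not votes:
        return 1.0  # no track record yet: ratings still count
    agreement = sum(v == o for v, o in zip(votes, outcomes)) / len(votes)
    return 1.0 if agreement >= cutoff else 0.0

aligned = ["helpful", "not helpful", "helpful"]
print(rater_weight(aligned, aligned))                      # 1.0
print(rater_weight(["helpful"] * 4, ["not helpful"] * 4))  # 0.0
```

This is the distinction Jay draws: a polarized-but-honest rater keeps their weight, because their ratings still track outcomes on non-contested notes, while someone who constantly rates bad notes as helpful gets filtered out.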

    3. KC

      Yeah. I think, you know, one philosophical thing that's important is that we want ev- all of humanity to participate. And sometimes people are surprised by that. They'll be like, "Oh, aren't there people who are, like, you know, shouldn't be doing this?" Or, like, "There's some, you know, I don't... Their, their thinking is so extreme or something, maybe they shouldn't participate." But our view is, is actually we wanna have all of humanity here, because if we have all of humanity, we can, we then have the data to understand what notes will be helpful to actual humanity. You know, we can, we can better model that, better, better understand it, and better show those notes. So it's advantageous to have people who have all sorts of points of views. And, uh, we don't expect that every note will be loved by every single person. Um, you know, that's kind of an impossible bar. But-We do intend to show the notes that like 80% of people are gonna, you know, read and say, "Wow, I'm glad I knew that." And so, you know, in that sense, it doesn't matter how, you know, maybe extreme someone views a person's views as, it's still great to have them in the program. Um, so, you know, no matter what your views are, please sign up and participate in. It helps identify what's really helpful.

    4. LR

Cool. And we'll link to this for people if they want to actually sign up. Uh, so they know how to do this. Something we didn't actually, uh, spec- specify, these are all volunteers. No one's getting paid to be doing these notes and voting, right?

    5. KC

      Yeah. It's totally based on intrinsic motivation and, and we think that's a great reason to be doing it. Um, i- when you talk to the most active contributors, a lot of them, they just, they want to have better information out in the world and that's a great motivation. So yeah, that's why they

    6. JB

      Yeah.

    7. KC

      And you know, if you, if you think about like for these people, the impact they can have, it's kind of nuts. So, uh, when we first launched US wide, this was like in t- 2022, a note appeared on a White House tweet, and the White House deleted the tweet and reissued an updated statement. And like, like imagine being the person who wrote that. You probably have like 12 followers. Your, your posts probably get, you know, a couple likes, and here, you just put a, put a note on the White House and they changed their public talking points based on what you did. Like, that is an incredible amount of impact. So, it, you know, it, you, you could see why people are motivated to do it when they care about what's going on in the world. It r- um, you know, you don't have to be a big well-known person to shape the discourse and information flow in a, in a way that's helpful.

    8. LR

It's insane. Like, there's so much to love about this. One is just the meritocracy of this whole operation, of just anybody that is true and correct can participate and have impact. Also it just shows you how much information we get that is just wrong. Like, we had no idea how often we see things that are wrong, and now we do.

    9. KC

      Working on this product has made me realize just how many things I used to trust kind of by default that now I look at more skeptic, uh, skeptically.

  6. 26:26–29:41

    The effect of notes on re-sharing content

    2. LR

      Definitely a meme these days. Okay, uh, before we get to the origin story, is there anything else along these lines you guys think might be really important to share? Really, really interesting.

    3. JB

      Sure. I guess one other thing just, um, is, is that although we don't actually use the fact that a post was noted, uh, in the core ranking algorithm, um, which, you know, we, we think is a, a nice property, uh, there is a really big impact just organically, meaning not from the algorithm but just from user behavior, where people will like and re-share or you know, quote, uh, posts way less, uh, when, when notes are applied. So just, I don't know, for, for people out there who typically run A/B tests on big, uh, you know, platforms you may already be familiar with this, but like 1% is typically an awesome effect size s- for any sort of algorithm change. We saw more like 30 to 40% engagement rate drops, uh, for likes and re-posts in an A/B test we ran, uh, when, uh, comparing, uh, showing a post with or without a note, which is just crazy big. Um, and then, and then if you actually look, that's, that's just an A/B test on the engagement rate. So that's not the network effect. If you capture the, the overall network effect of how a post, uh, you know, is spread less by that person's re-post, uh, because if you look top line with the difference in differences approach, different, multiple different external research groups have both found consistently that there's like a 50 or 60% drop in total re-posts, which is just nuts, um, after a note is applied. So it's having a really big impact on spread actually too.
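For readers unfamiliar with the difference-in-differences approach Jay references: it compares the drop in re-posts for noted posts against the drop, over the same window, for similar posts that never got a note, so ordinary decay is subtracted out. A toy version with made-up numbers:

```python
# Hedged sketch of a difference-in-differences estimate. All numbers are
# invented; only the method mirrors what the external studies did.
def did_drop(treated_before, treated_after, control_before, control_after):
    """Relative drop in repost rate attributable to the note, as a fraction."""
    # Counterfactual: treated posts would have decayed like the controls.
    counterfactual = treated_before * (control_after / control_before)
    return (counterfactual - treated_after) / counterfactual

# Reposts per hour: noted posts fall 100 -> 40, while similar un-noted
# posts drift 100 -> 90 over the same window from ordinary decay.
drop = did_drop(100, 40, 100, 90)
print(f"{drop:.0%}")  # 56%, in the 50-60% range the studies report
```

The control group is what separates the note's effect from the fact that every post's engagement naturally tails off over time.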

    4. LR

      That's so great to hear.

    5. JB

      Yeah.

    6. LR

      It's what I want to see.

    7. JB

      Yeah.

    8. LR

      And it's an incredible impact. Basically, an AI image of something false would just go crazy on Twitter, and did before Community Notes came out. And now what you're saying is it's just adding that context. The algorithm doesn't demote it if there's something incorrect; it's just people are like, "Okay, this is false. Why would I want to re-tweet this?" That makes sense.

    9. JB

      (laughs) Right, right.

    10. KC

      Correct. Yeah, the notes just totally take the wind out of these stories.

    11. LR

      Yeah.

    12. KC

      So the thing will be going viral, a note appears, re-sharing drops 50 to 60%, and that's it. At 50 to 60% per generation, the virality quickly goes to zero.
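Keith's point that virality "quickly goes to zero" is geometric decay: if each repost generation is cut 50-60%, a post whose effective branching factor was above one drops below one, and the cascade dies out. A toy sketch, with an invented branching factor rather than any measured X statistic:

```python
# Sketch of per-generation viral decay. The branching factor r0 is an
# invented example value, not a measured X statistic.

def cascade_sizes(r0, cut, generations):
    """Expected reposts per generation when a note removes a `cut`
    fraction of re-sharing at every generation."""
    r = r0 * (1 - cut)      # effective branching factor after the note
    size = 1.0              # generation 0: the original post
    sizes = [size]
    for _ in range(generations):
        size *= r
        sizes.append(size)
    return sizes

# A mildly viral post (1.5 reposts per repost) with a 55% per-generation cut:
sizes = cascade_sizes(r0=1.5, cut=0.55, generations=5)
print([round(s, 3) for s in sizes])  # shrinks toward zero each generation
```

With the cut applied, the effective branching factor here is 1.5 × 0.45 = 0.675, below the R = 1 threshold, so each generation is smaller than the last.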

    13. JB

      And by the way, I have very mixed feelings about this next one. Authors become 80% more likely to delete their post after they get noted, which, okay, that's great, 'cause there's less misinfo out there, but it pains me because those are usually the best notes. If the note was so good that you had no other option but to delete your post, those notes don't get seen by other people, right? 'Cause-

    14. LR

      That is, that's hard. That's hard.

    15. JB

      ... they've not seen the post. There's an argument, by the way, that because you might see the same misleading claim elsewhere, off X or somewhere else on X, it might actually be better to have seen the post with the note than not to have seen it at all.

    16. LR

      Yeah.

    17. JB

      Um, unsure about that claim, but yeah.

    18. LR

      That is so interesting.

    19. JB

      Yeah.

    20. LR

      Yeah. I'd be so sad if I was that Community Note writer and just-

    21. JB

      Yeah.

    22. LR

      Oh man, it's so good they just can't even keep the post up.

  7. 29:4135:46

    Origin story

    1. LR

      Okay. So coming back from today's world, where this small amount of code is changing the way people understand the world and what they believe, and making the White House rescind their announcements, zooming back to the beginning of how this whole project started. What I heard, just briefly, is, Keith, you were kind of tired of managing PMs. You wanted to work on something yourself, something impactful, away from corporate BS, and you basically just started looking for something impactful and important, and you found this. Talk about how it all came to be at the beginning of the story.

    2. KC

      Yeah. So for me, the beginnings actually go back to why I joined what was then Twitter, in 2016. I was at a startup and we had some acquisition offers, and one of them was from Twitter. It was 2016, the middle of the election between Donald Trump and Hillary Clinton. There were something like three televised debates, but every day there was a debate happening on Twitter. And it was very clear: this is where people are talking about these things that matter, where information is being shared, where ideas are being formed. As a user, it was obvious that I could get good information there, but it was also obvious that there was questionable information floating around. And I remember looking as an outsider, thinking, "Wow, this is a really hard problem, but it also seems really important." So we ended up going to Twitter, and the company was in a turnaround at that point. My first three years was just helping to get the company growing again: working on everything that was the consumer product, getting user growth going again and people wanting to work there again, et cetera. But a few years in, I was reflecting on what we had done. I think we had done a lot of good work getting momentum going. But people in the US and in the industry had tried things to deal with misleading information, and nothing was really working. It was obvious nothing was working. Nothing could handle the scale of the problem, nothing could handle the speed, and a lot of people just didn't trust the existing approaches. The existing approaches were either fact checkers or internal trust and safety teams making decisions about what was or was not misleading.
      And a lot of people just didn't want or trust that to be the way this was decided, which is very reasonable. So I'm looking at that, and I was still managing a large PM team. That's a whole story in itself. That job required a lot of energy, and I didn't always see the output I wanted from it; I didn't see the change in the product I wanted to see. And I was contemplating, "Should I go start a company? Should I do something else?" And I kept coming back to this problem: "Man, how is the world gonna deal with this information quality issue of what we get on social media, or wherever we get it?" And I'm like, "You know, I'm at this company where you can make a difference on this problem. Why not go try some crazy ideas and see if one of them might work?" And so I came back, I had a kid, I came back from paternity leave, I went to my boss Kayvon, and I was like, "Hey, Kayvon. How about I just stop doing my job and I go work on this instead?" This being: try some crazy ideas to see if we can deal with misleading info. He was stoked, and so I went off and started working on that. It started with reading any research I could on the problem and existing solutions, what was or was not working and what the issues were, then prototyping, and then it ultimately led to us building and piloting this idea that became Community Notes.

    3. LR

      Amazing. Okay, I have so many questions. I know we're gonna keep going through this story, but when you joined Twitter, and it was called Twitter at this point, I'm gonna try to call it X now, which I know is important to your boss, what era of Twitter was it? Was it Kayvon-run when you joined? And who was the CEO? 'Cause there's been many.

    4. KC

      So, okay, yeah. I came in December 2016. Jack had relatively recently come back as CEO to turn the company around. And just to give you a sense of the state of the company, something like a third of employees were leaving every year. Just imagine that: a third of your team gone every year. The stock was in the toilet. The product was not really growing. So Jack was working on a turnaround, and Kayvon was there already, running Periscope and a bunch of video stuff. And Jack was there up through the start of the Community Notes, then Birdwatch, project. And, uh, yeah.

    5. LR

      Okay. And it was called Birdwatch. I don't think we've used that term yet, but that's an important point: it was called Birdwatch initially.

    6. KC

      Yeah. So it was originally called Birdwatch when we started the project. But obviously, somewhat famously, the name changed along the way.

    7. LR

      Yeah. Maybe let's tell that story real quick, 'cause I know we're jumping forward, but I have this Twitter thread that I saw between Jack and Elon where they're (laughs) debating what to call it, and Elon's like, "Birdwatch sounds creepy. I want to change it." Is there anything there you can share?

    8. KC

      Yeah. The story there is kind of funny. Elon came in, acquired the company, and we had just launched the product relatively recently in the US. It'd been in pilot for a year, but we had just made it available US-wide. And I guess he'd been seeing the notes. Soon after the acquisition, he DMed me and was like, "Hey, this Community Notes thing is awesome." And

  8. 35:4640:23

    Embracing small teams for big impact

    1. KC

      I was like, "Oh, I'm glad you like it. Let's talk." And so we talked the next day, and he kept referring to it as this Community Notes thing. And I was like, "You know, it's interesting that you keep calling it that, because that's actually the very first thing that I called it." The very first Figma mockup I made depicting this thing was called Community Notes. I don't know why, it just felt really natural. And that's the first prototype we had tested. Later, the project changed its name to Birdwatch, but Elon was like, "Hey, let's just call it that." And so the next day, we just changed the name... and, you know, it's always notable for the team when you change your name, but really, the team was excited about it. I think it is a much more understandable name. Jack has made fun of it, calling it the ultimate Facebook name or something like that. But, uh...

    2. LR

      The most boring Facebook name ever, which-

    3. KC

      (laughs) The most boring Facebook name, which is funny 'cause they're now, you know, launching Community Notes.

    4. LR

      (laughs)

    5. KC

      But I think it is a very understandable, intuitive name, and I think it has served the product really well. There's a reason it was the name in the very first mock-up.

    6. LR

      Yeah. I think descriptive names just make sense.

    7. KC

      Yeah.

    8. LR

      This connection with Elon, and I want to talk later about how you've dealt with so many strong personalities and kept this alive through so many changes. But before we get to that, you did something that I think a lot of product leaders, eng leaders, just people (laughs) that manage people dream of: give up all this "power" in air quotes, and career trajectory and influence, and just forget all that, go back to building something awesome with a small team. Is there any advice from that experience that you think might be helpful for other leaders to hear, to help them maybe make that same jump? Because that's really difficult in practice. Easy to talk about, hard to do.

    9. KC

      Yeah, I think it is a difficult jump. I've done it a bunch of times in my career, and I've always been very happy with it: I started with a small team that grew into something bigger, and then I was like, "We're dealing with a lot of big production stuff, the team's really big, I wanna go back to doing something crazy and new with a small team again." And so I've done that sawtooth leap a bunch of times. But it can be hard, because the classic career path sort of rewards running a large organization or being a manager, things like that. But I think at the end of the day, you gotta work on stuff you love. You've got to be having fun, and I think people want to be having impact. And I think there's one myth that can get in people's way: the idea that the more people you manage, or the larger your scope is, the more impact you have. I definitely do not think that is true. Look at Community Notes, for example. If I had stayed running a large consumer PM team, what would I have produced? 16 more pages of OKRs? A bunch of documents? I think building Community Notes has had way bigger impact on the world. It's become the industry standard for how to deal with this now, which is super cool. People love it. It's the first thing that is plausibly dealing with the internet-scale issue of information quality. I think it's unquestionably a bigger impact than I would have had doing some standard management-track thing like I was doing before. And I think that's true of so many other small companies and startups.
      I was just reading, (laughs) someone screenshotted, I think, Blake Scholl's LinkedIn the other day. He went from, like, director of coupons or something to building the first supersonic jet.

    10. LR

      Oh yeah, from Groupon. Yeah, yeah. (laughs)

    11. KC

      Yeah. And those stories are everywhere when you look. So I've definitely found that, for me, I love building hands-on. I love trying crazy new ideas. I love the zero-to-one experience. It's fun to scale things up too, and it can be fun to operate at scale, but this team is a good example of one that operates at a very large scale while still being very small.

    12. LR

      Yeah. I think the way you guys operate is what more and more companies are trying to do: remove middle management layers, create small teams that just execute and build impact, just ICs. Whenever I say IC, I have a comment on YouTube where they're like, "What is IC?" So I'm just gonna explain: individual contributor, a non-manager, is what I mean when I say IC.

  9. 40:2347:47

    The thermal project approach

    1. LR

      So let me follow this thread. When I asked people about how you set up the team to operate effectively and protect it initially, this term "thermal" came up a lot. Like it was a thermal team, if that's how you'd describe it.

    2. KC

      Yeah.

    3. LR

      What is thermal?

    4. KC

      Yeah. So anyone who's worked in a larger company probably knows that things can get kind of bureaucratic or bogged down. Decision-making can be slow. There's these large planning cycles. People can try to take someone from one team and move 'em to another at random, arbitrary times that can disrupt a project. All sorts of things like that. When we started this project a number of years ago, we had a lot of founders in the company; Kayvon is an example of a founder who was helping to run the company. And he had this idea: "Hey, why don't we create this program, call it Thermal, where we could have teams that were somewhat isolated from that?" They could run through their own process. They would have one clear owner. The team would be entirely dedicated to that project. And we would just repeatedly make funding decisions as to whether to continue the effort. And so...

    5. LR

      Why was it called Thermal, by the way? What was the idea there?

    6. KC

      I think it was an old bird analogy, of thermals lifting the birds on their wings. Twitter 1.0 obviously had a lot of bird analogies, bless its heart. So that was one of them. But I loved the idea, as someone who liked the startup environment. And so when we were starting this project, I was like, "Hey, Kayvon, why don't we make this the first Thermal project?" And he was like, "Yeah, let's do it." So we started with that way of operating, and it gave us, from day one, a lot of freedom and autonomy that I think was really important to make the product work.

    7. LR

      So just to be very specific: what makes it a Thermal project? How do you set that up? And I'm asking from the perspective of a company that wants to build something like this of their own. What does that look like?

    8. KC

      Yeah. I think there's a bunch of key attributes. One key attribute is that there's one clear driver of the project, who's effectively like the founder... I guess maybe you could have two or something, but it's really clear there's a driver of the project. And also there's one clear decision-maker that they go to. That was-

    9. LR

      Oh, outside of the team? Got it.

    10. KC

      ... outside of the team. And that was true back when we started, and it is true now. If we need something or have a question about something, I talk to Elon. It was like that from the beginning, it's like that now, and I think that's a big reason we're able to make decisions effectively, quickly, in a simple way.

    11. LR

      And it probably has to be someone very senior, not-

    12. KC

      Yes, it needs to be someone-

    13. LR

      ... just some manager.

    14. KC

      Someone senior who can make the decisions you need made.

    15. LR

      Yeah.

    16. KC

      Whatever they are. So I think that's really important, that clear decision-making structure. Another was 100% focus. Everyone on the project is expected to be totally focused on it. At a lot of companies, it can be easy to have people's attention spread across a bunch of things, and it makes it hard to get stuff done. You'll talk to whoever that person is, you'll ask them for help on something, and they'll be like, "Yeah, I'll help you. I gotta finish this thing, it'll take me a week or two, and then I'll get to it." And a week-or-two delay totally changes the momentum of a project. Because we were 100% focused, we talk in the morning, it's like, "Hey, Jay, why don't we try this thing in the algorithm?" He's like, "Yeah." And that afternoon or the next day, we're looking at results. Because of that total focus, the rate of iteration goes way up. And then, beyond that, there was also the ability to use whatever our own decision-making process was. We didn't need to write OKRs or follow other standard practices. Obviously, we had to make sure we were responsibly building the product and everything, but we didn't need to use the standard practices. And I think that's another great example. OKRs, I understand why they can be helpful, but they're also not necessarily the right cadence at which to set goals. I think it's really unclear that quarterly or annual goals are actually the right pace. We would set the goal for the next milestone that mattered, and we would work on that. And when we reached that milestone, we would have an idea of what was coming after.
      And when we hit that, we'd set the next milestone. Whether that was two weeks, a month, three months, whatever it was, we set our own pace, and goals at that pace, and that, I think, is a lot more natural for the development of something.

    17. JB

      The whole OKR determination and planning process took longer than it would take us to pick a goal, execute on it, and finish it. (laughs)

    18. KC

      Mm-hmm.

    19. LR

      How big was the team early on that you set up? How many engineers?

    20. KC

      It started with just me, and then when we decided to build the thing, we figured we needed about five. We wanted to be as small as we possibly could. It was clear we needed someone on ML doing scoring, someone to do client engineering work, someone to do backend engineering work. Oh, and we needed a designer and a researcher to help us understand the customer base and make sure we were building the thing in a way that was actually going to resonate with people. So I think it was backend, front end, ML, design, research. That was the original team, from what I remember.

    21. LR

      Amazing. So basically one of each function. A question I have for Jay, actually: there's all this talk of small teams and moving fast, but sometimes you just need more engineers to build a thing. Is there anything you've learned about how to keep a team small while moving as fast as you are, without constantly needing to hire more engineers?

    22. JB

      I think in the beginning, when we were iterating on what the requirements should even be, it was definitely good to just have one ML engineer. But at some point, we got clear on what the goals of the algorithm should really be. At the very beginning, it wasn't clear that we needed to build this bridging-based algorithm, right? The actual first algorithm that I put into production was very focused on anti-manipulation. It was this kind of PageRank variant. But it didn't solve the problem of bias, basically. If there are more users on one side, a PageRank-type graph algorithm can actually amplify those biases. So after building that prototype and getting data from it, it was clear that the bridging-based algorithm was gonna be the way we needed to solve it. And at that point, I set up a bake-off, basically like a Kaggle competition or something. So that was the key time when it was really important to pull in other engineers.
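The bias problem Jay describes, where simple vote counting favors whichever side has more raters, can be seen in a toy contrast. The faction labels and the min-over-factions rule below are invented for illustration; the production bridging algorithm is a matrix factorization, not this simple rule:

```python
# Toy contrast: naive vote counting vs. a crude "bridging" criterion.
# Faction labels and ratings are invented illustration data.

def majority_score(ratings):
    """Fraction of 'helpful' ratings overall. With more raters on one
    side, that side's favorite notes win regardless of accuracy."""
    return sum(ratings.values()) / len(ratings)

def bridging_score(ratings, faction):
    """Crude bridging rule: a note scores only as high as its support in
    its *least* supportive faction, so one-sided notes score low."""
    per_faction = {}
    for user, helpful in ratings.items():
        per_faction.setdefault(faction[user], []).append(helpful)
    return min(sum(v) / len(v) for v in per_faction.values())

faction = {"a": "left", "b": "left", "c": "left", "d": "right"}
one_sided     = {"a": 1, "b": 1, "c": 1, "d": 0}  # loved by one side only
cross_cutting = {"a": 1, "b": 1, "c": 0, "d": 1}  # liked across the divide

print(majority_score(one_sided), bridging_score(one_sided, faction))
print(majority_score(cross_cutting), round(bridging_score(cross_cutting, faction), 3))
```

Both notes get the same 0.75 majority approval, but only the cross-cutting note survives the bridging criterion, which is the intuition behind rewarding agreement across the divide.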

    23. LR

      That is such a cool story. I want to follow that thread. But before we do that, you just mentioned you guys yell "thermal."

    24. JB

      Yeah. (laughs)

    25. LR

      What does that mean? Is that like YOLO, like a version of-

    26. JB

      Yeah.

    27. LR

      ... we're just, okay, we're just gonna ship, 'cause of our thermal project.

    28. JB

      Ship it.

  10. 47:4750:34

    Algorithm development and internal competitions

    2. LR

      Okay. (laughs) Marketers, I know that you love TLDRs, so let me get right to the point. Wix Studio gives you everything you need to cater to any client, at any scale, all in one place. Here's how your workflow could look: scale content with dynamic pages and reusable assets effortlessly; fast-track projects with built-in marketing integrations like Meta CAPI, Zapier, Google Ads, and more; A/B test landing pages in days, not weeks, with intuitive design tools; connect tracking and analytics tools like Google Analytics and SEMrush, and capture key business events without the hassle of manual setup. Manage all your client social media and communications from a unified dashboard, then create, schedule, and post content across all their channels. If you're working on content-rich sites, Wix Studio's no-code CMS lets you build and manage without touching the design. And when you're ready for more, Wix Studio grows with you: add your own code, create custom integrations with Wix-made APIs, or leverage robust native business solutions. Drive real client growth with Wix Studio. Go to wixstudio.com. Okay. So coming back to this algorithm, this is actually really interesting, 'cause I've never heard any of this. I was gonna ask what inspired this actual algorithm, and you basically did an internal competition among ML engineers to see who had the most successful algorithm, Netflix Prize style, Kaggle style.

    3. JB

      Yeah. Yeah. I, I think-

    4. LR

      Wow.

    5. JB

      So this particular idea of finding content that is liked by people on opposite sides of a polarized divide, who typically disagree, this was not an idea out of thin air, right? Keith had found some of Chris Bail's work; he had made this list of accounts that were often liked by people on both sides politically. And there are other projects out there, like Polis, that look for agreement among people who typically disagree. But it wasn't obvious from the very beginning that our project definitely needed to use that. And when you implement it and compare it against these other types... PageRank seems obvious; it's designed to be manipulation-resistant. If you just have a voting ring of people who all vote themselves up, PageRank can filter that out very well. But that just wasn't the main attack vector, I guess. We had to get some real data from the pilot to realize that, okay, the real thing going on here is people are polarized. (laughs) So it was only once we got the real data from the pilot that it was clear the bridging-based algorithm was the direction we really needed to go. (laughs)
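The bridging-based scorer the team converged on is described in the published Birdwatch/Community Notes paper as a matrix factorization: each rating is modeled as a global mean plus user and note intercepts plus a dot product of learned user and note factors, and a note's intercept, its helpfulness after the factor term soaks up one-sided polarized agreement, drives the helpfulness decision. Below is a minimal single-factor sketch of that form; the data, hyperparameters, and training loop are invented toy values, not the production implementation:

```python
# Minimal single-factor sketch of bridging-based matrix factorization:
# rating ~ mu + user_intercept + note_intercept + user_factor * note_factor.
# All data and hyperparameters are toy values for illustration only.
import random

random.seed(0)
users = ["u1", "u2", "u3", "u4"]          # u1,u2 lean one way; u3,u4 the other
notes = ["bridging", "one_sided"]
# 1 = rated helpful, 0 = rated not helpful
ratings = {
    ("u1", "bridging"): 1, ("u2", "bridging"): 1,
    ("u3", "bridging"): 1, ("u4", "bridging"): 1,
    ("u1", "one_sided"): 1, ("u2", "one_sided"): 1,
    ("u3", "one_sided"): 0, ("u4", "one_sided"): 0,
}

mu = 0.0
iu = {u: 0.0 for u in users}                       # user intercepts
inote = {n: 0.0 for n in notes}                    # note intercepts
fu = {u: random.uniform(-0.1, 0.1) for u in users} # user factors
fn = {n: random.uniform(-0.1, 0.1) for n in notes} # note factors

lr, reg = 0.05, 0.01
for _ in range(2000):  # plain SGD with L2 regularization
    for (u, n), y in ratings.items():
        pred = mu + iu[u] + inote[n] + fu[u] * fn[n]
        err = pred - y
        mu -= lr * err
        iu[u] -= lr * (err + reg * iu[u])
        inote[n] -= lr * (err + reg * inote[n])
        fu[u], fn[n] = (fu[u] - lr * (err * fn[n] + reg * fu[u]),
                        fn[n] - lr * (err * fu[u] + reg * fn[n]))

# The cross-faction note earns a higher intercept; the one-sided note's
# support is explained away by the polarization factor term instead.
print(round(inote["bridging"], 2), round(inote["one_sided"], 2))
```

The key design choice is that the factor term gives the model a cheaper way to explain one-sided agreement than raising the note's intercept, so only notes liked across the learned divide end up with high intercepts.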

  11. 50:3458:56

    An inside look at how the team operates

    2. LR

      I wanna come back to the way you operate the team. I hear that you run the whole team off a single Google Doc, a four-year-old doc that you just keep adding goals and bullet points to. Is that true?

    3. KC

      There is a very long-running doc that has had to be chopped and purged 'cause it was breaking Google Docs and Chrome at various points in time. It's sort of a note-taking doc; it's really where we coordinate what we're doing. The team meets on a daily basis. We spend whatever amount of time we need to get on the same page about what we're building. We might talk about anything from what's most important right now, to what we work on next, to what we're trying to launch right now and why it's not launched, what's in the way of launching it. And we might review a new modeling or scoring algorithm update and try to understand what's working in it and what's not. So we'll just cover whatever feels most important. And, as you said, we set our goals very dynamically. Whatever seems like the most important thing for us to work on now and next is what we spend our time on. And that's served the project really well, versus feeling attached to some kind of quarterly goals or something. We'll look at what is gonna help people the most, or what's the biggest problem right now, and we will go tackle it. We might change our roadmap multiple times in two weeks based on what we see.

    4. LR

      So I'm hearing no Jira, no Asana, no monday.com?

    5. KC

      No.

    6. LR

      Okay.

    7. KC

      Yeah. I mean, (laughs) we have to use Jira to coordinate with some other teams; sometimes when we file a request, we have to make a Jira ticket. But no, I am not a fan of heavyweight task management. I love being on the same page, being able to keep most things in my head, and having a really light way to write down the things that I can't, or the team can't, keep in its head.

    8. JB

      We did use Asana briefly. But my memory of it is that you spent more time in the meeting grooming a backlog of irrelevant stuff than actually talking about the proper priorities. So it's nice in the Google Doc that if something becomes irrelevant, it can kind of just fall off without needing explicit backlog grooming.

    9. LR

      Hmm. So just to summarize a little of how you guys operate, in a way that might inspire other companies to set up teams like this. I'm gonna go through a few things you shared. One is one person in charge of the team; they're basically the founder of the team.

    10. JB

      Mm-hmm.

    11. LR

      They have one very senior sponsor/decision-maker that they interface with; in your case, Elon, no big deal. In other cases, it could be the CTO, CPO, someone like that. The team is focused 100% on this one product and goal. And you keep the team very small: you start with one person of each function, one front-end engineer, one back-end engineer, an ML person, a designer, a researcher.

    12. JB

      Yeah.

    13. LR

      And then Google Docs, (laughs) basically, for your project management. Is that roughly it? Yeah, it's basically: run it with Google Docs, don't use big complicated products.

    14. KC

      I think that's a pretty good recipe. On the Google Docs, you know, take it or leave it; people can do what they want.

    15. LR

      Okay. Okay.

    16. KC

      If they want to use thumbnails, go for it. I think those first ingredients are really key structurally. And then, beyond that, it's a matter of having an ambitious goal that gets people fired up to go do great work.

    17. LR

      Yeah, awesome. I think there's a lot there that people kind of think they should do when they set these teams up but don't actually do, and it feels like each of these is a really key ingredient to actually succeeding.

    18. KC

      It definitely really helped us succeed. I don't know that the project would be here if it were not for some of those elements.

    19. LR

      That's a powerful statement. This thing that has changed the way the world understands what is true would not have existed if you didn't set it up in this specific way.

    20. KC

      Yeah. I don't know if I would have begun the project had I not known we had that structure: the ability to make decisions, the autonomy, the speed, the ability to go fast. We started with that in 1.0, and it's been continued, and if anything furthered, in X. X as a whole company operates with a lot of those attributes, and I think it's one of the reasons the product is successful. Those are big reasons why, at least for me, and Jay can speak for himself, (clears throat) I have so much fun working on this. I love working on it. (clears throat) It's great to wake up every day and solve these problems. We get to do it efficiently, make decisions quickly, and build stuff that helps a lot of people. It's awesome.

    21. JB

Yeah. This, this, like, uh, Twitter-or-Elon way of operating is definitely more fun and, and the fact that, uh, like, that combined with the, the awesome mission is super important for internal recruiting. Like, I remember, uh, like, when I was first chatting to Keith about this back in early 2020, you know, I had another project. I was, you know, uh, working on a few, but one-

    22. KC

      (laughs)

    23. JB

      ... one was, like, personalize the number of push notifications that we send, uh, and it was, it drove a lot of DAU, um, without, like, losing opt-outs, uh, uh, significantly. Uh, so, you know, that, that was, like, setting me on track, uh, or, you know, if I had kept working on that, I could have probably gotten a promotion from that, uh, with low risk, or I could take this huge career ri- I mean, it's not as big a, a career risk as, like, joining or founding an actual external startup, but there is still career risk, I guess, in joining a team like this. So, so just I think all of the same aspects of recruiting that apply to external startups apply internally, and, and-

    24. KC

      Mm-hmm.

    25. JB

      ... you know, if you can have an exciting vision, that, uh, is key.

    26. KC

      Related to that and your list, Lenny, one thing we missed that's super important is that on this project, when I think of s- successful projects like it and startups, is that people are self-selecting to join. We did not assign anyone to this project. Like, people reached out to join or they applied to join the job. You know, I and the team interviewed every single person that joined the team and we're like, "We want that person on the team. They wanna be on the team." And so people are totally bought in to the goal, mission, the way the team works, the other people they're gonna be working with, and that makes a huge difference. So obv- like, a great time to do that is at the start of one of these things. Like, don't, if you're gonna try something crazy, like, I would, I, it's gonna be tough if you're just assigning random people to it, but if you let people opt in and self-select, you're much more likely to be successful. And one thing that I have observed at X which really surprised me was that this is also possible at a large scale. You know, one of the things Elon did when he bought the company was he basically asked people to self-select to stay. Like, you had, you had to click the button. And, you know, he sent an email out that was like, "Hey, Twitter 2.0." Like...

    27. LR

      Fork, fork in the road, right? And that's what it said.

    28. KC

      Fork in the road. Fork in the road, exactly. It's like-

    29. LR

      (laughs)

    30. KC

      ... "Twitter 2.0, you know, now X, it's gonna be hardcore. We're gonna do ambitious things. You're gonna work your butt off," you know, what, and you had to click on the form and say, "Yes, I wanna join." And I think that was really important for the company, because you want people to opt in to that. You want the people to be saying like, "Yeah, that's what I wanna do." And the company's gonna be a lot more successful. If people are unsure, it's, like, better for them probably to go do something else and where they're naturally more aligned and happier. And I thought that was a great approach to taking a large company and getting it down to people who are really excited about, you know, working together on a, on a mission. So, you know, for us, we did it from day one, which, I think is an easy way to do it, but it's possible to do it later as well.

  12. 58:561:05:30

    Working with Elon

    1. LR

      along the lines of just working for Elon within an org Elon runs that might surprise people about just the way of working that's interesting or surprising or where you think other companies might wanna think about adopting?

    2. KC

      I've always liked lean teams, but this has made me, my, my experience at X has made me change the way I would, I would think about running in future. Or, you know, if I were to start a company now it'd change the way I'd, I think about starting that company. It would be even leaner than I would have made it before. I've been amazed with just how much the team is able to accomplish, um, with a small group, and I think because of a small group. Like, when, shortly after the acquisition, um, you know, we had this product called Spaces. Um, it was, uh, it had been in the product before but it was, it was pretty small scale. And Elon wanted to run these large Spaces. I forget who the first people he was gonna bring on were, but he was gonna be there, you know, ultimately these things have gone on to host politicians and things like that. And he's like, "Guys, we gotta scale this up." I forget the number. He's like, "I, we need, we need to be able to scale like a million people," or something like that. I'm getting the numbers wrong, but you need to be able to scale way up. This is the kind of thing at 1.0 that would've taken a year if it had ever happened. And the team did it in, like, two or three weeks. And, uh, it was really exciting and inspiring to see. Like, we w- I didn't work on that, but I watched it from the outside and I'm like, "Wow, with this tiny team motivated behind a big goal" that was like, "Hey guys, it's not like are we gonna do this, it's y- we are going to do this." They got it done in two or three weeks. That must have felt amazing for them. It was certainly exciting to see. Um, but it's j- uh, I, I have definitely come to appreciate just how, um, lean something can be and, and not just get by but actually thrive because it's that lean.

    3. LR

      I think the point you made about people opting into that is important, 'cause I think a lot of people hearing that would be like, "I would never want to be, (laughs) asked to build something like that in two weeks." And I think a lot of people do, and w- would love that kind of experience, especially working with Elon, especially shipping something at that scale. But I think there's an important element there of just like, "Okay, I- I don't want to do that. I have other things to do in my life other than ship spaces." So I think that's a, I think that's a key point you've raised, of just, there's an opt-in step.

    4. KC

      Totally. I think the opt-in is important, and it may even be that you want to opt in one par- you know, at one point in your life, and maybe at another point in your life, something else is better. I think, you know, whatever it is you're choosing to do, it's nice t- to be opted in, to feel like it's aligned with how you want to spend your time.

    5. LR

      Something on my mind, and I don't know if you guys want to go here, but it's something I think a lot of people think about, is when Elon came in, he let go of 80% of folks, and everyone's just like, "Twitter is dead. It's all gonna fall apart. There's no way they can run this thing with that small of a staff." And clearly, it, they were wrong. Clearly, it's working great. It's like becoming a h- like a massive deal in the world, and continues to grow. Is there anything about that that you were surprised by, or anything about just like how it, it continues to operate so well in spite of that big shift?

    6. KC

      I think the, the, the leaner team, the, the reduced kind of like process and bureaucracy is a big reason it does move as fast as it does. Um, it's easier to get stuff done faster here. And, uh, yeah, I mean, I think that's, uh, I think it, I think it's that, that shrink, shrinking is actually a big reason for the increased pace of launches, the increased pace of experimentation. One thing that I noticed that, as a result of that, is the people who are here, they seem to all really feel like owners. Like, they te- like, they take the sense of responsibility that an owner takes in the product. Um, they'll try to track down what's wrong, fix whatever is needed, um, jump into any, to, to help build or fix, improve any system that needs help, even if it's outside of their space. And there's the flip side of that, too. For people who've worked at big companies, they may have experienced this thing where, where there's like ano- you want to change some- something in some other system or product. And so you reach out to that team, and like maybe they're a little resistant, or maybe they're like, "Oh, we'll get to that next quarter."

    7. LR

      They have their, they have their own goals to hit. Yeah. They don't care about-

    8. KC

Yeah, exactly. Like, they don't really necessarily want to help you, or they're busy. Here, you're like, "Hey guys, we need to do this thing with that other system you work on." And they're like, "Great. Here's the code. Here are the docs. You know, s- send us the phab if you have any questions, and we'll get it in." And it's just the thing, you can just jump in and, and get it done. And that kind of collaborative effort, like the sense of like shared ownership, I think, in, from my experience, was a result of the shrinking of the team, down to people who, you know, wanted to be there and work together to build this thing. So, I think that's been a really positive impact. It's not always easy. Certainly, like a lot of people have a lot of responsibilities. But, you know, they're here because they're up for it.

    9. JB

      Yeah. I think one other thing that's key, uh, is, uh, when, when you are forced to have such a small team, you know, deleting, well, this is important anyways, but deleting code is more important than writing it, a lot of the time. Uh, so, I, I think, so often, maybe due to promotion incentives or just regular human tendency, you, uh, you know, engineers have a, a tendency to add these little incremental wins that actually ha- add, you know, more of a long-term maintenance cost than is clear, because you just run a little one-month A/B test, you see this, you know, significant win, and you don't realize the maintenance burden you just added, uh, to your team for the rest of eternity, until you turn the thing off. So, I think there's a lot to be gained, and you get forced to do this, um, uh, by the way, when you have such a small team, is just deleting, you know, uh, auditing, auditing parts of your system and deleting the things where the maintenance cost is, um, worse than the, than the gains. So, I think th- we did have to do this across the company, um, after the, the big layoffs. And um, you know, systems are leaner now, and, and they can be worked on by fewer numbers of people now.

    10. LR

That's an amazing point. I remember Elon being like, "Here, we have to throw away the whole thing. We have to re-architect everything. It's-"

    11. JB

      Yeah.

    12. LR

      "... stupid the way it's built." And-

    13. JB

      Yeah, you don't have-

    14. LR

      ... sounds like it actually worked.

    15. JB

      ... yeah. You don't have... Well, we didn't, uh, you don't have to rewrite everything from scratch.

    16. LR

      Okay, I was just about to say.

    17. JB

      I mean, some things we did-

    18. LR

      Yeah.

    19. JB

      ... I guess, rewrite. But, uh, I, I mean, just even deleting the unnecessary cruft and keeping the rest of the core system, um-

    20. LR

      That's awesome. I love that we're creating kind of a formula to run these sorts of companies and teams. There's so much here. I wanna go back to the,

  13. 1:05:301:10:48

    Launching Birdwatch

    1. LR

      the building of the original product. We, I kind of took us on a long tangent, and an amazing tangent. But, uh, I heard a story of when you launched Birdwatch at that point. You specifically wanted to keep expectations very low. And there was like a gif in the thing, and it just looked like clearly this is not, um, ready for primetime. Talk about just how you do that, how you launched it in a way where people weren't like, um, "It's never gonna work."

    2. KC

We were very disciplined, I guess you could say, about having the product prove itself at ev- at every given s- point. Um, you know, when, when we built the first mock-ups, we had just, these were just like pictures of, d- depicting what community notes might look like. We showed those to people across the political spectrum. We saw, like, "Hey, people really like these, whether they're on the right or left, like they seem very open to reading these community notes, even when they are critical of people on their own side." So we're like, "All right, that gives us confidence that if we can build this, like if we can actually make this a reality, it's gonna work." Then there's the question of like, can we make it a reality? Like are, will people in the real world be able to write notes that are of this quality? And so, um, you know, we built, we had an internal pilot test version of this where you could like write notes. And we first basically ran this through like an Amazon MTurk type of, uh... uh, participant test, just to see, like, if you just, like, put some normal people in there, like will they be able to write these notes? And, you know, not all of those notes were good. But, like, it was clear that there were people out there who could write good notes. So then we're like, "Okay, this is possible." Like, "What will happen if we actually do this out in the real world?" And, like, "Let's run, let's run a pilot and find out." And so, we took that pilot, the, you know, we'd run the MTurk kind of test on, and we re- released it to, at first, a thousand people, tot- you know, totally out in public. And we didn't know what was gonna show up. Like, you could imagine, the notes could have been terrible. And, uh, and so we were talking, like, "Well, what do we do?" Like, "We're gonna put this out there. Everyone's gonna have all these questions. They're probably gonna be really skeptical." Like, and we know it might be a total dumpster fire. 
And so, like, "What do we do to, like, set expectations appropriately?" We felt like we could probably get there in the end, but we just didn't know it was gonna happen at first. We wanted to set expectations. And so we're like, "Well, why don't we just stick..." There's like the page where you see a post in the notes below. We're like, "Why don't we just stick a dumpster fire GIF, like, on that page?" And, uh, you know, you go there, you're like, "Hey, uh, you know, anything you see below here might just be a total dumpster fire." Um, at least it would show we, we were aware of that as a possible risk. Um, we, in the end, we did not do that. It cracked me up, um, but we thought it was kind of like-

    3. LR

      Oh, you didn't actually launch that? Okay, then so that was kind of just a concept, I guess?

    4. KC

      We, we had a, we had mock-ups of it, and every time I looked at the mock-up, I laughed. But, uh, uh, ultimately, we had so much to explain on that page, like, what is this thing and how does it work?

    5. LR

      Yeah.

    6. KC

Ultimately, we were like, "Okay, this is probably gonna, like, distract from the point." Um, so we pulled it. So I somewhat, I kind of wish maybe it had, uh, seen the light of day at one point. But yeah, ultimately, we kept it simple, and we focused that page on explaining what was going on here. But again, you know, we, um, as has happened many times with the project, um, you know, we put the pilot out there, and the notes were good. Like, they weren't all good. There was, it was a mixed bag, but, like, there was gold in there. And then from the very early days, with just the thousand contributors, it was obvious that, that people could write notes that were informative, that were neutral, that spoke to controversial, challenging topics, and that if we could just identify those from the rest, this was gonna work. Like, it was gonna work as well as the very first mock-ups we had made. So, that became the, the focus then: how do we sift out the gold from the rest?

    7. LR

I remember, uh, there was a, and I think you may have shared this with me, when someone noticed you guys were testing this, and they took screenshots and tweeted it, and I think Elon replied like, "This is cool."

    8. KC

      Yeah. Yeah, so in the, in the very, in the very early days when it was just a Figma prototype, we were running these, like, usertesting.com unmoderated studies. (laughs) And I guess one of the participants sent one to an NBC reporter, who like wrote a bunch of stories on it. Anyway, like that day, there was a lot of chatter about it on, on the service. And Elon, this is like, this is, to put this back in, you know, time perspective, this is, I think, 2020. So, two years before any acquisition stuff happened, Elon is just a Twitter user building rockets and electric cars and other cool stuff, and stumbles on this thing that dep- depicts the prototype that we've been testing, and he writes back, uh, "Definitely worth trying IMO." And I remember thinking that was cool back then. And it's interesting to see, like, eh, he- he's obviously had a very consistent point on it. Uh, I think that, you know, the idea was appealing, and he, uh, you know, has obviously been a big fan of it, um, in the product and been a big supporter and proponent. So, uh, yeah, it was kind of, it was kind of cool that it came from... That support has been from the very early days, before he was ever involved in the company.

    9. LR

I love that moment. It must have felt really wild for Elon to be commenting on this Figma prototype you were testing secretly.

    10. KC

      It was cool. It was cool.

    11. LR

      Oh, man. So when we were preparing for this

  14. 1:10:481:26:15

    The core principles behind Community Notes

    1. LR

      interview, I asked, uh, you guys, "What's, what's the main thing you want to make sure people get and understand about why community notes has been so effective?" And Keith, you specifically said that it was the principles behind how you wanted to approach this and how you continue to stick to this throughout. And we'll talk about how you kept it alive throughout all these different CEO changes and leaders.

    2. KC

      Mm-hmm.

    3. LR

      But just talk about these principles, like what the actual principles are and why that was so key to it working out.

    4. KC

      There are a number of principles that I think, when we first shared them with people at the company, seemed maybe a little bit crazy. Um, but I think they are, they are the reason the product works, and I think they've been very important. And we do, we come back to them regularly today, all the time. Probably the craziest one is just that this thing is going to be the voice of the people. It's going to represent the voice of people. It's not gonna represent the company's voice. So it is not a tech company deciding what shows. It is the people deciding what shows. And that had a lot of implications on the design. Like, first of all, there is no... We don't have a button that will change the status of the note, uh, of a note. So, if a note is showing because the people have rated it, and found it helpful, it is gonna show. Like, we can't change that. And that is the kind of thing that like ef- when we first proposed this, that's unsettling to people. They're like, "Wait, so like something can go up, and, and like we, you know, the company can't take it do-" or, you know, w- well, "Can't change its status, get it to stop showing." And we're like, "Yeah." And like, it has to work that well. If it doesn't work well enough to do that, then it doesn't work. If there's a problem with the note, this is like one of our, one of our key principles was if there's a problem with a note that's so bad you want to do something about it, it's a problem with the system. Like, we need to redesign the system to n- to n- be showing good notes. And so, so yeah. We had to, you know, get everyone comfortable with the idea that there was no button to, to change the status of a note. You know, similarly we, as we talked about earlier, we wanted this to represent all of humanity, and so we didn't want to be arbiters of who-... can come in and be a contributor and who can't. Like, so we, we, we open it to everyone. You just have to meet really basic objective criteria. 
Like, you have to have a verified phone to help reduce the likelihood of having, like, bots or things like that participating. But beyond that, it's random selection and it still is that way today. And, you know, again that people w- it took some time to get people comfortable with it. But I think the, the, the fact that this is the voice of the people and reflects their output through an open and transparent process is so key to both why it is good, like why it works, but also why it's trusted. So, I mean, that's number one. And, and y- it's, you know, will, I think will forever be at the heart of the product. Um, another one, um, that people thought was kind of crazy was transparency. You know, we're like, "We're, this, we, we..." The previous approaches to dealing with misleading info, they, it felt to a lot of people like sort of black box tech companies or media companies or elites or whatever making decisions. We're like, "They, people, people need to get comfortable with this. They need to trust this." So the whole thing has to be out in the open. Like, the code that decides what notes show has to be out in the open. All of the data and ratings that, that make it happen have to be out in the open. People should be able to take the code and data and replicate the whole service and vet that we have done exactly what we've said we've done. And they should be able to audit it. They should be able to go and look and say, like, "Hey, I think this part could be better." Or, like, if they think we're biased, they should be able to work, you know, work with the data and point it out, and if they, if people have good observations, that should factor back into the code. Um, and this is, again, something that's kinda difficult to get people comfortable with. Um, that everything is out there. Y- you can't cover anything up. But I think that's so essential to people trusting it. So, um, yeah. I mean, we, we, we set these out on day one. 
We go back to them constantly, because we're, we're, uh, we're always evolving the, the product and we are always, we're like, "Gotta make sure every new change is open." Like, whenever we update the code or the scoring system, there's an update in GitHub, and the data is published daily so you can download it. And so, um, yeah, I think those have been, those have been really central to the thing working.
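As a small illustration of the auditability Keith describes — anyone downloading the published data can tally it themselves — here's a minimal sketch. It assumes a locally downloaded tab-separated notes export, and the column names used (`noteId`, `currentStatus`) are illustrative placeholders; check the published schema for the real field names:

```python
import csv
from collections import Counter

def status_breakdown(path: str) -> Counter:
    """Tally note statuses from a locally downloaded notes TSV export.

    The column name 'currentStatus' is a hypothetical placeholder for
    whatever status field the published data actually uses.
    """
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            counts[row.get("currentStatus", "UNKNOWN")] += 1
    return counts
```

The point is less the tally itself than the fact that nothing blocks it: because the code and daily data dumps are public, an outside auditor can recompute any aggregate the team claims.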

Episode duration: 1:47:57


Transcript of episode 8dgyqYHLcCI
