Lenny's Podcast

The ultimate guide to A/B testing | Ronny Kohavi (Airbnb, Microsoft, Amazon)

Ronny Kohavi, PhD, is a consultant, teacher, and leading expert on the art and science of A/B testing. Previously, Ronny was Vice President and Technical Fellow at Airbnb, Technical Fellow and corporate VP at Microsoft (where he led the Experimentation Platform team), and Director of Data Mining and Personalization at Amazon. He was also honored with a lifetime achievement award by the Experimentation Culture Awards in September 2020 and teaches a popular course on experimentation on Maven.

In today’s podcast, we discuss:
• How to foster a culture of experimentation
• How to avoid common pitfalls and misconceptions when running experiments
• His most surprising experiment results
• The critical role of trust in running successful experiments
• When not to A/B test something
• Best practices for helping your tests run faster
• The future of experimentation

Enroll in Ronny’s Maven class, Accelerating Innovation with A/B Testing, at https://bit.ly/ABClassLenny. Promo code “LENNYAB” will give $500 off the class for the first 10 people to use it.

Brought to you by:
• Mixpanel—Event analytics that everyone can trust, use, and afford: https://mixpanel.com/startups
• Round—The private network built by tech leaders for tech leaders: https://www.round.tech/apply?utm_campaign=lennys-letter&utm_medium=email-ad&utm_source=email-marketing&utm_content=send-2-2023-07-27
• Eppo—Run reliable, impactful experiments: https://www.geteppo.com/

Find the full transcript at: https://www.lennysnewsletter.com/p/the-ultimate-guide-to-ab-testing

Where to find Ronny Kohavi:
• Twitter: https://twitter.com/ronnyk
• LinkedIn: https://www.linkedin.com/in/ronnyk/
• Website: http://ai.stanford.edu/~ronnyk/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• Twitter: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Ronny’s background
(04:29) How one A/B test helped Bing increase revenue by 12%
(09:00) What data says about opening new tabs
(10:34) Small effort, huge gains vs. incremental improvements
(13:16) Typical fail rates
(15:28) UI resources
(16:53) Institutional learning and the importance of documentation and sharing results
(20:44) Testing incrementally and acting on high-risk, high-reward ideas
(22:38) A failed experiment at Bing on integration with social apps
(24:47) When not to A/B test something
(27:59) Overall evaluation criterion (OEC)
(32:41) Long-term experimentation vs. models
(36:29) The problem with redesigns
(39:31) How Ronny implemented testing at Microsoft
(42:54) The stats on redesigns
(45:38) Testing at Airbnb
(48:06) Covid’s impact and why testing is more important during times of upheaval
(50:06) Ronny’s book, Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing
(51:45) The importance of trust
(55:25) Sample ratio mismatch and other signs your experiment is flawed
(1:00:44) Twyman’s law
(1:02:14) P-value
(1:06:27) Getting started running experiments
(1:07:43) How to shift the culture in an org to push for more testing
(1:10:18) Building platforms
(1:12:25) How to improve speed when running experiments
(1:14:09) Lightning round

Referenced:
• Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing: https://experimentguide.com/
• Seven rules of thumb for website experimenters: https://exp-platform.com/rules-of-thumb/
• GoodUI: https://goodui.org
• Defaults for A/B testing: http://bit.ly/CH2022Kohavi
• Ronny’s LinkedIn post about A/B testing for startups: https://www.linkedin.com/posts/ronnyk_abtesting-experimentguide-statisticalpower-activity-6982142843297423360-Bc2U
• Sanchan Saxena on Lenny’s Podcast: https://www.lennyspodcast.com/sanchan-saxena-vp-of-product-at-coinbase-on-the-inside-story-of-how-airbnb-made-it-through-covid-what-he8217s-learned-from-brian-chesky-brian-armstrong-and-kevin-systrom-much-more/
• Optimizely: https://www.optimizely.com/
• Optimizely was statistically naive: https://analythical.com/blog/optimizely-got-me-fired
• SRM: https://www.linkedin.com/posts/ronnyk_seat-belt-wikipedia-activity-6917959519310401536-jV97
• SRM checker: http://bit.ly/srmCheck
• Twyman’s law: http://bit.ly/twymanLaw
• “What’s a p-value” question: http://bit.ly/ABTestingIntuitionBusters
• Fisher’s method: https://en.wikipedia.org/wiki/Fisher%27s_method
• Evolving experimentation: https://exp-platform.com/Documents/2017-05%20ICSE2017_EvolutionOfExP.pdf
• CUPED for variance reduction/increased sensitivity: http://bit.ly/expCUPED
• Ronny’s recommended books: https://bit.ly/BestBooksRonnyk
• Chernobyl on HBO: https://www.hbo.com/chernobyl
• Blink cameras: https://blinkforhome.com/
• Narrative not PowerPoint: https://exp-platform.com/narrative-not-powerpoint/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed.

Ronny Kohavi (guest) · Lenny Rachitsky (host)
Jul 27, 2023 · 1h 23m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–4:29

    Ronny’s background

    1. RK

      I'm very clear that I'm a big fan of test everything, which is any code change that you make, any feature that you introduce has to be in some experiment because, again, I've observed this sort of surprising result that even small bug fixes, even small changes can sometimes have surprising unexpected impact. And so I don't think it's possible to experiment too much. You have to allocate some time to these high-risk, high-reward ideas. We're gonna try something that's most likely to fail, but if it does win, it's gonna be a home run. And you have to be ready to understand and agree that most will fail. And it's amazing how many times I've seen people come up with new designs or a radical new idea and they believe in it, and that's okay. I'm just cautioning them all the time to say, "If you go for something big, try it out, but be ready to fail 80% of the time."

    2. LR

      (instrumental music) Welcome to Lenny's Podcast, where I interview world-class product leaders and growth experts to learn from their hard-won experiences building and growing today's most successful products. Today my guest is Ronny Kohavi. Ronny is seen by many as the world expert on A/B testing and experimentation. Most recently, he was VP and technical fellow of Relevance at Airbnb, where he led their search experience team. Prior to that, he was corporate vice president at Microsoft, where he led Microsoft's experimentation platform team. Before that, he was director of data mining and personalization at Amazon. He's currently a full-time advisor and instructor. He's also the author of the go-to book on experimentation, Trustworthy Online Controlled Experiments. And in our show notes, you'll find a code to get a discount on taking his live cohort-based course on Maven. In our conversation, we get super tactical about A/B testing. Ronny shares his advice for when you should start considering running experiments at your company, how to change your company's culture to be more experiment-driven, what the signs are that your experiments are potentially invalid, why trust is the most important element of a successful experiment culture and platform, and how to get started if you want to start running experiments at your company. He also explains what a p-value actually is and something called Twyman's law, plus some hot takes about Airbnb and experiments in general. This episode is for anyone who is interested in either creating an experiment-driven culture at their company or just fine-tuning one that already exists. Enjoy this episode with Ronny Kohavi after a short word from our sponsors. This episode is brought to you by Mixpanel. Get deep insights into what your users are doing at every stage of the funnel at a fair price that scales as you grow. Mixpanel gives you quick answers about your users from awareness to acquisition through retention. And by capturing website activity, ad data, and multi-touch attribution right in Mixpanel, you can improve every aspect of the full user funnel. Powered by first-party behavioral data instead of third-party cookies, Mixpanel is built to be more powerful and easier to use than Google Analytics. Explore plans for teams of every size and see what Mixpanel can do for you at mixpanel.com/friends/lenny. And while you're at it, they're also hiring, so check it out at mixpanel.com/friends/lenny. This episode is brought to you by Round. Round is the private network built by tech leaders for tech leaders. Round combines the best of coaching, learning, and authentic relationships to help you identify where you want to go and accelerate your path to get there, which is why their wait list tops thousands of tech execs. Round is on a mission to shape the future of technology and its impact on society. Leading in tech is uniquely challenging, and doing it well is easiest when surrounded by leaders who understand your day-to-day experiences. When we're meeting and building relationships with the right people, we're more likely to learn, find new opportunities, be dynamic in our thinking, and achieve our goals. Building and managing your network doesn't have to feel like networking. Join Round to surround yourself with leaders from tech's most innovative companies. Build relationships, be inspired, take action. Visit round.tech/apply and use promo code LENNY to skip the wait list. That's round.tech/apply.

  2. 4:29–9:00

    How one A/B test helped Bing increase revenue by 12%

    1. LR

      Rony, welcome to the podcast.

    2. RK

      Thank you for having me.

    3. LR

      So you're known by many as maybe the leading expert on A/B testing and experimentation, which I think is something every product company eventually ends up trying to do, often badly. And so I'm excited to dig quite deep into the world of experimentation and A/B testing to help people run better experiments. Uh, so thank you again for being here.

    4. RK

      Great goal. Thank you.

    5. LR

      Let me start with kind of a fun question. What is maybe the most unexpected A/B test you've run or maybe the most surprising result from an A/B test that you've run?

    6. RK

      Yeah, so I think the, the opening example that I use in my book and in my class is the most surprising public example we can talk about, and this was kind of an interesting experiment. Somebody proposed to change the way that ads were displayed on Bing, the search engine, and he basically said, "Let's take the second line and move it, promote it to the first line so that the title line becomes larger." And when you think about that, and there's, you know, if you're gonna look in my, uh, book or in the class, there's an actual diagram of what happened, uh, the screenshots. But if you think about it, it just, realistically, it looks like a meh idea. Like why would this be such a reasonable, interesting thing to do? Uh, and indeed when we went back to the backlog, it was on the backlog for months and languished there, and many things were rated higher. But the point about this is it's trivial to implement. So if you think about return on investment, we could get the data by having some engineer spend a couple of hours implementing it. And that's exactly what happened. Somebody at Bing who kept seeing this in the backlog said, "My God, we're spending too much time discussing it, I could just implement it." And he did. He spent a couple of days implementing it, and as is, you know, the common, uh, thing at Bing, he launched the experiment. Uh, and a funny thing happened. We had an alarm, big escalation, something is wrong with the revenue metric. Now, this alarm fired several times in the past when there were real mistakes, where somebody would log revenue twice, or, you know, there's some data problem. But in this case, there was no bug. That simple idea increased revenue by about 12%. And this is something that just doesn't happen. Uh, we can talk later about Twyman's law, but that was the first reaction, which is, "This is too good to be true. Let's find a bug." Uh, and we did look, several times, and we replicated the experiment several times, and there was nothing wrong with it. This thing was worth 100 million dollars at the time, when Bing was a lot smaller. And the key thing is, it didn't hurt the user metrics. So, it's very easy to increase revenue by doing things like, you know, displaying more ads, which is a trivial way to raise revenue. But it hurts the user experience, and we've done the experiments to show this. In this case, this was just a home run that improved revenue, didn't significantly hurt the, the guardrail metrics, and so we were like in awe of, you know, how a trivial change had the biggest revenue impact in Bing's history.

    7. LR

      And that was basically shifting it two lines, right? Switching two lines in the search results, right?

    8. RK

      And this was just moving the second line to the first line. Now, you then go and run a lot of experiments to understand what happened here. Is it the fact that the title line has a bigger font? Sometimes different color? So we ran a whole bunch of experiments. And this is what usually happens when you have a breakthrough, you start to understand more about, what can we do? And there was suddenly a shift towards, okay, what are other things we could do that would allow us to improve revenue? And we came up with a lot of follow-on ideas that helped a lot. But to me, this was an example of a tiny change that was the best revenue-generating idea in Bing's history. And we didn't rate it properly, right? Nobody gave this the, the priority that, in hindsight, it deserves. And that, that's something that happens often. I mean, we are often humbled by how bad we are at predicting the outcome of experiments.

  3. 9:00–10:34

    What data says about opening new tabs

    1. RK

    2. LR

      This reminds me of a classic experiment at Airbnb while I was there, and we'll talk about Airbnb in a bit. The search team just ran a small experiment of, what if we were to open a new tab every time someone clicked on a search result, instead of just going straight to that listing? And that was one of the biggest wins in search experiences.

    3. RK

      Yeah. And by the way, I, I don't know if you know the history of this, but I tell about this in class. We did this experiment way back around 2008, I think. And so this predates Airbnb.

    4. LR

      Hmm.

    5. RK

      And I, I remember it was heavily debated. Like, why would you open something in a new tab? The users didn't ask for it. Uh, there was a lot of pushback from the designers. And we ran that experiment, and again, it was one of these highly surprising results that we learned so much from. So we first did this, it was done in the UK for opening Hotmail, and then we moved it to MSN so it would open search in a new tab. And all the set of experiments were highly, highly beneficial. We published this, and I have to tell you, when I came to Airbnb, I talked to our joint friend, Ricardo, about this, and it was sort of done. It was very beneficial and then it was semi-forgotten, which is one of the things you learn about institutional memory: when you have winners, make sure to address them and remember them. So, it had been done at Airbnb for a long time before I joined, that listings open in a new tab. But other things that were designed later were not done that way. And I reintroduced this to the team and we saw big improvements.

  4. 10:34–13:16

    Small effort, huge gains vs. incremental improvements

    1. RK

    2. LR

      Shout out to Ricardo, our mutual friend who helped make this conversation happen. There's this, like, holy grail of experiments that I think people are always looking for, of like one, you know, hour of work and it creates this massive result. I imagine this is very rare, and, uh, don't expect this to happen. I guess, in your experience, how often do you find kind of one of these gold nuggets just lying around?

    3. RK

      Yeah. So again, this is a topic that's, uh, near and dear to my heart. Everybody wants these amazing results, and, you know, I show multiple of these in chapter one of my book, you know, small effort, huge gain. But as you said, they're very rare. I think most of the time the winnings are made sort of inch by inch. And there's a graph that I show in my book, a real graph of how Bing Ads has managed to improve their revenue per 1,000 searches over time. And every month, you can see a small improvement and a small improvement. Sometimes there's a degradation because of legal reasons or other things; you know, there was some concern that we were not marking the ads properly. So you have to suddenly do something that you know is gonna hurt revenue. But yes, I think most results are inch by inch. You improve small amounts, lots of them. The best examples I can give are a couple that I can speak about. One is at Bing, the relevance team, hundreds of people all working to improve Bing relevance. They have a metric, we'll talk about the OEC, the overall evaluation criterion, and their goal is to improve it by two percent every year. It's a small amount. And that two percent, you can see here's a point one, here's a point one five, here's a point two. And then they add up to around two percent every year, which is amazing. Another example that I am allowed to speak about, uh, from Airbnb is the fact that we ran some 250 experiments in my tenure there in search relevance. And again, small improvements added up. So this became overall a six percent improvement to revenue, you know. So when you think about six percent, it's a big number, but it came out not of one idea, but many, many smaller ideas that each gave you a small gain. And in fact, um, again, there's another number I'm allowed to say. Of these experiments, 92% failed to improve the metric that we were trying to move. So only 8% of our ideas actually were successful at moving the key metrics.

  5. 13:16–15:28

    Typical fail rates

    1. RK

    2. LR

      There's so many threads I want to follow here. But let me follow this one right here. You just mentioned that 92% of experiments failed. Is that typical in your experience running and seeing experiments run at a lot of companies? Like, what should people expect when they're running experiments? What percentage should they expect to fail?

    3. RK

      Well, first of all, I published three different numbers for my career. So overall at Microsoft, about 66%, two thirds of ideas fail, right? And don't think the 66 is exact; like, you know, it's about two thirds. At Bing, which is a much more optimized domain after we've been optimizing it for a while, the failure rate was around 85%. So it's harder to improve something that you've been optimizing for a while. Uh, and then at Airbnb, uh, this 92% number is, you know, the highest failure rate that I've observed. Now, I've quoted other sources that, you know, it's not that I worked at groups that were particularly bad. Uh, Booking, uh, Google Ads, other companies published numbers that are around 80 to 90% failure rate of ideas, er, of experiments, and this is where the distinction is important. It is important to realize that when you have a platform, it's easy to get this number. You look at how many experiments were run and how many of them launched. Not every experiment maps to an idea. So it's possible that when you have an idea, your first implementation, you start an experiment, boom, it's egregiously bad because you have a bug. In fact, 10% of experiments tend to be aborted on the first day. Those are usually not that the idea is bad, but that there is an implementation issue or something we haven't thought about that forces an abort. You may iterate and pivot again. And ultimately, if you do two or three or four pivots or bug fixes, you may get to a successful launch. But those numbers of 80 to 92% failure rate are of experiments. Very humbling. Uh, I know that every group that starts to run experiments, they always start off by thinking that somehow they're different and their success rate is going to be much, much higher. And they're all

  6. 15:28–16:53

    UI resources

    1. RK

      humbled.

    2. LR

      You mentioned that you had this, uh, pattern of clicking a link and opening a new tab as a thing that just worked at a lot of different places.

    3. RK

      Yeah.

    4. LR

      Are there other versions of this? Do you collect kind of a list of like, here's things that often work when we want to move a metric?

    5. RK

      Oh, absolutely.

    6. LR

      Yeah. Are there some you, some you could share? I don't know if you have a list in your head.

    7. RK

      I can give you two resources. Uh, one of them is a paper that we wrote called Rules of Thumb. And what we tried to do at that time at Microsoft was to just look at thousands of experiments that ran and extract some patterns. And so that's, that's one paper that, uh, we can then put in the notes.

    8. LR

      Perfect.

    9. RK

      Um, but there's another, more, more accurate, I would say, uh, resource that's useful that I recommend to people. And it's a site called goodui.org. And goodui.org is exactly the site that tries to do what you're saying at scale. So this guy, his name is Jakub Linowski, he asks people to send him results of experiments. And he derives, he puts them into patterns. There's probably like 140 patterns, I think, at this point. And then for each pattern, he says, um, "Well, who has that helped? How many times? And by how much?" So you have an idea of, you know, this worked three out of five times. And it was a huge win. In fact, you can find that, open

  7. 16:53–20:44

    Institutional learning and the importance of documentation and sharing results

    1. RK

      a new window in there.

    2. LR

      I feel like you feed that into ChatGPT, and you have basically a product manager creating a roadmap, uh, tool.

    3. RK

      In general, by the way, this is all about... A lot of that is institutional memory, right? Which is, can you document things well enough so that the organization remembers the successes and failures and learns from them? I think one of the mistakes that some companies make is they launch a lot of experiments and never go back and summarize the learnings. So I've actually put a lot of effort into this idea of institutional learning, of doing the quarterly meeting of the most surprising experiments. By the way, surprising is another, uh, question that people, uh, often are not clear about. What is a surprising experiment? To me, a surprising experiment is one where the estimated result beforehand and the actual result differ by a lot. So that absolute value of the difference is large. Now you can expect something to be great and it's flat. Well, you learn something. But if you expect something to be small and it turns out to be great, like that ad title, uh, promotion, then you've learned a lot. Or conversely, if you expect that something will be small and it's very negative, you can learn a lot by understanding why this was so negative... and that's interesting. So we focus not just on the winners, but also surprising losers, things that people thought would be a no-brainer to run, and then for some reason it was very negative. And sometimes, it's that negative that gives you insight. Actually, you know, I'm just coming up with one example of that that I should mention. We were running this experiment at Microsoft to improve the Windows indexer, and the team was able to show on offline tests that it does much better at indexing, and, you know, they showed the relevance is higher and all these good things. And then they ran it as an experiment. And you know what happened? Surprising result. The indexing relevance was actually higher, but it killed the battery life.

    4. LR

      Hmm.

    5. RK

      So here's something that comes from left field that you didn't expect. It was consuming a lot more CPU on laptops, it was killing the laptops, and therefore, okay, we learned something. Let's document that, let's remember this so that, you know, we now take this other factor into account as we design the next iteration.

    6. LR

      What advice do you have for people to actually remember these surprises? You said that a lot of it is institutional. What do you recommend people do so that they can actually remember this when people leave, say, three years later?

    7. RK

      Document it, you know, right now. We had a large deck internally of these successes and failures, and we encouraged people to look at them. The other thing that's very beneficial is just to have your whole history of experiments and have some ability to search by keywords, right? So if I have an idea, I type a few keywords and see, from the thousands of experiments that ran... And by the way, these are very reasonable numbers. At Microsoft, just to let you know, when I left in 2019, we were on a rate of about 20 to 25,000 experiments every year. So every working day, we were starting something like 100 new treatments. Big numbers. So when you're running in a group like Bing, which is running thousands and thousands of experiments, you want to be able to ask, "Has anybody done an experiment on this or this or this?" And so that searching capability is in the platform, but more than that, I think just doing the quarterly meeting of the most successful, most interesting, sorry, not just successful, uh, most interesting experiments is very key. And that also helps the flywheel of experimentation.
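      One lightweight way to get the searchable experiment history Ronny describes is a simple registry of experiment records with keyword search. The sketch below is hypothetical (it is not the Microsoft or Bing platform), and every field and function name in it is invented for illustration.

```python
# Hypothetical sketch of a searchable experiment registry for institutional memory:
# store every experiment with its hypothesis, outcome, and tags, and let anyone
# search by keyword before starting a new test. All names here are made up.
from dataclasses import dataclass, field

@dataclass
class ExperimentRecord:
    name: str
    hypothesis: str
    result_summary: str           # e.g., "revenue per 1,000 searches +12%, guardrails flat"
    tags: list = field(default_factory=list)

def search(registry, keyword):
    """Return records whose name, hypothesis, summary, or tags mention the keyword."""
    kw = keyword.lower()
    return [r for r in registry
            if kw in r.name.lower()
            or kw in r.hypothesis.lower()
            or kw in r.result_summary.lower()
            or any(kw in t.lower() for t in r.tags)]

registry = [
    ExperimentRecord(
        name="Ad title promotion",
        hypothesis="Promoting the second ad line into the title increases revenue",
        result_summary="Revenue up ~12%, guardrail metrics flat",
        tags=["ads", "SERP", "revenue"],
    ),
]
print([r.name for r in search(registry, "ads")])  # ['Ad title promotion']
```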

  8. 20:44–22:38

    Testing incrementally and acting on high-risk, high-reward ideas

    1. RK

    2. LR

      That's a good segue to something I wanted to touch on, which is there's often a, I guess a wariness of running too many experiments and being too data-driven, and the sense that experimentation just leads you to these micro-optimizations and you don't really innovate and do big things. What's your perspective on that? And just, can you be too experiment-driven, in your experience?

    3. RK

      I'm very clear that I'm a big fan of test everything, which is any code change that you make, any feature that you introduce has to be in some experiment. Because again, I've observed this sort of surprising result that even small bug fixes, even small changes can sometimes have surprising unexpected impact. And so I don't think it's possible to experiment too much. I think it is possible to focus on incremental changes, because some people say, "Well, you know, if we only tested 17 things around this..." and you have to think about, it's like in stocks. You need a portfolio, you need some experiments that are incremental that move you in the direction that you know you're going to be successful over time if you just try enough. But some experiments have... You have to allocate some time to these high-risk, high-reward ideas. We're going to try something that's most likely to fail, but if it does win, it's going to be a home run. And so you have to allocate some efforts to that, and you have to be ready to understand and agree that most will fail. Most of these high... And it's amazing how many times I've seen people come up with new designs or a radical new idea and they believe in it, and that's okay. I'm just cautioning them all the time to say, "If you go for something big,

  9. 22:38–24:47

    A failed experiment at Bing on integration with social apps

    1. RK

      try it out, but be ready to fail 80% of the time." Right? And one true example that, again, I'm able to talk about, because we put it in my book, is we were at Bing trying to change the landscape of search. And one of the ideas, the big ideas was we're going to integrate with social. So we hooked into the Twitter, uh, firehose feed and we hooked into Facebook, and we spent 100 person-years on this idea, and it failed. (laughs) You don't see it anymore. (laughs) It existed for about a year and a half, and all the experiments were just negative to flat, and you know, it was an attempt. It was fair to try it. I think it took us a little long to fail, to decide that this was a failure, but at least we had the data. We had hundreds of experiments that we tried. None of them were a breakthrough. And I remember sort of mailing, uh, Qi Lu with some statistics showing that, you know, it's time to abort, it's time to fail on this. Uh, and you know, he decided to continue more, and it's a million dollar question, you know, do you continue and then maybe the breakthrough will come next month, or, uh, do you abort? And a few months later, we aborted.

    2. LR

      That reminds me of at Netflix they tried a social component that also failed. Airbnb early on, there was a big social attempt to make like, here's your friends have stayed at these Airbnbs. Completely not, had no impact. So maybe that's one of these learnings that we should talk about.

    3. RK

      Yeah, this is hard. This is hard. And, uh, but that's... Again, that's the value of experiments.

    4. LR

      Yeah.

    5. RK

      Which are this oracle that gives you the data. You may be excited about things, you might believe it's a good idea, but ultimately the arbiter, the oracle, is the controlled experiment. It tells you whether users are actually benefiting from it. Whether you and the users, the company

  10. 24:47–27:59

    When not to A/B test something

    1. RK

      and the users.

    2. LR

      There's obviously a bit of overhead and downsides to running an experiment, setting it all up and making sure... You know, analyzing the results. Is there anything that you ever don't think is worth A/B testing?

    3. RK

      First of all, there are some necessary ingredients to A/B testing. And I'll just say it outright, not every domain is amenable to A/B testing, right? You can't A/B test mergers and acquisitions, right?

    4. LR

      Hm.

    5. RK

      It's something that happens once. You either acquire or you don't acquire. So you do have to have some necessary ingredients. You need to have enough units, mostly users, in order for the statistics to work out. So yeah, if you're too small, it may be too early to A/B test. But what I find is that in software, it is so easy to run A/B tests and... I don't say it's easy to build a platform, but once you've built a platform, the incremental cost of running an experiment should approach zero. And we got to that at Microsoft where, after a while, the cost of running experiments was so low that nobody was questioning the idea that everything should be experimented with. Now, I don't think we were there at Airbnb, for example. The platform at Airbnb was much less mature and required a lot more analysts in order to interpret the results and to find issues with it. So I do think there's this trade-off. If you're willing to invest in a platform, it is possible to get the marginal cost to be close to zero. But when you're not there, it's still expensive and there are... There may be reasons why not to run A/B tests.

    6. LR

      You talked about how you may be too small to run A/B tests. And this is a constant question for startups, is when should we start running A/B tests?

    7. RK

      Right.

    8. LR

      Do you have kind of a heuristic or a rule of thumb of just like, here's a time you should really start thinking about running an A/B test?

    10. RK

      Yeah, yeah. The million dollar question that everybody asks. (laughs)

    11. LR

      (laughs)

    12. RK

      So, I actually... We'll put this in the notes, but I gave a talk last year. I called it Practical Defaults. And one of the things I show there is that unless you have at least tens of thousands of users, the math, the statistics just don't work out for most of the metrics that you're interested in. In fact, you know, I gave an actual practical number of a retail site with some conversion rate, trying to detect changes that are at least, you know, 5% beneficial, which is something that startups should focus on. They shouldn't focus on the 1%, they should focus on the 5 and 10%. Then you need something like 200,000 users, right? So start experimenting when you're in the tens of thousands of users. You'll only be able to detect large effects, and then once you get to 200,000 users, then the magic starts happening. Then you can start testing a lot more. Then you have, uh, the ability to test everything and make sure that you're not degrading, uh, and getting value out of experimentation. So, you asked for a rule of thumb? 200,000 users, you're magical. Below that, start building the culture, start building the platform, start integrating so that as you scale, you start to

  11. 27:59–32:41

    Overall evaluation criterion (OEC)

    1. RK

      see the value.
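      For the curious, here is a rough sketch of the power math behind that rule of thumb. It assumes a two-sided test at alpha = 0.05 with 80% power and an illustrative 5% baseline conversion rate; those exact parameters are assumptions for illustration, not numbers from the episode.

```python
# Rough sketch of the sample-size math behind the "~200,000 users" rule of thumb.
# Assumes a two-sided z-test, alpha = 0.05, 80% power, and a 5% baseline conversion
# rate; all of these are illustrative choices, not figures from the episode.
from scipy.stats import norm

def users_per_arm(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate users per arm needed to detect a relative lift in a conversion rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile that gives the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

n = users_per_arm(baseline_rate=0.05, relative_lift=0.05)
print(f"~{n:,.0f} users per arm, ~{2 * n:,.0f} total")  # roughly 122,000 per arm, ~244,000 total
```

      Under these same assumptions, detecting a 1% relative lift instead of 5% requires roughly 25 times as many users, which is why small startups are better off hunting for 5 to 10% effects, as Ronny suggests.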

    2. LR

      Love it. Coming back to this kind of concern people have of experimentation keeps you from innovating and taking big bets. I know you have this framework, uh, overall evaluation criterion, and I think that helps with this. Can you talk a bit about that?

    3. RK

      The OEC or the Overall Evaluation Criterion is something that I think many people that start to dabble in A/B testing miss. And the question is, what are you optimizing for? And it's a much harder question than people think, because it's very easy to say, "We're gonna optimize for money, revenue." But that's the wrong question, because you can do a lot of bad things that will improve revenue. So there has to be some countervailing metric that tells you, how do I improve revenue without hurting the user experience? Okay, so let's take a good example, uh, with search. You can put more ads on a page and you will make more money. There's no doubt about it. You will make more money in the short term. The question is, what happens to the user experience and how is that gonna impact you in the long term? So we've run those experiments and we were able to map out, you know, this number of ads causes this much increase to churn. This number of ads causes this much increase to the time that users take to find a successful result. And we came up with an OEC that is based on all these metrics that allows you to say, "Okay, I'm willing to take this additional money if I'm not hurting the user experience by more than this much." Right? So there's a trade-off there. Uh, one of the nice ways to phrase this is as a constrained optimization problem: I want you to increase revenue, but I'm gonna give you a fixed amount of average real estate that you can use, right? So for one query, you can have zero ads. For another query, you can have three ads. For a third query, you can have wider, bigger ads. I'm just gonna count the pixels that you take, the vertical pixels, and I will give you some budget. And if you can, under the same budget, make more money, you're good to go, right? So that to me turns the problem from a badly defined "let's just make more money," right? Any page can start plastering more ads and make more money short term, but that's not the goal. The goal is long-term growth and revenue. Then you need to insert these other criteria: what am I doing to the user experience? One way around it is to put this constraint. Another one is just to have these other metrics. Again, something that we did to look at the user experience. How long does it take the user to reach a successful click? What percentage of sessions are successful? These are key metrics that were part of the overall evaluation criterion that we've used. I can give you another example, by the way, from-

    4. LR

      Yeah.

    5. RK

      ...you know, the hotel industry or Airbnb that we both worked at. Um, you can say, I want to improve conversion rate. But you can be smarter about it and say, it's not just enough to convert a user to buy or to pay for a listing. I want them to be happy with it several months down the road when they actually stay there, right? So that could be part of your OEC to say, what is the rating that they will give to that listing when they actually stay there. And that's a, that causes an interesting problem because you don't have this data now. You're gonna have it three months from now when they actually stay. So you have to build the training set that allows you to make a prediction about whether this user, whether Lenny is going to be happy at this cheap place, or whether, no, I should offer him something more expensive because Lenny likes to stay at nicer places where the water actually is hot and comes out of the faucet.

    6. LR

      That is true.

    7. RK

      (laughs)

    8. LR

      Okay. So it sounds like the core to this approach is basically have a kind of a drag metric that makes sure you're not hurting something that's really important to the business. And then being very clear on what's the long-term metric we care most about.

    9. RK

      To me, the, the key here, the key word is lifetime value.

    10. LR

      Mm. Mm-hmm.

    11. RK

      Which is, you have to define the OEC such that it is causally predictive of the lifetime value of the user, right? And that's what causes you to think about things properly, which is, am I doing something that just helps me short term or am I doing something that will help me in the long term? Once you put that model of lifetime value, people say, "Okay, what about retention rates?" We can measure that. What about the time to achieve a task? We can measure that. And those are these countervailing metrics that make it,

  12. 32:41–36:29

    Long-term experimentation vs. models

    1. RK

      make the OEC useful.
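      As a toy illustration of the constrained-OEC framing above, here is a hypothetical ship decision that only accepts a treatment if it raises revenue while staying within an average ad-pixel budget and without degrading a user-experience guardrail. Every metric name, threshold, and budget value below is made up; this is not Bing's actual OEC.

```python
# Toy sketch of the constrained OEC idea: a treatment "wins" only if it lifts revenue
# while respecting a fixed average pixel budget and a user-experience guardrail.
# All names, thresholds, and budgets are hypothetical.
from dataclasses import dataclass

@dataclass
class VariantResult:
    revenue_per_1k_searches: float
    avg_ad_pixels: float               # average vertical pixels used by ads
    time_to_success_click_sec: float   # guardrail: time to a successful click

PIXEL_BUDGET = 250.0                # fixed real-estate budget (hypothetical)
MAX_GUARDRAIL_REGRESSION = 0.02     # tolerate at most a 2% guardrail degradation (hypothetical)

def ship_decision(control: VariantResult, treatment: VariantResult) -> bool:
    within_budget = treatment.avg_ad_pixels <= PIXEL_BUDGET
    guardrail_ok = (treatment.time_to_success_click_sec
                    <= control.time_to_success_click_sec * (1 + MAX_GUARDRAIL_REGRESSION))
    revenue_up = treatment.revenue_per_1k_searches > control.revenue_per_1k_searches
    return within_budget and guardrail_ok and revenue_up

control = VariantResult(revenue_per_1k_searches=1000.0, avg_ad_pixels=240.0, time_to_success_click_sec=12.0)
treatment = VariantResult(revenue_per_1k_searches=1120.0, avg_ad_pixels=248.0, time_to_success_click_sec=12.1)
print(ship_decision(control, treatment))  # True: more revenue, within budget, guardrail within tolerance
```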

    2. LR

      And to understand these longer term metrics, what I'm hearing is, use kind of models and forecasts and predictions? Or would you suggest sometimes use like a long-term holdout or some other approach? Like, what do you find is the best way to see these long-term effects?

    3. RK

      Yeah. So there are two ways that, uh, I like to think about it. One is you can run long-term experiments for the goal of learning something. So I mentioned at Bing, we did run these experiments where we increased the ads and decreased the ads so that we would understand what happens to key metrics. The other thing is you can just build models that, um, use some of our background knowledge or use some, you know, data science to look at historical data. I'll give you another good example of this. When I came to Amazon, one of the teams, uh, that reported to me was the email team. It was not the transactional emails you get when you buy something, but the team that sent these recommendations, you know. Here's a book by an author that you bought. Uh, here's a product that we recommend. And the question is, how do we give credit to that team? And the initial version was, well, whenever a user comes from the email and purchases something on Amazon, we're going to give that email credit. Well, it turned out this had no countervailing metric. The more emails you send, the more money you're going to credit the team. And so that led to spam. Literally a really interesting problem. The team just ramped up the number of emails that they were sending out and claimed to make more money and their fitness function improved. So then we backed off and we said, okay, we can either phrase this as a constraint satisfaction problem: you're allowed to send a user an email every X days. Or, which is what we ended up doing, let's model the cost of spamming the users. Okay, what's that cost? Well, when they unsubscribe, we can't mail them. Okay? So we did some data science study on the side and we said, what is the value that we're losing from an unsubscribe? Right? And we came up with a number. It was a few dollars, but the point was now we have this countervailing metric. We say, here's the money that we generate from the emails. Here's the money that we're losing on long-term value. What's the trade-off? And then when we started to incorporate this formula, more than half the campaigns that were being sent were negative.
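      A minimal sketch of that countervailing metric for email, assuming a simple per-campaign score of revenue minus unsubscribes times an estimated long-term value lost per unsubscribe. The "few dollars" figure comes from the episode, but the exact formula and the $3 default below are illustrative, not Amazon's actual model.

```python
# Minimal sketch of the countervailing metric for email campaigns: credit the campaign
# for revenue it drives, but charge it for the future value lost when users unsubscribe.
# The $3 per unsubscribe and the function name are illustrative assumptions.
def campaign_net_value(revenue_from_clicks: float,
                       unsubscribes: int,
                       value_lost_per_unsubscribe: float = 3.0) -> float:
    """Net value of a campaign; a negative result means it should not have been sent."""
    return revenue_from_clicks - unsubscribes * value_lost_per_unsubscribe

# Example: a campaign that earns $400 but drives 200 unsubscribes is net negative.
print(campaign_net_value(revenue_from_clicks=400.0, unsubscribes=200))  # -> -200.0
```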

    4. LR

      Hmm.

    5. RK

      Okay? So it was a huge insight, uh, at Amazon about how to send the right campaigns. And this led, and this is what I like about these discoveries, this fact that we integrated the unsubscribe led us to a new feature to say, well, let's not lose their future lifetime value through email. When they unsubscribe, let's offer them by default to unsubscribe from this campaign. So when you get an email, uh, you know, there's a new book by the author, the default to unsubscribe would be unsubscribe me from author emails. And so now the, the negative, the countervailing metric is much smaller. And so again, this was a breakthrough in our ability to send more emails and understand based on what users were unsubscribing from, which ones are really beneficial.

    6. LR

      I love the surprising results.

    7. RK

      We all love them. I mean, this is the humbling reality and, you know, people talk about the fact that A/B testing sometimes leads you to incremental changes. I actually think that many of these small insights lead to fundamental insights about, you know, which areas to go after, some strategies we should take, some things we should develop.

  13. 36:29–39:31

    The problem with redesigns

    1. RK

      Helps a lot.

    2. LR

      This makes me think about how every time I've done a full redesign of a product, I don't think it has ever been a positive result, and then the team always ends up having to claw back what they just hurt-

    3. RK

      Yep.

    4. LR

      ... and try to figure out what (laughs) they messed up. Is that your experience too?

    5. RK

      Absolutely. Yeah, in fact I've published some of these in LinkedIn posts, showing a large set of, you know, big launches, the redesigns that dramatically failed, uh, and it happens very often. So, the right way to do this is to say, "Yes, we wanna do a redesign, but let's do it in steps and test on the way, and adjust." So you don't need to make 17 new changes at once, many of which are going to fail. Start to move incrementally in a direction that you believe is beneficial, adjust on the way.

    6. LR

      The worst part of those experiences, I find, is it took, like, I dunno, three to six months to build it, and by the time it's launched, it's just like, "We're not gonna un-launch this." Everyone's been working in this direction, all the new features are assuming this is gonna work, and you're basically stuck, right?

    7. RK

      I mean, this is the sunk cost fallacy, right? "We invested so many years in it, let's launch this even though it's bad for the user." No, that's terrible. Yeah. Yeah, so, uh, this is the other advantage of recognizing this humble reality that most ideas fail, right? If you believe in the statistics that I published, then doing 17 changes together is more likely to be negative. Do them in smaller increments, learn from... it's called OFAT, one factor at a time. Do one factor, learn from it, and adjust. Of the 17, maybe you have four good ideas. Those are the ones that will launch and be positive.

    8. LR

      I generally agree with that and always try to avoid a big redesign, but it's hard to avoid them completely. There's often team members that are really passionate and like, "We just need to rethink this whole experience. We're not gonna incrementally get there." Have you found anything effective in helping people either see this perspective, or just making a larger bet more successful?

    9. RK

      And by the way, I'm not opposed to large redesigns. I try to give the team the data to say, "Look, here are lots of examples where big redesigns fail. Try to decompose your redesign, if you can't decompose it to one factor at a time, to a small set of factors at a time, and learn from these smaller changes what works and what doesn't." Now, it's also possible to do a complete redesign and just, as you said yourself, be ready to fail. Right?

    10. LR

      Mm-hmm.

    11. RK

      I mean, do you, do you really wanna work on something for six months or a year and then run the A/B test and realize that you've hurt revenues or other key metrics by several percentage points? And a data-driven organization will not allow you to launch, what are you

  14. 39:31–42:54

    How Ronny implemented testing at Microsoft

    1. RK

      gonna write in your annual review?

    2. LR

      Yeah, but nobody ever thinks it's gonna fail (laughs) . They go, "No, we got this. We've talked to so many people-"

    3. RK

      But, but I think organizations-

    4. LR

      "... so many users."

    5. RK

      ... that start to run experiments are humbled early on from the smaller changes.

    6. LR

      Yeah.

    7. RK

      Right? You're right. Nobody... (laughs) I'll tell you a funny story. When I came from Amazon to Microsoft, I joined a group and for one reason or another, that group disbanded a month after I joined. And so, people came to me and said, "Look, you just joined the company, you're at partner level, you figure out how you can help Microsoft." And I said, "I'm gonna build an experimentation platform, because nobody at Microsoft is running experiments, and 50%, more than 50% of ideas at Amazon that we tried failed." And the classical response was, "We have better PMs here."

    8. LR

      Hm.

    9. RK

      Right? There was this complete denial that it was possible that 50% of the ideas Microsoft was implementing would fail, in a three-year development cycle, by the way; this is how long it took Office to release, it was classical, every three years we release. And the data came about showing that, yeah, you know, Bing was the first to truly implement experimentation at scale, and we shared the surprising results with the rest of the company. And this was, you know, credit to Qi Lu and, uh, Satya Nadella, they were the ones that said, "Ronny, you know, you try to get Office to run experiments. We'll give you the air support." Uh, and it was hard, but we did it. You know, it took a while, but Office started to run experiments, and they realized that many of their ideas were failing.

    10. LR

      You said that there's a site of failed redesigns. Was that, is that in your book or is that a site that you can point people to, to kind of help build this case?

    11. RK

      It's a, I teach this in my class, but I-

    12. LR

      Okay.

    13. RK

      ... I've, I think I've posted this on LinkedIn and answered some questions, I'm happy to put that in the notes.

    14. LR

      Okay, cool. We'll put that in the show notes, 'cause I think that's the kind of, uh, data that often helps convince a team, "Maybe we shouldn't rethink this entire onboarding flow from scratch. Maybe we should kind of iterate towards it and learn as we go." This episode is brought to you by Eppo. Eppo is a next generation A/B testing platform built by Airbnb alums for modern growth teams. Companies like DraftKings, Zapier, ClickUp, Twitch, and Cameo rely on Eppo to power their experiments. Wherever you work, running experiments is increasingly essential, but there are no commercial tools that integrate with a modern growth team's stack. This leads to wasted time building internal tools or trying to run your own experiments through a clunky marketing tool. When I was at Airbnb, one of the things that I loved most about working there was our experimentation platform, where I was able to slice and dice data by device types, country, user stage. Eppo does all that and more, delivering results quickly, avoiding annoying prolonged analytic cycles, and helping you easily get to the root cause of any issue you discover. Eppo lets you go beyond basic clickthrough metrics and instead use your North Star metrics like activation, retention, subscription, and payments. Eppo supports tests on the front end, on the back end, email marketing, even machine learning clients. Check out Eppo at GetEppo.com. That's GetEppo.com and 10X your experiment

  15. 42:54–45:38

    The stats on redesigns

    1. LR

      velocity. Is it ever worth just going, "Let's just rethink this whole thing and just give it a shot to break out of a lo- uh, local minima or local maxima," essentially?

    2. RK

      Yeah. So I think what you said is fair. I mean, I do want to allocate some percentage of resources to big bets. As you said, we've been optimizing this thing to hell. Could we completely redesign it? It's a very valid idea. You may be able to break out of a local minima. What I'm telling you is 80% of the time you will fail. So be ready for that, right? What, what people usually expect is, "My redesign is going to work." No. You're most likely going to fail. But if you do succeed, it's a breakthrough.

    3. LR

      I like this 80% rule of thumb. Is that just like a simple way of thinking about it, that 80% of your experiments will fail, you feel?

    4. RK

      Yeah, rule of thumb. And you know, you had, uh... You know, I've heard people say it's 70% or, or 80%, but it's in that area where I think, you know, when you talk about how much to invest in the known versus the high risk, high reward, that's usually the right percentage that most organizations end up doing this allocation, right? You interviewed Shreyas, I think he mentioned that, uh, you know, that at Google it was like 70%, you know, the search and ads, and it's a 20% for some of the apps and new stuff, and then it's the 10% for infrastructure.

    5. LR

      Yeah. And I think the, the most important point there is if you're not running an experiment, 70% of stuff you're shipping is hurting your business.

    6. RK

      Well, not hurting. It may... It's flat to negative. Some of them are flat. Uh, and by the way, flat to me, if something is not stat sig, that's a no-ship 'cause you've just introduced more code. There is a maintenance overhead to shipping your stuff. I've heard people say, "Look, we already spent all this time. The team will be demotivated if we don't ship it." And I'm, "No, that's wrong, guys." Right? You know, let's make sure that we understand that shipping this project that has no value is complicating the code base. Maintenance costs will go up. You don't ship on flat unless it's a sort of a legal requirement, right? When legal comes along and says, "You have to do X or Y," you have to ship on flat or even negative, and that's understandable. But again, I think that's something that a lot of people make the mistake of saying, "Legal told us we have to do this, therefore we're gonna take the hits." No. Legal gave you a framework that you have to work under. Try three different things and ship the one that hurts the least.

    7. LR

      I love that. It reminds me when Airbnb launched the rebrand, even that they ran as an experiment with the entire homepage redesign, the new logo and all that. And I think there was a long-term holdout even, and I think it was positive in the end from what I remember.

  16. 45:38–48:06

    Testing at Airbnb

    1. LR

      Hmm.

    2. RK

      Yeah.

    3. LR

      Speaking of Airbnb, I want to chat about Airbnb briefly. I know there's... I know you're limited in what you can share, but, uh, it's interesting that Airbnb seems to be moving in this other direction where it's becoming a lot more top-down, Brian vision oriented. And Brian's even talked about how he's less, uh, motivated to run experiments. He doesn't want to run as many experiments as they used to. Things are going well, and so, you know, it's hard to argue with the success potentially. You worked there for many years. You ran the search team essentially. I guess just what was your experience like there? And then roughly what's your sense of how things are going, where it's going?

    4. RK

      So as you, as you note, I'm restricted from talking about Airbnb. I will say a few things that I am allowed to say. One is, in my team, in search relevance, everything was A/B tested. So while Brian can focus on some of the design aspects, the people who are actually doing, you know, the neural networks and the search, everything was A/B tested to hell. So nothing was launching without an A/B test. We had targets around improving certain metrics, and everything was done with A/B tests. Now, other teams, some did, some did not. I will say that, you know, when you say things are going well, I think we don't know the counterfactual.

    5. LR

      Mm-hmm.

    6. RK

      Uh, I believe that had Airbnb kept people like Greg Reilly, who was pushing to be a lot more data-driven, and had Airbnb run more experiments, it would've been in a better state than today. But it's the counterfactual we don't know.

    7. LR

      That's a really interesting perspective. Yeah. Airbnb's such an interesting natural experiment of a way of doing things differently. There's like de-emphasizing experiments, and also they turned off paid ads during COVID and I think... I don't know where it is now, but it feels like it's become a much smaller part of the growth strategy. Who knows if they've ramped it back up to where it was? But I think it's gonna be a really interesting case study looking back, I don't know, five, 10 years from now.

    8. RK

      It's a one-off experiment where it is hard to assign value to some of the things that Airbnb is doing. I personally believe it could have been a lot bigger-

    9. LR

      Hmm.

    10. RK

      ... and a lot more successful if it had run more controlled experiments. But I can't speak about some of those that I ran and that showed, uh, that some of the things that were initially untested were actually negative and could be better.

    11. LR

      All right. Mysterious.

  17. 48:06–50:06

    Covid’s impact and why testing is more important during times of upheaval

    1. LR

      One more question. Airbnb, you were there during COVID, which was quite a wild time for Airbnb, with Sanchan on the podcast talking about all the craziness that went on when travel basically stopped and there was a sense that Airbnb was done and travel is not gonna happen for years and years. What's your take on experimentation in that world where you have to really move fast, make crazy decisions, and make big decisions? What was it like during that time?

    2. RK

      So I, I think actually in a state like that, it's even more important to run A/B tests, right? Because what you want to be able to see is, if we're making this change, is it actually helping in the current environment? You know, there's this idea of external generalizability. Is it gonna work out now, during COVID? Is it gonna generalize later on? These are things that you can really answer with controlled experiments. And sometimes it means that you might have to replicate them six months down the road, when COVID, say, uh, is not as impactful as it is. Uh, saying that you have to make decisions quickly, to me, I'll point you to the success rate. Like, if in peacetime you're wrong two thirds to 80% of the time, why would you be suddenly right in wartime, right, in COVID time?

    3. LR

      (laughs)

    4. RK

      So, uh, I, I don't believe in the idea that because bookings went down materially, the company should suddenly, you know, not be data-driven and do things differently. I think if Airbnb stayed the course, did nothing, their revenue would have gone up in the same way.

    5. LR

      Fascinating.

    6. RK

      Right? In fact, if you look at one investment, one big investment that was done at the time, was online experiences. And the initial data wasn't very promising, and I think today it's a footnote.

    7. LR

      Yeah. What a, another case study for the history books, Airbnb Experiences.

  18. 50:06–51:45

    Ronny’s book, Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing

    1. LR

      I wanna shift a little bit and talk about your book, which you mentioned a couple of times. It's called Trustworthy Online Controlled Experiments, and I think it's basically the book on A/B testing. Let me ask you just, uh, what surprised you most about writing this book and putting it out and, and the reaction to it?

    2. RK

      I was pleasantly surprised that it sold more than what we thought, and more ca- more than what Cambridge predicted.

    3. LR

      Hm.

    4. RK

      So when, when first we were approached by Cambridge after a tutorial that we did to write a book, I was like, "I don't know. This is too small of a niche area."

    5. LR

      Yeah. (laughs)

    6. RK

      Um, and I, uh, you know, they were saying, "So, you'll be able to sell a few thousand copies and help the world." And, uh, I found, you know, my co-authors, which were great, and, you know, we wrote a book that we thought is not... statistically oriented, has fewer formulas than you normally see, and focuses on the practical aspects and on trust, which is the key. The book, as I said, you know, was more successful. It sold over 20,000 copies in English. It was translated to Chinese, Korean, Japanese, and Russian. And so, it's, it's great to see that we helped the world become more data-driven with experimentation, and I'm happy because of that. I was pleasantly surprised. By the way, all proceeds from the book are donated to charity, so I'm, if I'm pitching the book here, uh, I, there is no financial gain for me, uh (laughs) , from having more copies sold. I think we made that decision, which was a good decision. All

  19. 51:45-55:25

    The importance of trust

    1. RK

      proceeds go to charity.

    2. LR

      Amazing. I didn't know that. We'll link to the book in the show notes. You talked about how trust... Like, it's, trust is in the title. You just mentioned how important trust is to experimentation. A lot of people talk about, "How do I run experiments faster?" You focus a lot on trust. Why is trust so important in running experiments?

    3. RK

      So to me, the experimentation platform is the safety net, and it's an oracle. So, it serves really two purposes. The safety net means that if you launch something bad, you should be able to abort quickly, right? Safe deployments, safe velocity. There are some names for this. But this is one key value that the platform can give you. The other one, which is the more standard one, is at the end of the two-week experiment, we will tell you what happened to your key metric and to many of the others, surrogate and debugging and guardrail metrics. Trust builds up. It's easy to lose. And so, to me, it is very important that when you present this and say, "This is science. This is a controlled experiment. This is the result," you better believe that this is trustworthy. And so, I focused on that a lot. I think it allowed us to gain the organizational trust that this is really... And the nice thing is, when we, we built all these checks to make sure that the experiment is correct, if there was something wrong with it, we would stop and say, "Hey, something is wrong with the experiment." Uh, and I think that's something that some of the early implementations in other places did not do, and it was a big mistake.

    4. LR

      Mm-hmm.

    5. RK

      Uh, I mentioned this in my book, so I can mention this here. Optimizely, in its early days, was very statistically naive. They sort of said, "Hey, we're... real time. We can compute your p-values in real time, uh, and then you can stop an experiment when the p-value is statistically significant." That is a big mistake. That inflates what's called your type 1 error, or the false positive rate, materially. So if you think you've got a 5% type 1 error, or you aim for that, p-value less than 0.05, using real-time p-value monitoring to decide when to stop, you would probably have a 30% error rate. So, what this led to is that people that started using Optimizely thought that the platform was telling them they're very successful. But then they actually started to see, "Well, it told us this is positive revenue, but I don't see this over time. Like, by now, we should have made double the money." Uh, so questions started to come up around the trust in the platform. There's a very famous post that somebody wrote about how Optimizely almost got me fired-

    6. LR

      Hm.

    7. RK

      ... by a person who basically said, "Look, I came to the org. I said, 'We have all these successes,' but then I said, 'Something is wrong.'" And he tells of how he ran an A/A test, where there is no difference between the A and the B, and Optimizely told him that it was statistically significant too many times. Um, Optimizely learned. You know, several people pointed this out. I pointed this out, (claps hands) uh, in my Amazon review of the book, uh, that the Optimizely authors wrote early on. I said, "Hey, you're not doing the statistics correctly." You know, Ramesh Johari at Stanford pointed this out, became a consultant to the company, and then they fixed it. But, to me, that's a very good example of how to lose trust. They lost a lot of trust in the market, they lost all this trust because they built something that had a very much inflated

  20. 55:25-1:00:44

    Sample ratio mismatch and other signs your experiment is flawed

    1. RK

      error rate.
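
      To make the peeking problem concrete, here is a minimal simulation sketch (the run count, batch size, and number of looks are arbitrary illustrative choices, not figures from the episode): it runs A/A tests where there is no real difference, checks the p-value after every batch of users, and stops at the first significant result. The fraction of runs that ever stop is the realized type 1 error, and it comes out far above the nominal 5%.

      ```python
      # A/A peeking simulation: no real treatment effect, but we "peek" at the
      # p-value after every batch and stop as soon as it drops below 0.05.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_runs, n_peeks, batch = 2_000, 20, 500   # arbitrary illustrative sizes

      stopped_early = 0
      for _ in range(n_runs):
          a = rng.normal(size=n_peeks * batch)  # control
          b = rng.normal(size=n_peeks * batch)  # treatment, same distribution
          for k in range(1, n_peeks + 1):
              res = stats.ttest_ind(a[:k * batch], b[:k * batch])
              if res.pvalue < 0.05:             # "stop when statistically significant"
                  stopped_early += 1
                  break

      # A single fixed-horizon test would reject about 5% of the time; with
      # peeking, the realized false positive rate is several times higher.
      print(f"Realized type 1 error with peeking: {stopped_early / n_runs:.0%}")
      ```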

    2. LR

      That is, uh, pretty scary to think about. You've been running all these experiments and they weren't actually telling you accurate results? What are signs that what you're doing may not be valid, uh, if you're starting to run experiments, and then just how do you avoid having that situation? What kind of tips can you share for people trying to run experiments?

    3. RK

      Yeah. You know, there's a whole chapter on that in my book, but I'll say maybe the one thing that is the most common occurrence by far, which is a sample ratio mismatch. Now, what is a sample ratio mismatch? You design the experiment to send 50% of users to control and 50% of users to treatment, based on a random number, uh, or, you know, a hash function. If you get something off from 50%, it's a red flag. So let's take a real example. Uh, let's say you're running an experiment and it's large, it's got a million users, and you got 50.2%. So people say, "Well, I don't know, it's not gonna be exactly the same. Is 50.2 reasonable or not?" Well, there's a formula that you can plug in. I have a spreadsheet available for those that are interested. You tell it here's how many users are in control, here's how many users are in treatment, my design was 50/50. And it tells you the probability that this split could have happened by chance. Now, in a case like this, you plug in the numbers and it might tell you that this should happen in one in half a million experiments. Well, unless you've run half a million experiments, it is very unlikely that you would get a 50.2 versus 49.8 split, and therefore something is wrong with the experiment, right? I, I remember when we first implemented this check, we were surprised to see how many experiments suffered from this. There's a paper that was published in 2018 where we share that, at Microsoft, even though we'd been running experiments for a while, around 8% of experiments suffered from a sample ratio mismatch. And it's a big number, right? Think about this: you're running 20,000 experiments a year, and 8% of them are invalid, and somebody has to go down and understand what happened. We know that we can't trust the results, but why? So over time, you begin to understand there's something wrong with the, uh, data pipeline. There's something that happens with bots. Bots are a very common factor for causing a sample ratio mismatch.
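
      The "formula that you can plug in" is, in essence, a goodness-of-fit test of the observed split against the designed split. A minimal sketch, using the 50.2%/49.8% million-user example from above (the 0.001 alert threshold is a common convention, not a figure quoted in the episode):

      ```python
      # Sample ratio mismatch (SRM) check: compare observed control/treatment
      # counts against the designed 50/50 split with a chi-squared test.
      from scipy.stats import chisquare

      control, treatment = 502_000, 498_000        # observed users (50.2% / 49.8%)
      total = control + treatment
      expected = [total * 0.5, total * 0.5]        # designed 50/50 split

      result = chisquare([control, treatment], f_exp=expected)
      print(f"SRM p-value: {result.pvalue:.2e}")
      if result.pvalue < 0.001:                    # commonly used alert threshold
          print("Sample ratio mismatch -- do not trust this experiment's results.")
      ```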

    4. LR

      Mm-hmm.

    5. RK

      Um, so there's a whole... That paper that, uh, was published by my team talks about how to diagnose sample ratio mismatches. In the last probably year and a half, it was amazing to see all these third-party companies implement sample ratio mismatch checks. And all of them were reporting, "Oh my God, you know, 6%, 8%, 10%." Uh, so it's, it's sometimes fun to go back and say, "How many of your results in the past were invalid before you had this sample ratio mismatch test?"

    6. LR

      Yeah, that's frightening. Is the most common reason this happens that you're assigning users in kind of the wrong place in your, in your code?

    7. RK

      So when you say most common, I think the most common is bots. Somehow they hit the control or the treatment in different proportions because you changed the website, the bot may fail to parse the page and try to hit it more often. That's a, a classical example. Another one is just the data pipeline.

    8. LR

      Mm-hmm.

    9. RK

      Um, we've had cases where we were trying to remove bad traffic under certain conditions, and it was skewed between control and treatment. I've seen people that start an experiment in the middle of the site on some page, but they don't realize that some campaign is pushing people to it, uh, from the side. So there are multiple reasons. It is surprising how often this happens. And, uh, I'll tell you a funny story, which is when we first added this test to the platform, we just put a banner saying, "You have a sample ratio mismatch, do not trust these results." And we noticed that people ignored it. They were presenting results that had this banner. And so we blanked out the scorecard, we put a big, you know, red, "Can't see this result, you have a sample ratio mismatch. Click okay to expose the results." And why do we need that okay? We need that okay button because you want to be able to debug the reasons. And sometimes the metrics help you understand why you have a sample ratio mismatch. So we blanked out the scorecard, we had this button, and then we started to see that people who pressed the button still presented the results of experiments with sample ratio mismatches. So we ended up with a, an amazing compromise, which is every number in the scorecard was highlighted with a red line-

    10. LR

      (laughs)

    11. RK

      ... (laughs) so that if you took a screenshot-

    12. LR

      Mm-hmm.

    13. RK

      ... other people could tell you had a sample ratio mismatch. (laughs)

    14. LR

      Freaking product managers.

    15. RK

      This is, this is, uh, intuition. People just say, "Well, my SRM was small, therefore I can still present the results." People want to see success. I mean, this is a natural bias. And then we have to be very conscientious and fight that bias and say, "When something looks too good to be true, investigate."

  21. 1:00:44-1:02:14

    Twyman’s law

    1. RK

    2. LR

      Which is a great segue to something you mentioned briefly, uh, something called Twyman's law.

    3. RK

      Yeah.

    4. LR

      Can you talk about that?

    5. RK

      Yeah, so Twyman's law, you know, the, the general statement is that any figure that looks interesting or different is usually wrong. Uh, it was first said by this person in the UK who worked in radio media. But I'm a big fan of it, and, uh, you know, my main claim to people is if the result looks too good to be true... You know, your normal movement in an experiment is under 1% and you suddenly have a 10% movement, hold a celebratory dinner.

    6. LR

      (laughs)

    7. RK

      Like, it was just your first reaction, right? Let's take everybody to a fancy dinner 'cause we just improved revenue by millions of dollars.

    8. LR

      (laughs)

    9. RK

      Hold that dinner, investigate, because there's a large probability that something is wrong with the result. And I will say that nine times out of 10 when we call out Twyman's law, it is the case that we find some flaw in the experiment. Now, there are obviously outliers, right? That first experiment that I shared, where we promoted and made long ad titles, that was successful, but it was replicated multiple times and double- and triple-checked and everything was good about it. Many other results that were that big turned out to be false.

    10. LR

      Hm.

    11. RK

      So I'm a big, I'm a big, big fan of Twyman's law.

    12. LR

      Mm-hmm.

    13. RK

      There's a deck, I can also give this in the notes, where I shared some real examples, uh, of Twyman's law.

    14. LR

      Amazing.

  22. 1:02:14-1:06:27

    P-value

    1. LR

      I wanna talk about rolling this out at companies and things that ru- you run into that fail. But before I get to that, I'd love for you to explain just p-value. I know that people kind of misunderstand it, and this might be a good time to just help people understand what is it actually telling you, a p-value of say 0.05?

    2. RK

      Uh, I don't know if this is the, the right forum for explaining p-values, 'cause the definition of a p-value is simple. What it hides is very complicated.

    3. LR

      Mm-hmm.

    4. RK

      So I'll say one thing, which is many people take 1 minus the p-value as the probability that your treatment is better than control. So you ran an experiment, you got a p-value of 0.02. They think there's a 98% probability that the treatment is better than the control. That is wrong. Okay? So rather than defining p-values, I wanna caution everybody that the most common interpretation is incorrect. The p-value is a conditional probability: it assumes that the null hypothesis is true, and we're computing the probability of seeing data like the data we're seeing under that null hypothesis. In order to get the probability that most people want, we need to apply Bayes' rule and invert the probability, from the probability of the data given the hypothesis to the probability of the hypothesis given the data. For that, we need an additional number, which is the prior probability that the, uh, hypothesis that you're testing is, uh, successful or not. That's an unknown. What we can do is take historical data and say, "Look, people fail two thirds of the time or 80% of the time." And we can apply that number and compute that. We've done that in a paper that I will give in the notes, uh, so that you can assess the number that you really want, what's called the false positive risk. So I think that's something for people to internalize, that what you really wanna look at is this false positive risk, which tends to be much, much higher than the 5% that people think. Right? I think the classical example is Airbnb, where the failure rate was very, very high: when you get a statistically significant result... Let me actually pull the notes-

    5. LR

      Mm-hmm.

    6. RK

      ... so that I have the actual number. If you're at Airbnb, or Airbnb Search, where the success rate is only 8%, and you get a statistically significant result with a p-value less than 0.05, there is a 26% chance that this is a false positive result. Right? It's not 5%. It's 26%. So that's the number that you should have in your mind. And that's why when I worked at Airbnb, one of the things we did is we said, "Okay, if you're less than 0.05 but above 0.01, rerun, replicate." When you replicate, you can combine the two experiments and get a combined p-value, uh, using something called Fisher's method or Stouffer's method, and that gives you the joint probability, and that's usually much, much lower. So if you get two 0.05s or something like that, then the joint probability that you've got them is much, much lower.
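
      As a rough sketch of the Bayes-rule inversion described above: the 80% statistical power below is an assumed, illustrative number, so the output will not match the 26% figure exactly; the exact value depends on the assumed power and on how the 0.05 threshold is treated.

      ```python
      # False positive risk: among statistically significant results, what
      # fraction are false positives, given a prior success rate for ideas?
      def false_positive_risk(prior_success, alpha=0.05, power=0.8):
          significant_given_null = alpha * (1 - prior_success)   # false positives
          significant_given_real = power * prior_success         # true positives
          return significant_given_null / (significant_given_null + significant_given_real)

      # Illustrative inputs: 8% of ideas succeed, alpha = 0.05, assumed 80% power.
      print(f"False positive risk: {false_positive_risk(0.08):.0%}")
      ```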

    7. LR

      Wow. I have never heard it described that way. Makes me think about how, like, even data scientists in our teams are always just like, "This isn't perfect. Like, we're not 100% sure this experiment is positive." But on balance, if we're launching positive experiments, we're probably doing good things. It's okay if sometimes we're wrong.

    8. RK

      By the way, it's true. On balance, you're probably better than 50/50. But people don't appreciate how high that 26% that I mentioned is. And the reason that I want to be sure is that it ties back to this idea of learning, the institutional knowledge. What you wanna be able to do is share a success with the org. And so you wanna be really sure that you're successful. So by lowering the p-value, by forcing teams to work with a p-value maybe below 0.01 and to do replication on the higher ones, you can be much more successful. And, and the false positive rate will be much, much lower.
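
      Combining an original experiment with its replication, as described above, is available off the shelf; a minimal sketch with two hypothetical borderline p-values:

      ```python
      # Combine the p-values of an original experiment and its replication.
      from scipy.stats import combine_pvalues

      p_original, p_replication = 0.04, 0.04   # hypothetical borderline results

      _, p_fisher = combine_pvalues([p_original, p_replication], method="fisher")
      _, p_stouffer = combine_pvalues([p_original, p_replication], method="stouffer")

      # Two borderline results combine into a considerably smaller joint p-value.
      print(f"Fisher:   {p_fisher:.4f}")
      print(f"Stouffer: {p_stouffer:.4f}")
      ```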

    9. LR

      Fascinating. And also shows the value of keeping track of just what percentage of experiments are failing historically at the company, or within that specific

  23. 1:06:27-1:07:43

    Getting started running experiments

    1. LR

      product. Say someone listening wants to start running experiments. They ha- say they have tens of thousands of users at this point. What would be the first couple steps you'd recommend?

    2. RK

      Well, so if they have somebody in the org that has previously been involved in experimentation, that's a good person to consult internally. Uh, I think the, the key decision is whether you want to build or buy. Uh, there's a whole series of eight sessions that I posted on LinkedIn where I invited guest speakers to talk about this problem. So if people are interested, they can look at what the vendors say and what agencies say about the build-versus-buy question. And it's usually not a zero or one, it's usually both. You build some and you buy some, and it's a question of do you build 10% or do you build 90%? I think for people starting out, the third-party products that are available today are pretty good. This wasn't the case when I started working. So when I started running experiments at Amazon, we were building the platform because nothing existed. Same at Microsoft. I think today, there are enough vendors that provide good, trustworthy experimentation platforms that, uh, I would say it's now a good option to consider

  24. 1:07:43-1:10:18

    How to shift the culture in an org to push for more testing

    1. RK

      using one of those.

    2. LR

      Say you're at a company where there's resistance to experimentation and, and A/B testing. Whether it's a startup or a bigger company, what have you found works in helping shift that culture and how long does that usually take, especially at a larger company?

    3. RK

      My general experience is with Microsoft, where, you know, we, we went with this beachhead of Bing.

    4. LR

      Mm-hmm.

    5. RK

      We were running a few experiments, and then we were asked to focus on Bing. And we scaled experimentation and built a platform at scale at Bing. Once Bing was successful and we were able to share all these surprising results, I think that many, many more people in the company were amenable. It also helped a lot that, you know, there's the usual cross-pollination, people from Bing moved out to other groups, and that helped these other groups say, "Hey, there's a better way to build software." So I think if you're starting out, find a place, find a team where experimentation is easy to run. And by that, I mean they're launching often, right? Don't go with the team that launches every six months, or, you know, Office used to launch every three years. Go with the team that launches frequently. You know, they're running on sprints, uh, they launch every week or two, sometimes they launch daily. I mean, Bing used to launch multiple times a day. And then make sure that you understand the question of the OEC. Is it clear what they're optimizing for, right? There are some groups where you can come up with a good OEC. Some groups are harder. You know, I remember one funny example was the microsoft.com website, which, this is not MSN, this is microsoft.com, has, like, multiple different constituencies: it's a support site, there's the ability to sell software through the site, and it warns you about, you know, safety and updates. It has so many goals. I remember when the team said, "We wanna run experiments," and I brought the group in and some of the managers, and I said, "Do you know what you're optimizing for?" It was very funny because they, they surprised me. They said, "Hey Ronny, we read some of your papers. We know there's this term called OEC. We decided that time on site is our OEC." And I said, "Wait a minute. One of your main purposes is as a support site, so is people spending more time on a support site a good thing or a bad thing?" And then half the room thought that more time is better and half the room thought that more time is worse. So an OEC is bad if directionally you can't agree on it.

    6. LR

      Mm-hmm.

    7. RK

      (laughs)

  25. 1:10:18-1:12:25

    Building platforms

    1. RK

    2. LR

      That's a great tip. Along these same lines, I know you're a big fan of platforms and building a platform to run experiments versus just one-off experiments. Can you just talk briefly about that to give people a sense of where they probably should be going with their experimentation approach?

    3. RK

      Yeah, I mean, so I think the motivation is to bring the marginal cost of experiments down to zero. So the more you self-service, right? Go to a website, set up your experiment, define your targets, define the metrics that you want, right? Uh, people don't appreciate that the number of metrics starts to grow really fast if you're doing things right. At Bing, you could define 10,000 metrics that you wanted to be in your scorecard.

    4. LR

      Mm-hmm.

    5. RK

      Big numbers. It got so big, and people said it was computationally inefficient, that we broke the metrics into templates, so that if you were launching a UI experiment, you would get this set of 2,000; if you were doing a revenue experiment, you would get this set of 2,000. Uh, so the point was: build a platform that can quickly allow you to set up and run an experiment and then analyze it. I think, you know, one of the things that I will say about Airbnb is the analysis was relatively weak, and so lots of data scientists were hired to compensate for the fact that the platform didn't do enough. And this happens in other organizations too, where there's this trade-off. If you're building a good platform, invest in it so that more and more automation will allow people to look at the analysis without the need to involve a data scientist. We published a paper, again, I'll give it in the notes, with this, you know, sort of nice matrix of six axes and how you move from crawl to walk to run to fly, and what you need to build on those six axes. You know, one of the things that I do sometimes when I consult is I go into the org and say, "Where do you think you are on these six axes?" And that should be the guidance for what you need to do next.

    6. LR

      This is gonna be the most epic show notes episode we've had yet. Maybe

  26. 1:12:25-1:14:09

    How to improve speed when running experiments

    1. LR

      a last question. We talked about how important trust is to running experiments and how even though people talk about speed, trust ends up being most important. Still, I wanna ask you about speed. Is there anything you recommend for helping people run experiments faster and get results more quickly that they can implement?

    2. RK

      Yeah. So I, I'll say a couple of things. One is if your platform is good, then when the experiment finishes, you should have a scorecard soon after.

    3. LR

      Hm.

    4. RK

      Maybe it takes a day, but it shouldn't be that you have to wait a week for a data scientist. To me, this is the number one way to speed things up. Now, in terms of using the data efficiently, there are mechanisms out there under the title of variance reduction that help you reduce the variance of metrics so that you need fewer users, so that you can, um, get results faster. Some examples that you might think about are capping metrics. So if your revenue metric is very skewed, maybe you say, "Well, if somebody purchased over a thousand dollars, let's make that a thousand dollars." At Airbnb, one of the key metrics, for example, is nights booked. Well, it turns out that some people book tens of nights, they're like an agency or something, hundreds of nights. You may say, "Okay, let's just cap this."

    5. LR

      Mm-hmm.

    6. RK

      "It's unlikely that, you know, people book more than 30 days in a given month." So that variance reduction technique will allow you to get statistically significant results faster. And, uh, a third technique is called Cupid, which is an article that we published. Again, I can give it in the notes, which uses the pre-experiment data to, uh, adjust the result. And we can show that you get the result as unbiased, but with lower variance and hence, uh, hence

  27. 1:14:09-1:23:06

    Lightning round

    1. RK

      it requires fewer users.
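
      Both variance-reduction techniques are small enough to sketch. Below, the metric names, the 30-night cap, and the synthetic data are purely illustrative rather than Airbnb's actual pipeline; the adjustment follows the standard CUPED formulation of using each user's pre-experiment value as a covariate.

      ```python
      # Two variance-reduction sketches: capping a skewed metric, and CUPED.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # 1) Capping (winsorizing): clip a heavy-tailed metric at a threshold.
      nights_booked = rng.exponential(scale=3.0, size=n)   # synthetic, skewed metric
      nights_capped = np.minimum(nights_booked, 30)        # cap at 30 nights per user

      # 2) CUPED: adjust the in-experiment metric using the pre-experiment metric.
      pre = rng.normal(size=n)                             # pre-experiment value per user
      post = 0.7 * pre + rng.normal(size=n)                # in-experiment value (correlated)

      theta = np.cov(post, pre)[0, 1] / pre.var()
      post_cuped = post - theta * (pre - pre.mean())       # same mean, lower variance

      print(f"Variance before CUPED: {post.var():.2f}, after CUPED: {post_cuped.var():.2f}")
      ```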

    2. LR

      Ronny, is there anything else you want to share before we get to our very exciting lightning round?

    3. RK

      Uh, no, I think we've asked a lot of good questions. Uh, hope people enjoy this.

    4. LR

      I know they will.

    5. RK

      Let's go. Lightning round.

    6. LR

      (laughs) Lightning round, here we go. I was just going to roll right into it. What are two or three books that you've recommended most to other people?

    7. RK

      There's a fun book called Calling Bullshit.

    8. LR

      (laughs)

    9. RK

      Despite the sort of, uh, name, (laughs) which is a little extreme, I think, for a title, it actually has a lot of amazing insights, uh, that I love. And, and it sort of embodies, in my opinion, a lot of Twyman's law: when things are too extreme, your bullshit meter should go up and say, "Hey, I don't believe that." So that was, that's my number one recommendation. There's a slightly older book that I love called Hard Facts: Dangerous Half-truths and Total Nonsense, uh, by the Stanford professors from the Graduate School of Business. Very interesting to see many of the things that we grew up with as sort of well understood turn out to have no, uh, justification. And then, uh, a stranger book, which I love, sort of on the verge of psychology, is called Mistakes Were Made, But Not By Me.

    10. LR

      (laughs)

    11. RK

      Uh, about all the fallacies and, and that, that we fall into, uh, and the humbling results from that.

    12. LR

      The titles of these are hilarious, and there's a common theme across all these books.

    13. RK

      (laughs)

    14. LR

      Next question. What is a favorite recent movie or TV show?

    15. RK

      So I recently saw a short series called Chernobyl, on the disaster.

    16. LR

      Mm-hmm.

    17. RK

      It, I, I thought it was amazingly well done. Um...

    18. LR

      Yeah.

    19. RK

      Highly recommend it. You know, based on true events. Uh, you know, as usual, there's some freedom for the artistic, uh, movie. They, they, it was kind of interesting, at the end they say this woman in the movie wasn't really a woman, it was a bunch of 30 data scientists (laughs) -

    20. LR

      (laughs)

    21. RK

      ... not data scientists, 30 scientists-

    22. LR

      Yeah.

    23. RK

      ... that, uh, in real life presented all the data to the leadership of what to do.

    24. LR

      I remember that. Uh, fun fact, I was born in Odessa, Ukraine, which was not so far from Chernobyl. And I remember m- my dad told me, he had to go to work, they called him into work that day to clean some stuff off the trees, I think ash from the explosion or something. It was like far away where I don't think we were exposed, but, uh, but yeah, we were in the vicinity. Yeah, it's pretty scary. My wife thinks I've...

    25. RK

      Yeah.

    26. LR

      ... every, every time something's wrong with me, she's like, "Oh, that must be a Chernobyl, Chernobyl thing."

    27. RK

      (laughs)

    28. LR

      Okay, next question. Favorite interview question you like to ask people when you're interviewing them?

    29. RK

      So it depends on the interview, but I'll, I'll give you... When I do a technical interview, which I do less of, one question that I love, and it is amazing how many people it trips up, for languages like C++, is: tell me what the static qualifier does. And, uh, there are multiple cases, you know, you could ask it for a variable, you could ask it for a function. Uh, and it is just amazing that I would say more than 50% of people that interview for an engineering job cannot get this and get it awfully wrong.

    30. LR

      Definitely the most technical, uh, answer to this question yet.

Episode duration: 1:23:07
