Lenny's Podcast

The science of product, big bets, and how AI is impacting the future of music | Gustav Söderström

Gustav Söderström is the Co-President and Chief Product and Technology Officer at Spotify. He is responsible for Spotify’s global product and technology strategy, overseeing the product, design, data, and engineering teams. Prior to Spotify, he founded 13th Lab, a startup that was later acquired by Facebook’s Oculus. He also served as the Director of Product and Business Development for Yahoo Mobile and founded Kenet Works, a company focused on community software for mobile phones, which was acquired by Yahoo in 2006.

In today’s episode, we discuss:
• How Spotify structures product teams to promote freedom of thought
• Lessons on thinking long-term and navigating negative feedback
• Why Gustav started a podcast and what he’s learned
• How AI has impacted the work PMs, engineers, and designers do within Spotify
• AI-generated music and its impact on artists
• What’s next for Spotify and Spotify Podcasting

Brought to you by Microsoft Clarity—See how people actually use your product | Eppo—Run reliable, impactful experiments | Eco—Your most rewarding app

Find the full transcript at: https://www.lennysnewsletter.com/p/lessons-from-scaling-spotify-the

Where to find Gustav Söderström:
• Twitter: https://twitter.com/GustavS
• LinkedIn: https://www.linkedin.com/in/gustavsoderstrom/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• Twitter: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Gustav’s background
(04:08) The various roles Gustav has occupied at Spotify
(06:54) Why Gustav launched a podcast and what he learned
(12:37) How PMs and product teams should think about AI
(21:23) AI-generated music
(26:19) Will AI continue to be a magic trick for products?
(28:27) How Spotify organizes product teams
(34:33) How Spotify operationalized autonomy
(35:45) Why Spotify uses a centralized model for structuring their organization
(43:34) The big bet Spotify took with redesigning its interface, and what they learned
(57:26) How they tested their hypothesis before launch
(1:02:35) Gustav’s “10% planning time” methodology
(1:03:53) How to bring energy and clarity to your work
(1:08:07) How to systematize deep thinking
(1:10:29) The peeing-in-your-pants analogy
(1:11:38) Thoughts on how the Swedish culture is portrayed in Succession
(1:13:30) What’s next for Spotify and Spotify Podcasting
(1:15:52) Lightning round

Referenced:
• Spotify: https://open.spotify.com/
• Daniel Ek: https://www.linkedin.com/in/daniel-ek-1b52093a/
• Spotify: A Product Story podcast: https://open.spotify.com/show/3L9tzrt0CthF6hNkxYIeSB
• Spotify’s AI DJ: https://newsroom.spotify.com/2023-02-22/spotify-debuts-a-new-ai-dj-right-in-your-pocket/
• Avicii: https://avicii.com/
• DALL-E: https://openai.com/product/dall-e-2
• Stable Diffusion: https://stability.ai/
• Midjourney: https://www.midjourney.com/
• Brian Chesky: https://www.linkedin.com/in/brianchesky/
• Succession on HBO: https://www.hbo.com/succession
• Fjällräven: https://www.fjallraven.com/us/en-us
• 7 Powers: The Foundations of Business Strategy: https://www.amazon.com/7-Powers-Hamilton-Helmer-audiobook/dp/B07SPHZCNL
• Charlie Munger: The Complete Investor: https://www.amazon.com/Charlie-Munger-Complete-Investor-Publishing-ebook/dp/B010EB3EUM
• The Mystery of the Aleph: Mathematics, the Kabbalah, and the Search for Infinity: https://www.amazon.com/Mystery-Aleph-Mathematics-Kabbalah-Infinity/dp/B00005OSHY
• Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime: https://www.amazon.com/Something-Deeply-Hidden-audiobook/dp/B07QT9TBQW
• Helgoland: Making Sense of the Quantum Revolution: https://www.amazon.com/Helgoland-Making-Sense-Quantum-Revolution/dp/B08MQYRVRX
• The Beginning of Infinity: Explanations That Transform the World: https://www.amazon.com/Beginning-Infinity-Explanations-Transform-2011-07-21/dp/B01NGZJHOK
• The Fabric of Reality: The Science of Parallel Universes—and Its Implications: https://www.amazon.com/The-Fabric-of-Reality-audiobook/dp/B07L5XWD7X
• The Case Against Reality: Why Evolution Hid the Truth from Our Eyes: https://www.amazon.com/The-Case-Against-Reality-audiobook/dp/B07VL5TCVF
• Gödel’s Proof: https://www.amazon.com/G%C3%B6dels-Proof-Ernest-Nagel/dp/0814758371
• The Demon in the Machine: How Hidden Webs of Information Are Solving the Mystery of Life: https://www.amazon.com/Demon-Machine-Information-Solving-Mystery/dp/B0C3WPGH4P
• Halt and Catch Fire on Apple TV: https://tv.apple.com/us/show/halt-and-catch-fire/umc.cmc.5s15r46uj0wx044tipm2zoh88
• Duolingo: https://www.duolingo.com/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

Gustav Söderström (guest) · Lenny Rachitsky (host)
May 21, 2023 · 1h 24m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–4:08

    Gustav’s background

    1. GS

      The internet sort of started with curation, often user curation. So you, you, you took something, some good, like people or books or music, and you digitized it and you put it online, and then you asked users to curate it. And that was Facebook, Spotify, and so forth. And then after a while, the world switched from curation to recommendation, where instead of people doing that work, you had algorithms. And that was a big change that required us and others to actually rethink the entire user experience, and, and sometimes the, the business model as well. And I think what we're entering now is we're going from curation to recommendation to generation. And I suspect it will be as big of a shift, that you will eventually have to rethink your products. For the recommendation era, we had to rethink the user interface and the experience for a recommendation-first era. And so what, what does that mean in the generative era? No, no one really knows yet.

    2. LR

      (instrumental music) Welcome to Lenny's Podcast, where I interview world-class product leaders and growth experts to learn from their hard-won experiences building and growing today's most successful products. Today my guest is Gustav Söderström. Gustav is a product legend, and he's now the co-president, chief product, and chief technology officer at Spotify, where he's responsible for Spotify's global product and technology strategy, and oversees the product, design, data, and engineering teams at the company. I've had Gustav on my wish list of dream guests to have on this podcast since the day I launched the podcast. And I'm so happy we made it happen. In our conversation, we dig into what Gustav has learned about taking big bets and what to do when they don't work out, how Spotify moved away from squads and how they structure their teams now, how AI is already impacting their product, and also the future of music generated by AI, also why all great products need to pull some kind of magic trick, how accurately Succession represents Swedish business culture, and his hilarious analogy of peeing in your pants. Enjoy this episode with Gustav Söderström after a short word from our sponsors. This episode is brought to you by Microsoft Clarity, a free, easy-to-use tool that captures how real people are actually using your site. You can watch live session replays to discover where users are breezing through your flow and where they struggle. You can view instant heat maps to see what parts of your page users are engaging with, and what content they're ignoring. You can also pinpoint what's bothering your users with really cool frustration metrics like rage clicks, and dead clicks, and much more. If you listen to this podcast, you know how often we talk about the importance of knowing your users. 
And by seeing how users truly experience your product, you can identify product opportunities, conversion wins, and find big gaps between how you imagine people using your product and how they actually use it. Microsoft Clarity makes it all possible with a simple yet incredibly powerful set of features. You'll be blown away by how easy Clarity is to use. And it's completely free forever. You'll never run into traffic limits or be forced to upgrade to a paid version. It also works across both apps and websites. Stop guessing, get Clarity. Check out Clarity at clarity.microsoft.com. This episode is brought to you by Eppo. Eppo is a next generation A/B testing platform built by Airbnb alums for modern growth teams. Companies like DraftKings, Zapier, ClickUp, Twitch, and Cameo rely on Eppo to power their experiments. Wherever you work, running experiments is increasingly essential. But there are no commercial tools that integrate with a modern growth team stack. This leads to wasted time building internal tools, or trying to run your own experiments through a clunky marketing tool. When I was at Airbnb, one of the things that I loved most about working there was our experimentation platform, where I was able to slice and dice data by device types, country, user stage. Eppo does all that and more, delivering results quickly, avoiding annoying prolonged analytic cycles, and helping you easily get to the root cause of any issue you discover. Eppo lets you go beyond basic click-through metrics, and instead use your North Star metrics like activation, retention, subscription, and payments. Eppo supports tests on front end, on back end, email marketing, even machine learning claims. Check out Eppo at geteppo.com. That's geteppo.com and 10X your experiment velocity. (instrumental music)

  2. 4:08–6:54

    The various roles Gustav has occupied at Spotify

    1. LR

      Gustav, welcome to the podcast.

    2. GS

      Thanks for having me, Lenny. It's a pleasure to be here.

    3. LR

      It's my pleasure to have you on. So at this point, you've been at Spotify for over 14 years, which is a rare feat in th- the tech world. And you've held a lot of different roles while you've been at Spotify. Can you just start off by giving us a sense of what these various roles and what you've done over the years at Spotify? And then just what do you, what are you up to these days? What are you responsible for now?

    4. GS

      So I came into Spotify in early-2009, late two thous- late-2008. And my job then, I had been an entrepreneur, started some of my own companies in the, back then, very, very early sort of feature phone, smartphone space. So I had a bunch of knowledge there. I had sold a company to, uh, to Yahoo in the mobile space. I worked there for a while. I came back to Sweden. And then I met through a mutual friend Daniel Ek, the, the CEO and, and co-founder of Spotify. And they had built the desktop product already, the free streaming desktop product. And it was amazing, and I could try it. But they needed someone to figure out what to do with mobile. And because I had been an entrepreneur in that space, I got that job. So my job was to, to head up mobile for Spotify and figure out what the mobile offering would be. Which was a challenge because obviously Spotify desktop was a free on-demand streaming application. And back then, specifically with EDGE networks, you couldn't really stream at all in real time. The performance wasn't there. And also, you could, you couldn't fund that with an ads model. So there was a product and business model innovation there. It was a lot of fun. So that's how I started. Then after a few years, I took on all of product development for Spotify. And then a few years later, I actually took on the technology responsibility, sort of the CTO role for Spotify as well. And recently, my official title is co-president of Spotify, together with Alex Norström. So we kind of run half of the company each. I run the product and technology side, and he runs s- the business and content side. So that's the super fast version. Aside from getting more responsibilities, like taking on the technology department, it has been sort of the same job by title. I've always reported to Daniel. But because Spotify has grown so much, every six to 12 months, it's been like starting at a new company. 
First there was a sort of a Swedish-Nordic challenge, and then it was an, a European challenge, and then it was, you know, getting into the US, and then we became a public company. So it's sort of as if I had jumped around between a lot of jobs actually, even though it was largely the same, uh,

  3. 6:54–12:37

    Why Gustav launched a podcast and what he learned

    1. GS

      title and role.

    2. LR

      Your story makes me think of the classic be careful what you're good at, because you end up taking on more and more. And clearly, uh, you've been given more and more responsibility over the years, and so, um, clearly things are going well and you're doing well. Shifting a little bit, so you're on my podcast currently. You actually have your own podcast, which, uh, was kind of this limited series on the S- the product story of Spotify, which I listened to and loved, and it's kind of surreal to listen to your voice in real time, 'cause I've been listening to that recently in preparation for this conversation. Uh, two questions, just what made you decide to launch your own podcast knowing you had a full time job and a lot (laughs) going on? And the production value for your podcast was very high from what I could tell. And then, two, just what did you learn from that experience in terms of the product you ended up building and just, like, empathizing with the podcast creator side of things?

    3. GS

      There were a bunch of different reasons why I did that. Uh, one is, uh, and, and not a small one, I think like you, I, I love writing and I have this, uh, secret creator dream in me.

    4. LR

      Mm-hmm. (laughs)

    5. GS

      You know, I used to write blog posts a long time ago, and I write internally a lot. Can't write that much externally when you work at a (laughs) company like this.

    6. LR

      Yep.

    7. GS

      But I love writing and talking and presenting. So there was certainly that. And then no small part was to, to, uh, from a pro- point of view, to empathize with one of our m- main constituents, the podcast creator. I'm unfortunately not a great musician. I try to play instruments and so forth, but I, I don't, I don't have any records, I don't sing very well. But I decided to make a podcast, and, uh, that taught me a huge amount about what it's like to be a creator, how, you know, creating different styles of podcast, for example, we wanted to do a sort of higher, uh, production cost podcast with music, and then right away, you run into a bunch of problems, uh, Spotify is actually pretty well-positioned to solve, but still, like it's really hard to have music in a podcast from a rights perspective. So you get, you understand all these problems that podcasters have, and, and you can be better at solving them. But the, the biggest benefit and, and the real reason for, for, um, for doing the public podcast was that I had actually done an internal podcast through sort of a hack where we could gate the podcast to only employees.

    8. LR

      Mm-hmm.

    9. GS

      And, um, I tried to figure out internally how to, how to build more culture around Spotify and sort of help, um, define for new employees and existing employees who we are, the mistakes we did, the successes we had, and, and how we think about strategy specifically and product strategy. Because we were quite well known externally for, for technology and the Squads and all of these things, not so much for, for, for product strategy. And, uh, because I, I love storytelling more than Google Docs, I decided to do an internal podcast and I went around and I interviewed actually Daniel's direct reports, so the CMO, the CHRO, and, and, um, CFO and so fo- an- and, and just asked them about a bunch of stuff. And the idea was to make them more approachable for employees.

    10. LR

      Mm-hmm.

    11. GS

      Because I felt listening to podcasts, you know, even these people that have no idea who I am because I've never met them, I feel like I know them. I feel like I know how they think, and I just like them much more. So, (laughs) the, the secret idea was, what if you could get to know your leaders much better than you do through occasional meetings or, or, or some town hall? So I did that internally, and because I'm a product person, we ended up talking a lot about product and product strategy. And people internally really liked that. Uh, so next time, the question was, what if people that don't even work at Spotify yet could feel as if they knew people at Spotify? That'd be great, because most leaders in those companies are very, uh, opaque and appear as some sort of otherworldly creatures that aren't really real I think when you see them in, like, business papers or something. So what if you had heard them talk for an hour or so? So that was the general idea. So a combination of, um, recruitment tool, (laughs) uh, sharing more about how we think about product strategy, and just because I think it was a lot of fun. I got to interview a bunch of smart and interesting people, both externally and, and internally.

    12. LR

      Did it have the effect that you were hoping after looking back?

    13. GS

      I think it did. Uh, the podcast did well, and, and no, we did not give it our, our own sort of promotion. I (laughs) had to compete-

    14. LR

      Mm-hmm.

    15. GS

      ... as everyone else, which also gives you a lot of empathy for the problem of, like, okay, now you have a product. What about user acquisition? How do you actually get people to listen to it? So, it did achieve what I wanted in the sense that, uh, we have, we have this thing called intro days where, especially in the, in the past few years when we've hired a lot, we actually fly people to Stockholm for sort of an onboarding session to learn about Spotify. And, and the leadership is on stage talking about what they do and their departments and strategy and so forth. And, um, it's very common that people come and tell me that, "Oh, you know, I listened to this podcast or listened to the episode and it's, it's at least one of the key reasons why I joined, or sometimes the reason why I joined." So it's sort of anecdotal, but it, it may be in the-... many tens of people at least who have said it. (laughs) So that seems to work.

    16. LR

      That's, uh, really interesting. Just again- and this comes up a few times in the podcast, is just the power of content and all these different ways for hiring, for culture-building. And it sounds like internally it was- the original goal was just internally build this kind of culture and strategy.

    17. GS

      That- that was the original goal. Make- make, uh, senior leadership more approachable. And, um, so reduce the distance and then also share more of the thinking in an entertaining, uh, way rather than just through docs that people

  4. 12:37–21:23

    How PMs and product teams should think about AI

    1. GS

      end up not reading.

    2. LR

      I love that. So I was listening to it, as I said, and what was really interesting is, uh, I think episode four was actually all about AI. And I think your first kind of attempts at leveraging machine learning and AI within Spotify, and I think that's what led to Discover Weekly and a few other tools. And that was like years ago, and it's interesting listening to it now where AI is again like, you know, a huge deal. And so I'm curious, very tactically on the product team, what you advise product managers and product teams, um, on how to think about AI in their product thinking and also just in their day-to-day work.

    3. GS

      I can give a few- a few examples there. And I- I don't know that we're more- more sophisticated than anyone else, but- but, uh, we've been doing at least traditional machine learning for quite a long time. And I think in the podcast, I think I talk about the- the journey of the internet in sort of stages. And one way to think about it is that the internet sort of started with curation, often user curation. So you- you- you took something, some good, like people, or books, or music, and you digitized it and you put it online, and then you asked users to curate it. And that was your Facebook, Spotify, and so forth. And then after a while, the world switched from curation to recommendation, where instead of people doing that work, you had algorithms. And that was a big change that required us and others to actually rethink the entire user experience and- and sometimes the- the business model as well. And I think what we're- we're entering now is we're going from curation to recommendation to generation. And I suspect it will be as big of a shift that you will eventually have to rethink your products. So- so that's one lens. So I tend to talk to my teams about even though it's all machine learning, I ask them to think of this as something completely different. The recommendation era was one type of machine learning. The generation era is a different type. So don't think of it as just more of the same. Think of it as something actually completely new instead. And what we learned is, well, a few things. So if you look at this new era of large language models and diffusion models and so forth, uh, there are two types of- of applications. As I said, for the recommendation era, we had to rethink the user interface and the experience for a recommendation-first era. And so what- what does that mean in the generative era? No- no one really knows yet. Uh, there are a bunch- as usual, there are a bunch of, uh, iterative improvements. 
So, you know, we use these large language models to improve our recommendations. You can have bigger vectors, you can have more cultural knowledge. You can use it for safety classification, and podcasts that no one has listened to yet and so forth. So there's lots of obvious improvements and we're doing those. But so far, we've only really done one sort of real generative product in- in the hard definition, which is a product that couldn't have existed (laughs) without generative AI, and that is, uh, the AI DJ. So that's a concept that we've been thinking about for a very long time and the AI DJ is, you- you press a button, a person, a- a- a- a digitized person, there's a real person named X and we digitized X so he's now an AI, comes on and talks to you about music that you like and suggests music, and you can listen to it. And if you don't like it, you can s- k- kind of call him back and he says, "Okay, now let's listen to something maybe from a few summers ago." Or, "Here's some new stuff that, you know, were trending yesterday in the Last, you know, Last of Us episode." Or something like that.

    4. LR

      Hmm.

    5. GS

      So that product couldn't have existed without generative AI. Both the- generating the voice and generating what, uh, the- the content of- of what the voi- the voice says. So you can have individualized, personalized voice at the scale of, you know, half a billion, um, people. And so we- we had the use case we had seen for many, many years. Sometimes people call it the radio use case, we call it the zero intent use case internally when you actually don't know what you want to listen to at all.

    6. LR

      Hmm.

    7. GS

      Spotify wasn't that good. Spotify was good when you knew at least roughly, you knew the use case of what you wanted to- if it was a workout or dinner. Like, we had lots of- lots of options for all of those. But if you really didn't know at all, it was hard to open Spotify and sort of stare at it. And people used to say longingly, you know, that this was the one thing that radio was good at. Radio was quite bad, to be honest. I mean, it's not personalized to you at all. It's not on demand. You come in in the middle of things. It's actually terrible in many ways. But people still often say that there was something good about it, and I think that something was the fact that you had a knob and you could just switch between contexts. It's like, "No, boring, boring, boring, boring. Okay, this is good." Ha- and Spotify never had that mode of like, "I don't know what I want, but I want to sort of cycle through things until I find something that I like." And I think with AI DJ, that's actually the use case we managed to solve. So X comes on and says, "I'm going to suggest something to you that you can listen to, and if you like it you can keep listening. But if you don't like it, you kind of bring him back again and you change genre." And for one reason or another, we tried to solve that for many times f- for a long time, but just starting to play a random song without any context as to why you would hear this, it just didn't- never worked. S- so that was our first sort of foray into a product that couldn't exist before. And I think to your question of principles around that, there are a few pretty distinct principles that we've learned.One that I really like that e- is not my principle at all, I think, I think it is straight from Chris Dixon, is the principle of fault tolerant user interfaces. So I can't say how many times during the early machine learning era when we said, "You know, we're moving from curation to recommendation." 
      I saw a design sketch that was a single big play button, 'cause clearly that is the simplest user interface you can do. But if you don't understand the performance of your machine learning, you can't design for it. The quality of your machine learning, if you're going to have a single play button, needs to be literally 100% or zero prediction error (laughs) and that's never the case, right? So let's say that you have, you know, a one-in-five hit rate, four out of five things are duds, then you need a UI that probably at least shows five things at the same time on screen so you have a good chance of something being relevant on screen. So you need to understand the performance of your machine learning to design for it. It needs to be fault tolerant, and often you need an escape hatch for the user. If you were... So you make a prediction, but if you were wrong, it needs to be super easy for the user to say, "No, you're wrong. I want to go to my library or to this or to that." S- so we have that principle of having a fault tolerant user interface and a user interface that corresponds to the current performance of your, of your algorithms, and I think that is going to be true for generative machine learning as well. I think a very clear example actually is Midjourney. If you think about early Midjourney user interface inside the Discord channel, actually generating an image was very, very slow. It took a long time to generate a high quality image, and they could have built the single-button thing where you put in a prompt, you wait for minutes, you get an image, and I think one out of four times it's going to be good. So you would have been disappointed three out of four times and it's a minute each, so like four minutes later you'd be, "This is a shitty product." What they did was they generated four simultaneous low res images-

    8. LR

      Mm-hmm.

    9. GS

      ... very quickly. And you could see like s- so apparently their performance was probably one in four, that's why they four- showed four and not six. And so one in four was obviously... It was usually pretty good. You click that one and either continue to iterate or scale it up. So that's also an example of, I think, people understanding where the performance of generative AI was when they built the UI. So it's something that, you know, I would be inspired by. And for the AI DJ specifically, another principle is to try to avoid this urge of just wanting to show off the technology-

    10. LR

      Mm-hmm.

    11. GS

      ... and have this voice, X, talk and talk and talk and talk. You have to remember that people came there for the music. So the principle for the AI DJ coming from the team... By the way, this was a bottom-up product actually. It, it required a lot of support. We actually acquired big companies and so forth to be able to build it. But the idea has been, uh, had been built by, by teams bottom up. So the principle there was literally to do as little as possible and get out of the way. And I think that was really helpful. You know, it's, it's not telling you what the weather is and what happened in the news and going on and on and on about this band. It is trying to get you to the music. And I think that's, that's why it's working because it, it is working very well for us.
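The fault-tolerant interface sizing Gustav describes can be made concrete with a little probability: if each surfaced item independently has hit rate p, showing n items gives a 1 − (1 − p)^n chance that at least one is relevant. A minimal illustrative sketch (this is an editor's example, not anything from Spotify's or Midjourney's actual systems; the function names are made up):

```python
import math

def hit_probability(p: float, n: int) -> float:
    """Chance that at least one of n shown items is relevant,
    assuming each has independent per-item hit rate p."""
    return 1.0 - (1.0 - p) ** n

def items_needed(p: float, target: float) -> int:
    """Smallest n so the chance of at least one hit reaches
    `target` (e.g. 0.9), given per-item hit rate p."""
    # Solve 1 - (1-p)^n >= target  =>  n >= log(1-target) / log(1-p)
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

# With a one-in-five hit rate, five items on screen give roughly a
# two-in-three chance of at least one relevant result; reaching a
# 90% chance takes eleven items.
print(round(hit_probability(0.2, 5), 3))  # → 0.672
print(items_needed(0.2, 0.9))             # → 11
```

This is the arithmetic behind "show five things, not one play button": a single-item UI only works when p is near 1, while Midjourney's four low-res previews fit a roughly one-in-four hit rate.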

  5. 21:23–26:19

    AI-generated music

    1. GS

    2. LR

      I love this distinction between recommendation and, uh, generation and this kind of begs the question of there's this trend that I imagine you're, you're seeing of people auto-generating music using artists', you know, catalog. Like there's this Drake and The Weeknd thing that came out a week or two ago. Where do you think this ends up going and how do you think artists adjust to this world where music can just be auto-generated? You know, this play button is like all of it is generated versus just like the DJ in between the songs.

    3. GS

      First, big caveat, this... This is just super early. No one, no one knows anything, you know, about how this is going to play out or, or the legal landscape and so forth. But I think it's going to be... Have a lot of impact and I, I think if we talk about two things, one is what it could do for music, the other is the rights situation and if, if rights holders are getting compensated and so forth. So if we talk about the first thing in isolation. I think an interesting example is right, right about when I grew up, Avicii came along and it's interesting to think about because Avicii was not really considered by the existing music industry as a real artist 'cause he couldn't really play an instrument and he couldn't sing, and he was just sitting with this computer in this DAW, digital audio workstation. And so it wasn't really considered real music. And I think now all of us consider it very real music and that he had tremendous real musical talent. So I think right now we're probably in the phase where people say, "This isn't real music and it's su- it's somehow fake." I think the way to think about these, these diffusion models if and when they get good enough at, at generating music is probably the same, like an instrument. It's just a much more powerful instrument and we'll probably see a new type of creator that wasn't proficient at any instrument and they, you know, they couldn't... They couldn't assemble a full orchestra and, and do the thing that they had in their head and they can now, uh, generate very, very new things. I also think, by the way, that there is this distinction between AI music and real music. Uh, that, that doesn't exist for sure. Very talented real musicians are using AI to get better and to help create new ideas. So that distinction is... Doesn't really exist. It's all going to be AI. 
      The question is what percentage, which makes the problem harder 'cause you, you can't talk about if it should exist or not, you have to talk about what percentage should exist and, and who gets to use it or not. But I think the way to think about it is probably as an instrument and that could help create a huge amount of, of art. And I think, uh, this is not news to you who probably use these things a lot, but I think if you don't use these generative models, there is the perception that... you tell it to create a hit, and you will get that. That's, that's not how it works. Actually what these models do is, because they... because they've been listening to a lot of music, they are very good at doing something that sounds very similar to what already exists. Actually being original is very hard. And from one point of view, as it now gets easier to create more generic music, it will actually be more difficult than ever to be truly unique. So I still think there will be tremendous skill in creating something truly unique. And, and my hope would be that what happened with the DAW and that technology jump was that you got a whole new genre like EDM that didn't... You couldn't really produce it with an orchestra or live. A- and maybe we'll see completely new music styles with these technologies. I think that would be very exciting. So that's on the positive side. But then you have the, the rights issue, which I have a lot of empathy for. And, and Spotify specifically has seen this before. So we had a different technology shift like this, which was the technology sh- shift to online downloads of music and piracy and peer-to-peer. So first there was a big technology shift in peer-to-peer, and it was gr- it was exciting for consumers. More consumers started listening to more music than ever, and I think that's where we are now with generative AI. 
There was a new technology, but it also required a new business model before creators and the industry could actually participate and benefit from it. And that's obviously self-serving to say, because we were a big part of innovating that business model, but I still think that's what's necessary, and I hope that's what we could be part of. So I think we've seen the first part, the technology shift, and there will probably be a lot of discussion and chaos here, which I have a lot of empathy for. But I think we haven't seen the second part yet: what is a model where this could be a benefit? What actually happened after piracy is that the music industry got bigger than ever, not just as big, but bigger than ever, and I think that could happen with this technology as well. But we're right in the beginning.

  6. 26:19–28:27

    Will AI continue to be a magic trick for products?

    1. GS

    2. LR

      So along the same lines, something else you teach is this idea that all truly great products have to pull off some kind of magic trick. This comes up in your podcast a lot, and I think you've mentioned it in other places. And thinking about all the stuff you're talking about here, it feels like in a sense everything's gonna feel like magic, 'cause AI is kind of baked into it.

    3. GS

      I think when we did the AI DJ, we did a small version of that. When people first listened to it, we could see that reaction in user testing, when they wondered: how could they record this person saying so many different things? Because it's talking about my music. So the magic trick was that we obviously didn't record a person saying it. It's generated. And then that magic trick wears off; you hear it all the time now and so forth. But it was one of those magic tricks. So I still think that concept is important, and it seems to correlate with products going viral and taking off. And I think it was the same using something like DALL-E or Stable Diffusion or Midjourney the first time. It completely seemed like a magic trick. And obviously there is no magic; it's just data and statistics. But I think getting to that point, and iterating a product to the point where it feels like magic the first time, is very helpful. And it's often a question of just getting the performance to a certain level-

    4. LR

      Mm-hmm.

    5. GS

      ... scoping down, removing things. There's a lot of fine-tuning, I think, that makes you cross that line from "it's cool and impressive but not magic" to "it feels like magic; I don't understand how this could be done."

    6. LR

      Yeah. It reminds me of the launch of ChatGPT, which ended up being the fastest-growing product in history, and it's like the epitome of a magic trick. It feels like actual magic.

    7. GS

      Absolutely. Absolutely. And to most people it is still very magical. Actually, to a lot of us, and even to researchers, it's a little bit magical. No one really fully understands it. (laughs) So I guess there's maybe some magic left in the world.

    8. LR

      Absolutely, and I think a lot of people are worried about not understanding what's going

  7. 28:27–34:33

    How Spotify organizes product teams

    1. LR

      on there. Shifting to the way you all build product at Spotify. So Spotify is kind of famous for popularizing this idea of squads and tribes, and correct me if I'm wrong, but you guys have kind of moved away from that approach. And-

    2. GS

      Yeah. That's right.

    3. LR

      Okay. So I'd love to understand why you shifted, what you learned from that approach to building product, and then how you organize the teams now. What do you do now?

    4. GS

      This was something that we focused a lot on early, and it turned out to be smart of us to give these things names like squads and chapters and so forth. Maybe it was sort of deliberate branding, but it wasn't for purposes of branding that we made it up. We made it up because we thought it was a good structure to use, and we needed names for things, and this was the early internet era, so you were allowed to make things up. It was very good for where we were at the time, and it certainly helped us in recruiting. It's become a little bit of a cost to us, because people still think that we organize that way, and it's not a very efficient way of being organized at this scale, or maybe even if you started over right now, because we've learned more. But I think the big difference is, the idea with the squad specifically was twofold. They were supposed to be small and sort of full stack. So a squad should be about seven people, and it should have front end and back end, mobile, QA, agile coaches, and so forth. And it should be very autonomous; that was the idea. And that's really what we have shifted. So first of all, as you grow the company, scaling in increments of seven engineers just creates a ton of overhead.

    5. LR

      Mm-hmm.

    6. GS

      So obviously our teams now tend to be much bigger, maybe two or three times that per manager, so maybe 14 or something instead of 7, and with fewer overhead roles. So that's one: it looks more traditional, which is reasonable as you learn more and as you scale. The second big thing I think we struggled with was that back when I joined, the average age at Spotify was... I mean, I was the oldest, and this was 14 years ago. (laughs) So I think the average age was probably under 30 or something, as it was in most tech companies. And we were coming from Sweden, which is a different culture than the US. I love a lot of things about Swedish culture, and I think we managed to keep the best parts. But Sweden is a very bottoms-up, autonomous culture. There's this famous drawing of how you make decisions in Sweden: in the US, I think it's just a hierarchy, while in Sweden, it's kind of a circle. You sit in a circle. No one is in the middle. There is no leader, and so forth.

    7. LR

      Interesting.

    8. GS

      So I think by culture we were very inspired by this super-autonomous thing, and I think the idea with autonomy is very reasonable, and the right one, which is: we were and are hiring the smartest people we can find, and we pay high salaries for that. So if you're hiring smart people, one way to think about it is that you're renting brainpower. If you're renting all of this expensive brainpower and then you give them no room to think for themselves, that doesn't sound smart. Then you should actually hire less smart people and keep your costs down or something. So I think you have to give a bunch of autonomy to actually maximize the value of the investment you're making.

    9. LR

      Mm-hmm.

    10. GS

      So that's very reasonable, that you would give a lot of space for people to use as much of their (laughs) talent and capacity as possible. But the problem is, if you put autonomy very far towards the leaves of the organization, and you combine that with having a very junior organization, which we did back then, there's a fair chance that you're just going to produce heat. You're going to have 100 squads with 100 strategies-

    11. LR

      Mm-hmm.

    12. GS

      ... running in 100 directions. (laughs) And, you know, Spotify has been there, in that camp. I mean, we managed to get somewhere, for sure, (laughs) in spite of this. But I'd struggle to say we were efficient in doing that. So we've done a few things. The team structure is more traditional: larger teams, less overhead. And we've been specifically working on where in the org we put the autonomy. The extremes are at the leaves, and we were there. The other extreme is at the top. Let's say maybe someone like Twitter, (laughs) where there's one person.

    13. LR

      Mm-hmm.

    14. GS

      Both have problems. If you have it at the leaves, you're going to produce a lot of heat. If you have it at the top, you need someone with a lot of capacity, and Elon has a lot of capacity, but you are, by definition, going to bottleneck. All decisions have to go through there, and it's not Daniel's personality that he even wants to make all the decisions. He wants to maximize throughput rather than bottleneck it. So the question is, if it's not at the top and not at the very bottom, where do you put it? And what we've found, which I don't think is very contrarian at all, I think this is the case in most companies, is around the VP level. So you have Daniel, then you have the C level, myself and others, then you have the VP level. That is a good mix. Instead of having one person in the company think, only Daniel, and the rest just do, you have, at the VP level in a company like this, many tens to maybe hundreds of people that have a lot of autonomy to think. So you get a good amount of freedom of thought and people thinking in different directions. But it's not 8,000 people, right? And these people at the VP level are usually quite a bit older, and they're also usually quite senior. They have a lot of pattern recognition. So I think that solves for... If you think of it as an optimization problem, it's kind of a good optimization space. So the autonomy level in Spotify now tends to be quite high at the VP level, and lower around that level.

  8. 34:33–35:45

    How Spotify operationalized autonomy

    1. GS

    2. LR

      And when you say autonomy, what does that actually mean? Is it that the VP of, say, the podcasting product has a lot of say over what happens, and there's not a ton of... I don't know. How involved are people above? And I know Maya's the VP of product, I believe, for the podcast product-

    3. GS

      Exactly.

    4. LR

      ... who I think is gonna come on the podcast some day. Um, what does that mean in terms of autonomy for her, practically?

    5. GS

      So it means that I would ask Maya to define a strategy for what we do in podcasting. How are we going to be different? Why would a podcaster want to be here? Whereas in another company, I would make that strategy, or Daniel would make that strategy.

    6. LR

      Mm-hmm.

    7. GS

      Same with the AI DJ, for example, which came from our personalization team. That was a bet that they made. So they have autonomy to make those kinds of bets and define strategies. Same with the user interface. We have an experience team; we can talk about the org structure later. But I put a lot of autonomy on the VP of experience to define and suggest what it is that we want to do, where in other companies I would define all of that myself, for example.

  9. 35:45–43:34

    Why Spotify uses a centralized model for structuring their organization

    1. GS

    2. LR

      Just going even a little bit further here, I know you have strong opinions on the way to organize teams and how the organization helps you optimize for specific things. What are your thoughts along those lines, and what have you learned about the impact of organization and what you're optimizing for?

    3. GS

      Yeah. So I talk about sort of an idealized spectrum, or maybe not idealized but exaggerated.

    4. LR

      Mm-hmm.

    5. GS

      It's not really... Nothing is really true.

    6. LR

      Mm-hmm.

    7. GS

      But you create extremes to make a point, right? So on one end of the spectrum, you have something like Amazon, which is known for two-pizza teams and no dependencies. You try to minimize dependencies so you can run in parallel. Teams compete with each other, even on the same project and so forth, but they have direct access to the user. And the benefit here is that if you have an idea, the time to get to the user is very low. And it has worked for them. It's produced Kindle, it's produced Alexa, it's produced a lot of very novel things. There are a few interesting downsides here. One downside that I'm extremely impressed with Jeff Bezos for seeing is that if you have teams that compete with each other, the incentives are to hide your results, (laughs) hide your code, and that should make for an organization that gets no platform leverage, because no one's cooperating. And either he had that insight, or because he saw this happening he had to do it, but he's well known for pushing extremely hard on hard APIs. Like, if you don't create hard APIs to your technology, you're out. And if you think about it, it has to be that way, because otherwise no one would do it.

    8. LR

      And a hard API is essentially, like, everyone knows how to use this API-

    9. GS

      Yeah.

    10. LR

      ... and connect to this team to interface with them.

    11. GS

      Exactly. You have to expose your technology to others, and you have to maintain those APIs, and they have to be very structured, because otherwise the whole thing would collapse, since everyone's supposed to compete and there are no incentives to cooperate. You have to centrally force that. And interestingly, even though theoretically they're in the worst position to have a structured platform, I think because they forced it so hard, they were the ones who did Amazon Web Services. They had such hard-defined APIs (laughs) because of this rule that it was easier for them to turn the platform inside out and expose it to the rest of the world. Whereas if you look at someone like Google, I think they struggled more with externalizing their APIs, maybe because the culture is so friendly and soft that they didn't need as-hard APIs on the inside.

    12. LR

      Hm.

    13. GS

      Because there was no competition; people could just go into each other's code. So it's an interesting anecdote. But the main point is, you're faster there, but it's going to be hard to cooperate. And so you will see, maybe exaggerating a bit, sometimes multiple search boxes on the same page from different teams. And this has been true of Spotify as well, by the way. You've seen multiple toasters coming up on the Now Playing view from different teams, from when we were in the autonomous mode, (laughs) everyone running. So you get the benefit of speed, but you get the drawback of kind of shipping your org chart and shipping complexity to the end user. But clearly that's been the right choice for Amazon, because they're a trillion-dollar company. Then on the other end of the spectrum, you have something like Apple, who's also a trillion-dollar company, so clearly both models work. There you would never see two search boxes from different teams popping up on an iPhone. That shit is centrally organized by (laughs) something that is close to a single individual. They are instead in what is probably the world's largest functional org. And if you think about what goes into Apple, they certainly do everything we do. They have a music service, a podcast service, audiobooks, and a billion other services. So it's not like they have an easier problem. And yet they build something that feels more like it was built by a single developer for a single user. So they centralize, and they have this bottlenecking function that everything has to go through, where it's decided how it fits with everything else. That has the benefit of the user experience being simpler, not shipping the org chart and increasing complexity. But it also has the drawback of speed.
Without having facts on it, I've heard people working at Apple say, "Yeah, it took seven years (laughs) to get that thing to market," because it just had to wait in the pipeline. So you have these extremes, and I think the most interesting example to think about is when you double-click the power button on an iPhone and Apple Pay comes up. That decision: how did that happen? You can imagine that all the services teams would like to pop up when you double-click that button.

    14. LR

      Hm.

    15. GS

      And so someone had to decide: should music come up? Should app payments come up? Should something else come up? So they have a different structure there. And on that spectrum of centralized versus decentralized, it comes down to our strategy, which is that we're a single application that has added multiple types of content, with actually very different business models on the backend, you know, rev shares and royalties and book deals and so forth, into a single user experience. That is our strategy. We think the user experience, and keeping it simple, is the most important thing. So we've chosen more of the centralized model, where these different vertical businesses, the music business, the podcast business, the audiobooks business, have to go through a single recommendation organization. Because that's another problem: which one do you recommend to which user? Should it be a book or a podcast or music, and how do you weigh them against each other? And also, the user interface could easily get incredibly complicated if everyone built their own UI: the music team built their UI, and then someone added features on top. So that's how we chose to optimize. But it is based on our strategy, and I think both models work.
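The weighing problem Gustav describes, a single central ranking layer arbitrating between content verticals, can be sketched roughly as follows. This is a toy illustration with made-up candidates, scores, and weights, not Spotify's actual system:

```python
# Toy sketch: each vertical proposes candidates with its own model score,
# and one central policy decides the cross-vertical trade-off, instead of
# each vertical ranking itself onto the screen. All values are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    vertical: str      # "music" | "podcast" | "audiobook"
    relevance: float   # per-vertical model score, assumed comparable in [0, 1]

candidates = [
    Candidate("Friday dance mix", "music", 0.92),
    Candidate("Tech news episode", "podcast", 0.85),
    Candidate("Sci-fi audiobook", "audiobook", 0.80),
]

# A single set of calibration weights applied centrally. In a real system
# these would come from calibration against a shared outcome metric.
weights = {"music": 1.0, "podcast": 0.9, "audiobook": 0.7}

def rank(cands):
    """Order candidates from all verticals by one shared, weighted score."""
    return sorted(cands, key=lambda c: c.relevance * weights[c.vertical],
                  reverse=True)

for c in rank(candidates):
    print(c.vertical, c.title)
```

The design point is that the weights live in one place: no vertical can unilaterally promote itself, which is the organizational choice the centralized model encodes.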

    16. LR

      (instrumental music) This episode is brought to you by Eko. Last month, Eko users earned an average of $84 in cashback rewards. How? With Eko, the future of personal finance. Eko is the update to a misaligned financial system, providing an app that works just like your bank but removes almost all of the middlemen... And then there are Eko points, the world's first open reward system. You earn them whenever you do almost anything in the Eko app. Eko is working to make these points the most rewarding points ever, so it pays to be early. Sound too good to be true? Go to eko.com/lenny, sign up for an onboarding, and find out why it isn't. Lenny's Podcast listeners who attend an Eko welcome session will get an exclusive 4% APY on deposits over $1,000. Learn more at eko.com/lenny. That's eko.com/lenny. It's interesting, these two examples you gave, Apple and Amazon. They're two of the biggest companies in the world, and they're, like, at the extremes of these two-

    17. GS

      Yeah.

    18. LR

      ...ends of the spectrum. And it's interesting, most companies are somewhere in the middle. I wonder if there's just, like, a benefit to being at an extreme and that ends up being really important.

    19. GS

      I think so. In almost all industries you have this smiling-curve concept, right, where you want to be at the extremes of the curve, because that's where the big business opportunities are, not in the middle. So it's probably true in terms of organizational models as well.

  10. 43:34–57:26

    The big bet Spotify took with redesigning its interface, and what they learned

    1. GS

    2. LR

      Speaking of extremes, I want to talk a bit about taking big bets. So you had this big launch event recently where you basically redesigned the whole primary feed of Spotify to make it feel more like where apps are going, like a TikTok or Reels feel, where you scroll a stream and videos and music start playing. Some people loved it, some people did not. And I'm curious, as a product leader, how you think about thinking long-term and dealing with people who are just like, "What the hell changed? I hate change. Stop changing things." How do you think about that? Who do you listen to? Who do you ignore? How do you know when to stay the course? How do you approach that?

    3. GS

      Yeah. You're being very kind. There was a lot of negative feedback on Twitter on some of that. So let me actually dig into some detail, because for product people listening to this, this is an interesting lesson that I think few companies talk about. Because you want to talk about everything that went exactly as you thought it would, and you don't want to talk about the things that didn't. So I'll go through what we were trying to achieve and what we learned. Spotify is mainly a background application, and for a long time we've been considered very good at background music and podcast recommendation: the phone is in your pocket and you're listening to, say, an EDM playlist or a pop playlist, and we're really good at inserting another EDM track or another pop track there in the background. What we hear from users again and again, though, is that they get trapped in a taste bubble. You know: I love my Spotify, but I am a little bit bored with EDM now, and Spotify is not suggesting something completely new. And if you think about that problem, it may sound like just another recommendation problem, but it's actually fundamentally different, because when you're recommending another EDM track inside the EDM playlist, you have a lot of signal from that user that they like EDM. But if you're going to recommend a completely new genre, by definition you have no signal, because if you had signal, it wasn't new to them. (laughs) So you can't know anything. So back to hit rate: your hit rate is going to be incredibly low when you suggest something completely new to the user.
So this problem of helping people get out of their taste bubble isn't as easy as it sounds, and we can't really just pick some genre that maybe isn't typical. I'm a big fan of reggaeton, for example. It's not that common in Sweden, and if you looked at the rest of my profile, it's kind of EDM heavy. So you probably wouldn't have guessed it, and Spotify wouldn't have guessed it. So if I'm listening to my favorite EDM playlist in the background, or maybe my metal playlist, metal is very big in Sweden, it's really hard for us to just insert a reggaeton track in the middle of that.

    4. LR

      (laughs)

    5. GS

      You know, most people are gonna think Spotify is broken.

    6. LR

      (laughs) Yeah.

    7. GS

      What the hell (laughs) are they thinking, right? So that doesn't really work. In order to help people break out of their taste bubbles, you need something different. You need something where your hit ratio can be very low, and you need people to expect it to be very low. When we recommend things in the background, our hit ratio needs to be at least nine out of ten. Maybe one dud is okay, but if you get five duds, you're gonna think we broke your playlist and your session. For discovery, we need something where one out of ten is a success: if you find one gem out of ten tries, you're very happy. So you need a completely different paradigm. And you also need to be able to go through many candidates quickly, right? Because the hit rate is so low, you can't take three minutes per item. It's like, "Okay, I didn't like this, and there's still two minutes left before the next one comes on." You need to quickly say, "No, no, no." The obvious candidates for this are these feed-type experiences where you can go through lots of content, you're expecting the hit ratio to be much lower, and if you don't like something, the cost is very low: you just swipe. And this is the reason why, when people want to break out of their taste bubbles, or when they come into Spotify and listen to something completely new, it is usually because they found it on one of these services, like TikTok or YouTube, where they get exposed to lots of new content. So people were asking us for these tools, and that's what we wanted to solve for. So we built a bunch of features, feed-like structures where you can go through a new genre with many tracks, or a podcast genre with many episodes, or even full playlists. And we implemented those and put them in something called sub-feeds.
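The hit-rate argument can be made concrete with a little expected-value arithmetic. The numbers below are illustrative, not Spotify's:

```python
# Back-of-the-envelope math for the two recommendation paradigms described
# above: high hit rate with a high per-item cost (background playlists)
# versus low hit rate with a very low per-item cost (a swipeable feed).

def expected_minutes_per_hit(hit_rate: float, minutes_per_item: float) -> float:
    """Expected items until the first success is 1/hit_rate (geometric
    distribution), so expected time is minutes_per_item / hit_rate."""
    if not 0 < hit_rate <= 1:
        raise ValueError("hit_rate must be in (0, 1]")
    return minutes_per_item / hit_rate

# Background listening: ~9/10 hit rate, but a miss costs a full ~3-minute track.
background = expected_minutes_per_hit(0.9, 3.0)

# Swipe feed: ~1/10 hit rate, but rejecting an item takes ~10 seconds.
feed = expected_minutes_per_hit(0.1, 10 / 60)

print(f"background playlist: {background:.2f} min per liked item")
print(f"discovery feed:      {feed:.2f} min per liked item")
```

With these illustrative numbers, the feed surfaces a liked item faster even though its hit rate is nine times lower: the interaction cost per miss, not the hit rate alone, determines whether a discovery surface works.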
So in the current experience, and this is rolled out worldwide: if you click the podcast sub-feed, you get a feed of podcast episodes. Click the music sub-feed, and you get a feed of playlists, where you can quickly go through many playlists, and if you don't understand a name, you can quickly hear what it sounds like, check out a few tracks, and understand if it's for you. And if you go to the search and browse page, you can find completely new genres that you can quickly go through. Those are working as we intended: people go to them when they want to find new music, they browse through them, and they save new songs. The thing that didn't work as we intended was that, because users asked us for this again and again, we took sort of the sum of these things and put it on Home, because people ask so much about discovery, and we can see clearly how correlated discovery is with retention on Spotify and so forth. But what we misjudged, or rather learned about our own homepage, (laughs) is the way it works right now. And this is what you can see in the Twitter comments, if you remove the angry voices and-

    8. NA

      (laughs) .

    9. GS

      ... try to see what they're saying. They're saying the following, which is actually quite clear in the quantitative data as well: if you look at what people do on Spotify's current homepage, it is almost 90% what we call recall. It is either getting back to a session that you're already in, or a specific playlist that you know you want to get to, or at least a specific use case. You come in with high intent; you actually knew what you wanted. And maybe only 10% of the time is it true discovery: "I don't know what I want." So it's 90% recall and 10% discovery. When we tested the design... So the sub-feeds were working, but when we tested the sum of them on Home, we kind of switched it from 90/10 to 10/90: 10% recall, 90% discovery. And while people want discovery, they probably don't want 90% discovery instead of 90% recall. So if you look at the comments on Twitter, what they're saying is, "Hey, I can't find my playlists anymore. Where are these things?" They're not really complaining about the discovery. (laughs) They're complaining about the things they don't get anymore. And we could see this in the quant data as well: you can see traffic shifting from Home into search and into library, which is a clear sign people are trying to find the things they can't find anymore. And you can even see people trying to use these discovery tools, which are optimized for quickly understanding new things, to do recall: "Where's that workout playlist I know I want?" And it's actually a very bad UI for recall. It's kind of like a slot machine, right? (laughs) Very unpredictable whether you ever get to that workout playlist. It was optimized for finding new things, not for recall of existing things.
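The 90/10 recall-versus-discovery split is the kind of figure you could estimate from session logs. A minimal sketch, with hypothetical event fields rather than Spotify's actual schema:

```python
# Hypothetical home-session events: each records where the user navigated
# and whether that target was something they had played before (a rough
# proxy for recall intent vs discovery intent).
from collections import Counter

home_sessions = [
    {"destination": "workout-playlist", "seen_before": True},
    {"destination": "daily-mix",        "seen_before": True},
    {"destination": "new-genre-feed",   "seen_before": False},
    {"destination": "liked-songs",      "seen_before": True},
    {"destination": "morning-show",     "seen_before": True},
]

def intent_mix(sessions):
    """Fraction of sessions that are recall (known target) vs discovery."""
    counts = Counter("recall" if s["seen_before"] else "discovery"
                     for s in sessions)
    total = sum(counts.values())
    return {k: round(v / total, 2) for k, v in counts.items()}

print(intent_mix(home_sessions))  # → {'recall': 0.8, 'discovery': 0.2}
```

A redesign that flips a mix like this from roughly 90/10 to 10/90 shows up immediately in such a metric, which matches the traffic shift into search and library described above.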
When you do recall, you want a dense UI with many items on screen, because you know what you're looking for, so you don't need a lot of real estate. When you're doing discovery of new things, you want a lot of pixels, and you probably want sound, because you don't know what it is. So that's kind of what we learned about our UI. And I think there's maybe a little bit of product jealousy here. You always look at other experiences, and if you look around, you could be forgiven for thinking that most other products work that way. If you look at something like YouTube, for example, their homepage is exactly that: a huge, single-item discovery feed with only new items. And people don't seem to tweet angrily about-

    10. NA

      (laughs)

    11. GS

      ... how angry they are at YouTube. They say they love YouTube, and it's a big product. And I think what we discovered was that we actually did something really well on our homepage, which was supporting you being inside multiple sessions at the same time. You could be in the middle of two podcasts and an audiobook, and also just want to get to that workout playlist: I don't remember the name of it, but I know it's workout. We actually did that part really well, I would venture to say much better than the other experiences, where you literally have to go to some tab and into the library and start browsing to get back to where you were. And so maybe it's path dependent: because we had done recall pretty well, people got, I think, reasonably upset when they couldn't do the recall anymore. And we didn't want to lose that, because it was one of the things we did well and had underestimated. My takeaway is actually that we do it better than other experiences, so we certainly want to keep it. So what we did was update the hypothesis to achieve the same goal. These discovery tools are working: when people want to discover, they use them, and they seem to work. They can also get better, you know; you're on this hill-climbing journey from a machine learning point of view. But the question is, how do you make sure that whenever people feel, "I'm trapped in my taste bubble," they understand that these things are there and are easy to use? So now we have a version of Home, which we are also testing, obviously, where these things are very available but voluntary. And you can still do all of the recall.
And so from my point of view, this is the reason we A/B test, because you want to be scientific about it, and you want to learn as much as possible about your own product and your users. And now I'm sharing a lot of the learnings. Maybe we should keep them (laughs) to ourselves, but my hunch is that it's going to make it a much better product. But what I told my teams when we went into this, because I've done this a few times: I think there are two fundamentally different types of product development. One is designing a new feature. It is hard to make, but it's voluntary for people to use. You do the AI DJ, and some people love it. That's fine. If you don't like it, it didn't make anything worse for you. But when you redesign, it is much more tricky, because it's not voluntary to participate in the (laughs) redesign. There is a cost even for people who don't like it. You have a very tricky problem here, which is that there are going to be two types of feedback. One is you did something and it was right, but people are upset-

    12. LR

      Mm-hmm.

    13. GS

      ... because you changed stuff. The other is you did something and it wasn't right, and people are also upset-

    14. LR

      (laughs)

    15. GS

      ... but for good reasons. And so how do you separate these two? When we talked through this with my teams, the analogy I used is your physical desktop. You have your computer in one place, your pencil over here, your notebook over there. And I come in and I just rearrange all of it. And you have spent, in our case, maybe 12 years with that setup. It doesn't matter if I have a lot of quantitative data that my new setup is better. You're going to get upset, because you were effective in the old setup. And it's hard to tell those two apart. The most classic case is the Facebook News Feed, which people were very upset about when it became a single feed, but it turned out to solve a lot of user problems, that you didn't have to run around all over Facebook collecting events yourself. So there are some ways of understanding whether you've made it better but people's habits are broken, or whether it's just not better. One, for example, is to look at new user cohorts that don't have that behavior versus old user cohorts, and so forth. So we went through all of this with the teams before we did it. I said, "This is gonna be painful. It's probably gonna be a lot of tweets." (laughs) 'Cause the chances that we get it exactly right are very low. For that reason, it hasn't been very hard on the team. It is hard... You want to respond to people, but the right way to do it is to listen, understand, and try a new hypothesis to really figure out what's going on. I think I've done this maybe three times now: one unsuccessful, two successful. So I kind of knew what I was getting into. It's almost like you punish yourself. It's very painful, but also the most exciting thing.
And I think any product person knows that the easiest and most straightforward thing to do is to iterate around where you are. There's no risk. You're not gonna get fired. No user is gonna get angry. But everyone also knows that eventually, if you don't adopt new technologies, new paradigms, et cetera, you're going to get replaced, so you have to find this balance of trying new things. And when you work in software, you have this tool of A/B testing and being scientific about it. When you build hardware, it's worse. If you're wrong, you're wrong. You can't update.
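The cohort check Gustav describes can be sketched in a few lines of Python. Everything here is hypothetical: the metric (sessions where the user found what they wanted), the cohort sizes, and the rates are all made-up numbers for illustration. The idea is that new users have no old habits to break, so if a redesign improves their numbers while long-tenured users regress, the drop is more likely broken habits than a genuinely worse product.

```python
# Hypothetical sketch: split an A/B result by user tenure to separate
# "broken habits" from "actually worse". Uses a two-proportion z-test.
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-score for the difference between two conversion rates (B minus A)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up numbers: sessions in which the user found what they wanted.
cohorts = {
    # cohort: (control_hits, control_n, redesign_hits, redesign_n)
    "new_users":      (4_100, 10_000, 4_600, 10_000),
    "existing_users": (6_500, 10_000, 6_100, 10_000),
}

for name, (ca, na, cb, nb) in cohorts.items():
    z = two_proportion_z(ca, na, cb, nb)
    print(f"{name}: control={ca/na:.1%} redesign={cb/nb:.1%} z={z:+.1f}")
```

With these invented numbers, new users improve significantly while existing users regress significantly: the pattern consistent with "the design is better, but we broke long-time users' recall habits" rather than "the design is worse".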

  11. 57:26–1:02:35

    How they tested their hypothesis before launch

    2. LR

      I love this story. I so appreciate you sharing it. I imagine also with a big launch like this, you can't actually A/B test it ahead of time 'cause the press sees it. They're like, "Oh my God, look what Spotify's doing." And so you're kind of limited there, I imagine, right? You couldn't really test this ahead of time.

    3. GS

      The hardest thing about this is that if you're trying something completely new, the MVP needs to be very big. You can build a new UI, but if you didn't build the algorithms for a single-item feed, you can't tell whether it was the right idea with poor machine learning, right? You have to build a lot, and that gets quite expensive. That's actually the biggest reason why it's painful: not really the feedback from the outside, but the cost you have to take on the inside, you know? You incur a lot of cost, 'cause you're really hoping you're right. And in our case, the changes on the homepage aren't that hard for us to do. The important thing is that the underlying hypothesis of "Can we help you break out of your taste bubble?" actually works, and then you update the acquisition funnels into that experience. But the problem is that you need to get so many things in place before you can judge the result. You might get a false negative just because you didn't do it well enough. That's the biggest challenge, I think, with these big rewrites, where everyone has to update everything before you can know if you're right or wrong.

    4. LR

      What was that process like of understanding what was working, what wasn't, and what you wanted to change? I imagine there's a bunch of data you're looking at, some tweets, things like that. What was the tactical, "Oh, shoot. Something's not going the way we expected. Here's what we should do"?

    5. GS

      Well, the feeds we tested, but the home feed we rolled out and tested afterwards, on users, with a few different variants of it. And then we got the data back, and we looked mostly at the quantitative data, but we also do a lot of user research where people sit and use the feeds, to understand and build our own theory of what is working and what is not working. And then obviously, you look at user feedback, of course. And some users are very good at expressing-

    6. LR

      (laughs)

    7. GS

      ... what isn't working. Others are not as good (laughs) at expressing what isn't working, so it can be hard to parse that. But certainly, that's a factor as well. And once you do that, you have quantitative data to look at. And then you sit and reason through what you think is right and wrong, what the different hypotheses are, what is working and what's not, and then you update and test again and again until you prove or disprove your hypothesis. Trying to be as scientific as possible about it. And I think the biggest risk, when you've invested so much time in something, is getting precious about things. You have to just be brutal. You have to believe in things 100% until the data says no, and then you believe in something else 100%. That sounds easy. It's very hard to do. To the extent that people get upset when you do it, because for some reason, people don't like when people change their mind. It is what we should want from everyone. I would love a politician who said, "I've looked at the data, and I realized, actually, this is right, and now I believe this." But we hate politicians who do that. You know, they feel untrustworthy, and we ridicule them. So I think that's the biggest risk with anyone. You just have to be unemotional and look at the proof in the data. And then, if you do that, you just move on and you get to where you want to be, and you solve the same problem but you adapt.

    8. LR

      I really like that philosophy. Essentially it's the idea of strong opinions loosely held, right? Is that-

    9. GS

      Exactly. That's exactly what it is. And it sounds so easy, but it's hard.

    10. LR

      Right. 'Cause to your point, people don't respect someone changing their mind. They're like, "Oh, I see. They were wrong the whole time, and they were so confident about being wrong."

    11. GS

      Yeah, exactly. And it's unclear why, but I think it has something to do with human psychology. We actually tend to love prophets and people who hold very strong opinions with very little data. Those are the people we like. (laughs)

    12. LR

      (laughs)

    13. GS

      People who look at a lot of data-

    14. LR

      Excellent.

    15. GS

      ... and actually adapt, we don't like. Not sure why.

    16. LR

      We're flawed. Flawed creatures.

    17. GS

      For sure.

    18. LR

      Is there something that you've recently changed your mind about along these same lines that maybe comes to mind of like, "Oh, yeah"?

    19. GS

      No, I think these learnings about what our own design system and homepage do really well, maybe better than others, we don't want to wash out with the bathwater, or whatever the appropriate expression is. I think that's the biggest current learning, and one I'm actually very happy about.

    20. LR

      Yeah. I love learning that we're doing something really well that we didn't necessarily realize, and maybe we should lean into it more.

    21. GS

      Exactly.

  12. 1:02:35–1:03:53

    Gustav’s “10% planning time” methodology

    2. LR

      Going in a somewhat different direction, Shishir Mehrotra suggested I ask you something. He's on your board, I believe. And-

    3. GS

      Yes.

    4. LR

      ... he suggested I ask you about your 10% planning time. What is, what is that about?

    5. GS

      This is a concept that I think Shishir has used for a long time, ever since he worked at YouTube. The idea is that you shouldn't be spending more than roughly 10% of your time planning versus executing or building, which means that if you're working quarterly, sort of 10 weeks, you should spend one week planning. We work in six-month increments, so we try to spend two weeks planning, and we're roughly successful at that. And actually, when we talk about org models, I'll give a shout-out to Brian Chesky at Airbnb, who is one of the first, I think, to have these more contrarian org models. He's much more Apple-esque than most of Silicon Valley. He also works in six-month increments, so he has a lot of experience with that as well. So that's what the 10% planning time is. And I think if you find yourself planning much more than that, you're either planning too much or your execution period is just too short for that amount of planning. It's a rule of thumb, but I find that it works.

  13. 1:03:53–1:08:07

    How to bring energy and clarity to your work

    2. LR

      I asked a few PMs who work at Spotify what I should ask you, things they haven't told you themselves. And someone pointed out that you always bring a lot of energy and clarity to a room. That's something they see you as really strong at. What have you learned about the importance of that, or just how to do that well as a leader?

    3. GS

      Well, that's great to hear. I didn't know that, so (laughs) I'm trying to figure out what to answer. On the energy, I don't know. I guess I'm just excited about what I do. I've always been excited about technology. I love seeing new things. My core drive is still this notion of, you know, you see something, which I think you'll empathize with, that doesn't exist yet, and you're like-

    4. LR

      Mm-hmm.

    5. GS

      ... "Wow, I wonder if that could exist. That would be so cool." And then in order to get people to do it, you try to share that excitement. So I don't think I can bring a lot of energy to something I'm not excited about, so I kind of have to work on things I actually (laughs) believe in and am excited about. And maybe then the energy comes more naturally. Fortunately for me, so far Spotify has been in a phase where a lot of innovation is allowed, and I'm even asked to try to do new cool things. Maybe I would have less energy if we were in a pure optimization phase. On the clarity, I've always liked trying to explain things. It's a well-known fact that the best way to understand something is to try to explain it to someone else. So I go around explaining things to people who didn't ask for it.

    6. LR

      (laughs)

    7. GS

      And not to sound smart, but to see if I actually understood it. (laughs) So maybe it's that practice. And on that note, I actually do ask the leaders who work for me, and I ask them to ask their leaders, to always explain themselves. We talked a little bit about autonomy and so forth, and we don't promise everyone that they'll agree, but I think the promise we should make to all employees is that even if they don't agree, they're entitled to understand why you're making the decision. What I don't think is acceptable is to say, "No, we're gonna do it this way because I'm more senior. I've seen this a bunch of times. You're not smart enough." All of those things. I think you have to explain yourself; you owe an explanation. And I find that valuable, back to the idea that the only way to understand something is to explain it, because it usually turns out that if you can't explain it, you probably don't really understand it yourself. Sometimes, I think it's possible that you can have product instincts that are good but you can't express them. But most often, when people say, "There's something there," but they can't explain it, they actually don't understand it themselves, and many times there actually isn't anything there. And also, if you can explain it as a product person, that knowledge is now shared, so it just becomes much more effective for the organization. So sometimes I try to provoke people a little bit. When people ask, "How much is art versus science?" I say, "It's 0% art, 0% magic, and 100% science." And that's because I want to force people to try to explain it. We have historically used the words art and magic for anything that we couldn't yet explain.
You know, genetics was magic and art until it was science, and quantum physics was magic until it was science. And most recently, intelligence and creativity were art and magic until they were statistics in an LLM. So I try to push people to say, "Are you sure you can explain this?" Because that forces people to think it through. So maybe that's it. I like it and I try to force it on people, so maybe that's why people think I sometimes bring clarity.

  14. 1:08:07–1:10:29

    How to systematize deep thinking

    2. LR

      I love that. A question along those lines: is there a system or an approach to explaining that you recommend? Is it write it out in a document? Is it explaining it in a certain style? Or is it just however is natural to the person?

    3. GS

      I used to write everything, and then write and rewrite and make it more and more condensed, and that worked for me. I don't write as much anymore. Now I tend to walk and talk in my head (laughs) to myself. And I've found this is different for different people. A lot of people want to bounce something off someone else; that's how they think. You repeat the same thing again and again and you get some feedback on it. So I used to write a lot, and I sometimes still do when it's an idea I want to understand better, and at some point in my life I would love to write something real, like a book or something. But what I do increasingly now is my one-on-ones with peers or people who report to me: I just put on AirPods and do a distributed walk and talk. Both people are walking, but in different locations, and you spend an hour discussing something. That has actually turned out to be very, very fruitful. You get the power of not being alone, so you get more brain power than your own. And I don't think there is strong evolutionary proof for this, but there are certainly indications that you think better when you're walking, whether it's because you're oxygenating your brain or because of some other evolutionary reason, I'm not sure. But I've found that walking, talking, and thinking, even if you're not in person, just over AirPods, is super effective. It was the pandemic that kind of forced this. I thought we would get less creative and that strategizing would suffer during the pandemic, and I found the opposite. We had more ideas than ever, and I started thinking about why, and I think it's all of these walk and talks that we did.

    4. LR

      You kind of threw out there that you want to write a book someday. What do you think your book would be about?

    5. GS

      I have no idea.

    6. LR

      (laughs)

    7. GS

      (laughs) No idea. Statistically it's probably gonna be about something I've done a lot, so it would have to be about technology or product or something.

    8. LR

      Mm.

    9. GS

      But I would love to write something fictional. That'd be a lot of fun.

    10. LR

      Oh, boy. Uh, I'll pre-order as soon as that's up.

  15. 1:10:29–1:11:38

    The peeing-in-your-pants analogy

    1. LR

      Another concept I wanted to touch on that another PM suggested, which he called the peeing-in-your-pants analogy. Does that ring a bell, and is that interesting to talk about?

    2. GS

      I don't know exactly which occasion this person is referring to (laughs), but I know I've used that analogy a few times.

    3. LR

      Okay. Promising.

    4. GS

      (laughs) I don't know if it's a Swedish analogy, because I thought it was more widely known, but the idea is that you do something... So the saying is that something is like peeing in your pants in cold weather. It feels really warm and nice to begin with, and then after a while you start to regret it.

    5. LR

      (laughs)

    6. GS

      It's about being short-term, basically (laughs). So now I just say, "That's like peeing in the pants," and people know what I mean. It's a short-term thing.

    7. LR

      That's a hilarious way of communicating that idea (laughs). Uh, must be a Swedish thing.

    8. GS

      Yes. I think, uh, Swedish people do it for some reason, apparently. Others don't.

    9. LR

      Maybe 'cause it's cold a lot of the year.

    10. GS

      Yes, that's probably it. This is a saying for cold climates. In warm weather it doesn't help-

    11. LR

      Okay.

    12. GS

      No one understands what you mean.

  16. 1:11:38–1:13:30

    Thoughts on how the Swedish culture is portrayed in Succession

    2. LR

      Speaking of Sweden, uh, do you watch Succession?

    3. GS

      Yes, I do.

    4. LR

      Okay.

    5. GS

      For sure.

    6. LR

      So Sweden has become a big part of the show, uh, specifically the company trying to... I guess I don't want to spoil, but there's a character that's really-

    7. GS

      Matsson.

    8. LR

      ... uh, important. Yes, exactly, that is Swedish. And so I'm curious just (laughs) what do you think of the way they portray the Swedish culture and Swedish business dealings?

    9. GS

      It's super fun to see this as a Swede. And I guess, first and foremost, any person or any country that gets represented by the super tall, well-built, great-looking Alexander Skarsgård should probably be pretty happy. So (laughs)-

    10. LR

      (laughs)

    11. GS

      So that's good. Then there's this episode where they are in Norway, not giving away too much.

    12. LR

      Yep.

    13. GS

      There are elements that are authentic. There's a lot of, I think, paid brand positioning from a Swedish brand named Fjällräven, which I think means arctic fox.

    14. LR

      Mm.

    15. GS

      Which is actually a very popular outdoor brand in Sweden, so that's kind of authentic. The sauna things and so forth are authentic. So it's real, but it's exaggerated. Actually, the thing that isn't very authentic is his negotiation style.

    16. LR

      Mm.

    17. GS

      Swedish people tend to be serious, cautious. And this guy's more of a player.

    18. LR

      Mm-hmm.

    19. GS

      So he's not the typical Swedish businessman from a negotiation-tactics point of view, I think.

    20. LR

      Yeah. It doesn't match the way you described it, where in Sweden people sit in a circle and no one's in the center.

    21. GS

      No, exactly.

    22. LR

      Because of Jantelagen.

    23. GS

      (laughs) He's very much in the center.

    24. LR

      And then when people go to saunas, they just chant, "Sauna, sauna." Right? In the-

    25. GS

      Exactly. (laughs)

    26. LR

      ... last episode.

    27. GS

      But it is a great show. I love it.

    28. LR

      I love it. This season is insane. I am so curious where it all goes.

  17. 1:13:30–1:15:52

    What’s next for Spotify and Spotify Podcasting

    1. LR

      Maybe just one last question before the very exciting lightning round. Spotify is at this point the biggest podcasting platform, for me specifically and I think globally. And I love using it. It works great. I'm curious what's next for Spotify and specifically Spotify podcasting.

    2. GS

      There are two sides to it: Spotify for creators and Spotify for listeners. For creators, there are two things. One is... and this is what we talked about at Stream On. We talked about it mostly for music discovery, but it's the same problem, and even harder, for podcasts. So we're still focused very heavily on helping podcast creators find more audience. Like I said, it's an even bigger problem to break out of your habits and your bubbles, because it's such a big investment to find a new podcast. And so that is something I think we could and should do really well, so we keep investing a lot there. And as I said, you'll see more as we roll out more features now. The other big need for creators is monetization. You can monetize today in many ways, with DAI and Spotify's SAI and so forth. But we're working hard to expand that and make it better, 'cause the industry's starting to mature. And I think this is one of the biggest needs and one of the biggest things we could do for creators: help them monetize better, actually both free and paid. We also have paid podcasts. So that's the creator side. On the consumer side, I don't want to share too much. We've shown that we're investing a lot in discovery. I want to keep some secrets for when they roll out. But we are investing a lot in the user experience itself. I think it's far from optimal yet, from what it could be. One thing I can share is that we're investing a lot in ubiquity and playback across different devices, in cars, and all these things that we've done well for music. But I think the listening experience can get a lot more seamless, and I think search can get better, the data about podcasts... Well, I don't want to say too much, but looking at AI and generative technology, there's a lot that can be done.

  18. 1:15:52–1:18:37

    Lightning round

    2. LR

      All right. Well, (laughs) I'll take what I can get. With that, we've reached our very exciting lightning round. I've got six questions for you, Gustav. Are you ready?

    3. GS

      I think I am. Let's do it.

    4. LR

      Okay. Let's... We'll find out. What are two or three books that you've recommended most to other people?

    5. GS

      Okay. This is where I try to squeeze seven into two or three.

    6. LR

      (laughs)

    7. GS

      So if we start with product, I think it's well-known, but one I would recommend people read is 7 Powers by Hamilton Helmer, which Netflix has used a lot, and we use a lot. If you're starting out, it's great to have a strategy framework. No strategy framework is right, but having one is better than none. Another, in the space of mental models and frameworks, is Charlie Munger: The Complete Investor. Yes, it's about investing, but really it's a bunch of mental models that Munger uses, and I think the key takeaway is that when you have a problem, you should always apply three different models to it. Because what models do is simplify and reduce dimensionality, you know? The world has probably infinite dimensions, and a model reduces this to maybe three or four. The risk with that is you happen to get rid of a really important dimension, like, you know, maybe pandemic diseases or something. But if you use three models that have different dimensions and were reduced in different ways, and they come to the same conclusion, even the second model you apply vastly increases your chances that you're right. So that was a good book to read. Then, if we go outside of product, I'm very interested in science and mathematics. So a few quick ones: The Mystery of the Aleph, an amazing book. Something Deeply Hidden by Sean Carroll, on the Everettian interpretation of quantum mechanics. Helgoland by Carlo Rovelli, on the relational interpretation of quantum mechanics. The Beginning of Infinity and The Fabric of Reality by David Deutsch.
The Case Against Reality by Donald Hoffman, on evolution versus truth, and how evolution doesn't optimize for seeing the truth, just for fitness. Gödel's Proof, I think, is an amazing book on the incompleteness theorem: in any axiomatic system, there will be true statements that can never be proven, which is a weird thing to think about. And then maybe one of my favorites is The Demon in the Machine by Paul Davies, which I think is lesser known, on how information is really just entropy, and this concept of information engines: that you can power something with just information, and the exhaust is also information. That was not a quick list.

Episode duration: 1:24:30


Transcript of episode QtJoYFyrdPI
