Lenny's Podcast

Howie Liu: How Airtable refounded its product for the AI era

From a fast-thinking/slow-thinking org split to becoming an IC CEO who cut his one-on-ones; Liu still ranks as the number-one inference-cost user of Airtable AI.

Howie Liu (guest) · Lenny Rachitsky (host)
Aug 31, 2025 · 1h 40m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–4:05

    Introduction to Howie Liu and Airtable

    1. HL

      If you were literally founding a new company from scratch with the same mission, how would you execute on that mission using a fully AI native approach? If you can't, then just find a buyer. And then if you really care about this mission, like, go and start the next incarnation of it.

    2. LR

      For people that work for you, how have you adjusted what you expect of them to help them be successful?

    3. HL

      If you wanna cancel all your meetings for, like, a day or for an entire week and just go play around with every AI product that you think could be relevant to Airtable, go do it.

    4. LR

      Of the different functions on a product team, PM, engineering, design, who has had the most success being more productive with these tools?

    5. HL

      It really does become more about individual attitude. There's a strong advantage to any of those three roles who can kind of cross over into the other two. As a PM, you need to start looking more like a hybrid PM prototyper who has some good design sensibilities.

    6. LR

      Do you see one of these roles being more in trouble than others? Today, my guest is Howie Liu. Howie is the co-founder and CEO of Airtable. I'm having a bunch of conversations on this podcast with founders who are reinventing their decade-plus-old business in this AI era to help you navigate this existential transition that every company and product is going through right now. Howie and Airtable's journey is an incredible example of this, and there's so much to learn from what Howie shares in this conversation. We talk about a very interesting trend that I've noticed that Howie is very much an example of, of CEOs almost becoming individual contributors again, getting into the code, building things, leading initiatives themselves. There's something that we call the IC CEO. We also talk about the very specific skills that he believes product managers and product leaders, also engineers and designers, need to build to do well in this new world that we're in. Also, how he restructured his company into two groups, a fast-thinking group and a slow-thinking group, which allowed their AI investments to significantly accelerate. If you're struggling to figure out how to be successful in this new AI era, this episode is for you. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become an annual subscriber of my newsletter, you get a year free of 15 incredible products, including Lovable, Replit, Bolt, n8n, Linear, Superhuman, Descript, Wispr Flow, Gamma, Perplexity, Warp, Granola, Magic Patterns, Raycast, ChatPRD, and Mobbin. Check it out, lennysnewsletter.com and click Product Pass. With that, I bring you Howie Liu.
This episode is brought to you by Lucidlink, the storage collaboration platform. You've built a great product, but how you show it through video, design, and storytelling is what brings it to life. 
If your team works with large media files, videos, design assets, layered project files, you know how painful it can be to stay organized across locations. Files live in different places. You're constantly asking, "Is this the latest version?" Creative work slows down while people wait for files to transfer. Lucidlink fixes this. It gives your team a shared space in the cloud that works like a local drive. Files are instantly accessible from anywhere. No downloading, no syncing, and always up-to-date. That means producers, editors, designers, and marketers can open massive files in their native apps, work directly from the cloud, and stay aligned wherever they are. Teams at Adobe, Shopify, and top creative agencies use Lucidlink to keep their content engine running fast and smooth. Try it for free at lucidlink.com/lenny. That's L-U-C-I-D-L-I-N-K.com/lenny.
Today's episode is brought to you by DX, the developer intelligence platform designed by leading researchers. To thrive in the AI era, organizations need to adapt quickly. But many organization leaders struggle to answer pressing questions like, which tools are working? How are they being used? What's actually driving value? DX provides the data and insights that leaders need to navigate this shift. With DX, companies like Dropbox, Booking.com, Adyen, and Intercom get a deep understanding of how AI is providing value to their developers and what impact AI is having on engineering productivity. To learn more, visit DX's website at getdx.com/lenny. That's getdx.com/lenny.

  2. 4:05–8:07

    The “Airtable is dead” viral tweet controversy

    1. LR

      Howie, thank you so much for being here, and welcome to the podcast.

    2. HL

      I'm so excited. Thank you, Lenny. I've, I've, uh, been a listener from afar for a while now.

    3. LR

      I'm, I'm really flattered to hear that. I'm also very excited. You've been on quite a journey over the last, uh, is it 13 years or is it, is it longer?

    4. HL

      Like, right, yeah, right about 13.

    5. LR

      13 years. I imagine there have been a lot of ups and a lot of downs. Uh, I wanna talk about all those things. I wanna talk about a lot of the lessons that you've learned along the way. I wanna start with what I imagine was a, a very surprising down moment in the history of Airtable. This is something that, unfortunately, is something I think about when I think of Airtable, uh, I feel other people maybe feel this way, is there's this tweet that went super viral, uh, maybe a couple years ago at this point, where someone just shared all this data and they're like, "Airtable is dead. They've raised way more money than they're worth. They're not making enough to get un- from un- underwater."

    6. HL

      Yeah.

    7. LR

      "Airtable RIP." Uh, what happened there? How much of that was true? How did that go?

    8. HL

      Yeah. So very... I mean, basically none of it was true. Uh, and, um... I mean, it wa- the surprising thing to me was how viral this tweet went when, frankly, like, I, I actually looked back at this person's, uh, other tweets. I think they, they, um, they worked at CB Insights. Uh, and the irony is, like, the, the whole point of that business is to have, like, good data, good data quality around private company data. And they just, like, literally had incorrect numbers by, like, a, a strong multiple on, like, what our revenue scale was, what our growth rate was, like, you know. And, and if it gave me some consolation, I looked back and, like, this person had also tweeted about other companies. Like, Flexport was the last, like, kind of takedown tweet. They, they had like, "Oh, Flexport's dead," and, like, you know, their, their, um, you know, their valuation is, is, um, you know, too high and blah, blah, blah. And so I think that the more surprising thing was just, like, this person has been tweeting a bunch of, like, spicy takes that are not substantiated by real data or correct data.... and yet, like, this particular tweet went super viral. And that was the perplexing part to me. Um, and then I think, actually, I think what, what, uh, really gave it legs was, um, on the All-In podcast, which is, like, obviously super popular, uh, you know, and I listen to it, like, you know, they, they commented. They were like, "Oh," like, you know, "latest on, on, uh, this week's news," like, you know, "this tweet about Airtable, what do we think about this?" And it almost, I think, became, like, um, a way to talk about a broader theme of what happens to this last generation of highly valued companies, maybe decacorn companies, in this new... And at that point, it was, like, kind of the recent moment for both public and private markets. Um, they did also issue a correction though. 
Um, All-In, uh, did a, a follow-up episode a few, few, uh, I think weeks later saying like, "Hey," like, you know, "we got the numbers wrong." Like, um, you know, "we, we're revising our case and, and kind of, uh, uh, view on Airtable."

    9. LR

      What's that line about how, uh, a lie gets around the world some number of times before truth even has time to get out of bed?

    10. HL

      Yeah. Yeah, yeah, yeah. Well, I, I think I learned about, um, uh, memes and virality very quickly in, uh, in that experience (laughs) . Not a very good-

    11. LR

      Yeah.

    12. HL

      ... social media person, but, uh, I think I learned a little more.

    13. LR

      Yeah. It's tough. Twitter's such a... the incentives are so misaligned. It's just, people tweet what people want to share, not truth.

    14. HL

      Well, I mean, e- especially, like, I mean, I, I, there's a lot to like. I would say net net I like the post-Elon Twitter more than the pre-Elon Twitter because it's, it's just bolder and, like, I, you know, I guess I, I really admire bold product execution where you're not just kind of stuck to, like, the current laurels. And they made so many changes, but, like, I do feel like I get injected into my feed very sensational content all the time. And I mean, it works on me. I'm like, you know, like, I can't help but to, like, click on it and engage with it and, like, you know, but it, it does, I think it does result in, like, this kind of content, like, really spreading.

    15. LR

      Yeah. Now Nikita's running the show. I don't, I don't know if you saw this, there's a new... We don't need to keep talking about Twitter, but there's a new feature where you take a screenshot of a tweet and it has, like, a huge x.com logo watermark-

    16. HL

      Oh, no.

    17. LR

      ... in the top right. Yeah. Just to, like, you know, people are-

    18. HL

      Interesting.

    19. LR

      ... sharing these tweets all the time. Yeah.

    20. HL

      Yeah. Yeah, yeah.

    21. LR

      Oh, man. Never a dull

  3. 8:07–10:57

    The rise of IC CEOs

    1. LR

      moment over there.

    2. HL

      For sure.

    3. LR

      Okay. I want to go in a completely different direction.

    4. HL

      Okay.

    5. LR

      Something that I'm really excited to talk to you about, which is this very, uh, emerging trend that I've noticed that I feel like you're at the forefront of, of CEOs becoming ICs again. It's kind of this move of, uh, IC CEOs, CEOs getting their hands dirty again, building again, getting in the weeds coding again. Feel like you're, again, at the forefront of this. Talk about just why you've done this, why you think this is important, and just what that looks like day to day to you versus what your life was like a few years ago.

    6. HL

      The underlying reason for this shift, at least for me, is that as we started the company, I was very much in this mode, right? Like, I was literally writing code, both on the backend, thinking about the real-time data architecture of, of our platform, also the frontend, the UX, um, and, you know, I would argue that, like, in that founding moment, like, the initial product market fit finding, um, and especially for a product that is, like, pure software, right? Like, we weren't building, like, a operationally heavy business, like a dog walking marketplace, uh, where the tech is only an afterthought. Like, the tech was the product, right? Um, and in a very meta sense, like, Airtable is the platform for other people to build their own apps, right? So, like, it's all about the, the tech. Like, the very intimate design decisions, um, again, both architecturally and, and, uh, on the frontend and the product UX choices, like, that is the product's value prop, right? Like, you can't separate those two. You can't say like, "Okay," like, "I researched the jobs to be done, here's the workflow, here's the process," and then, like, okay, some engineer can just build it as an afterthought. Like, it's those, like, little decisions and, and really being able to, like, be at the bleeding edge of what's possible both in the browser and with, like, you know, kind of the, the real-time data architecture, um, that made the product what it was, right? Uh, and I think the same is true for Figma, which, um, you know, actually, like, had a very parallel timeline to us. Like, we both were founded around the same time, both spent two and a half years building the product, um, like, hands on, uh, you know, that early team before launching. 
And, you know, when I think now to, like, both the era in between that founding moment and then now, as well as, like, now, the, the new kind of gen AI moment, like, I think there was a maturing era of both SaaS overall and Airtable specifically where, you know, as you scale up and you kind of learn how to build, you know, teams and organizations and, like, you have to kind of, like, scale up stuff that's not actually those intimate details, but process and people and so on, you kind of get, you know, by default, further and further away from those details, right? And maybe for some businesses, that's fine, because, like, no longer is it about finding, like, the, the details that make for a magical new product market fit, and it is really just about scaling up an existing thing that works, right? Um, and using what I would call, like, more blunt instruments, uh, to kind of scale it up, right? Like, a more blunt roadmap, a more blunt, you know, kind of go to market execution strategy. Regardless,

  4. 10:57–16:27

    AI’s paradigm shift in product development

    1. HL

      I think that now we're, we're entering this moment where, like, every, I mean, certainly every software product, in my opinion, has to be refounded because, like, AI is such a paradigm shift. It's not even, like, just, like, the shift from desktop to mobile or on prem to cloud, where that was more like a, a, a very one time and somewhat predictable change in form factor. Like, I think AI is so rapidly evolving that with every evolution, like, every new model release and every new type of, like, capability that's released, it actually implies novel form factors and novel, like, UX patterns to be invented to fully capitalize on those capabilities. And so, like, to be continuous- uh, continuously relevant and to kind of refine product market fit in this era, I think you have to be in the details. Like, there is no, like, you know, looking at it from 10,000 foot view and saying, "Oh, we're just gonna throw a bunch of people at this problem." It's actually understanding, like, what is the right product experience and the right business model that backs it up, um, and the right, you know, everything else to support that engine to take advantage of the capabilities in our product domain?

    2. LR

      You have this phrase somewhere where you, you talk about being the chief taste maker.

    3. HL

      Yeah.

    4. LR

      And to do that, you have to do exactly what you're describing.

    5. HL

      That's right. I mean, I think that, and like I would also say, like, it's actually now also hard to taste the soup without participating in, like, at least some part of creating the soup, right? And, like, meeting with AI, you can kind of look at the final product and say, "Okay, like this, this feels right or not," or like, "It feels like we're being bold enough and we're, we're properly, you know, productizing these new capabilities." Um, but I think, like, to really understand, you know, the solution space of what's possible, you kinda have to be in the details, right? I mean, literally, like, you can't just look at, you know, kind of screenshots or, like, a pre-recorded video of, like, a, a new product feature. Like, AI is something you have to play with, and ideally you're playing with both the, like, kind of packaged up, you know, app or solution that you've built with it, but you're also playing around directly with the underlying primitives. You're using the models either via API or via, like, a chat interface, like you're really pushing them to the boundaries, and, like, because that's the only way that you really understand what these new ingredients... It's like, as a chef, you just gained access to, like, amazing new ingredients, but you have to, like, actually kinda get comfortable with them to put them into a new dish.

    6. LR

      Uh, we had, um, Dan Shipper on the podcast. He runs, uh, this newsletter and podcast company called Every, and they work with companies to help them become more AI successful and adopt AI and all that stuff. I asked him, what's the si- what's the signal that a company will have success adopting AI and seeing huge productivity gains?

    7. HL

      Yeah.

    8. LR

      And he said it's, "Does the CEO use ChatGPT or Claude daily?"

    9. HL

      Yeah.

    10. LR

      And I feel like you're describing exactly.

    11. HL

      Every hourly. Right? (laughs)

    12. LR

      Hourly. (laughs)

    13. HL

      Early hourly, like, or, you know, you could even, like, have a measure of, like, inference, uh, like, cost, right? Like the equivalent underlying, like, inference compute cycle, right?

    14. LR

      How many tokens they use. (laughs)

    15. HL

      Yeah, I mean, I, I'm proud to say, like, I am a, I'm, I'm pretty sure I'm still the, um... I, I just checked this recently, but, like, uh, I take pride in being the number one most expensive inference-cost user of Airtable AI. Uh, not just within our own company, but I think for a long time I was globally across all our customers as well.

    16. LR

      (laughs)

    17. HL

      Like, I mean, I'm just... I'm, I'm, like, well, I mean, like, I'm extremely intentionally wasteful, uh, wasteful in the sense of, like, you know, I'll do something that costs like maybe hundreds of dollars of, like, actual inference cost, right? Like for instance, you know, doing a lot of LLM calls against, like, long, you know, kinda transcripts of let's say sales calls to extract different types of insights, like here's the product apps identified or here's summaries, et cetera, um, and we, we also have now a capability that's basically like an LLM MapReduce. So, effectively even if you can't fit, like, you know, the entire corpus of content into one LLM call, because the, the context window, uh, limitations, we'll map through, like, all of this content and break it up into chunks and then, like, perform an LLM call on each one and then perform an aggregation LLM call on those chunks. Very expensive, right? Because you're basically running, like, a highly expensive model against a lotta data and then running it again on the aggregates of that. But, like, for me, you know, like hundreds of dollars spent on this exercise is trivial compared to the potential strategic value of, like, having better insights. It's as if, like, a really, really smart chief of staff has gone through and read every single sales call, like, transcript that we've had in the past year and giving me, like, you know, you know, kind of very, uh, astute product insights, marketing insights, like, you know, kind of positioning insights and segmentation insights. Um, like that's invaluable, right? Like, you could pay a consulting firm, like, literally millions of dollars to get that quality of work so, like, to me, I still think the... like the value versus the actual cost of AI when applied greedily but smartly, like it's just... it's, it's, it's a crazy ratio and, like, more people should be, like, aggressively throwing compute cycles at these very high value problems.
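The "LLM MapReduce" Liu describes can be sketched roughly as follows: split content that exceeds the context window into chunks, run an extraction call on each chunk (map), then run one more call over the per-chunk results (reduce). This is an illustrative sketch, not Airtable's implementation; `chunk_text`, `llm_map_reduce`, and `call_llm` are hypothetical names, and `fake_llm` is a toy stand-in for a real model API so the sketch runs end to end.

```python
def chunk_text(text: str, max_chars: int) -> list[str]:
    """Naive fixed-size chunking; a real system would split on
    sentence or speaker-turn boundaries instead of raw characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def llm_map_reduce(documents: list[str], call_llm, max_chars: int = 8000) -> str:
    # Map: run an extraction call on each chunk of each document independently,
    # so no single call has to fit the whole corpus in its context window.
    partials = []
    for doc in documents:
        for chunk in chunk_text(doc, max_chars):
            partials.append(call_llm(f"Extract product insights from:\n{chunk}"))
    # Reduce: one aggregation call over all the per-chunk outputs.
    combined = "\n".join(partials)
    return call_llm(f"Synthesize these per-chunk insights into a report:\n{combined}")

# Toy stand-in model so the sketch is runnable without an API key.
def fake_llm(prompt: str) -> str:
    return f"[summary of {len(prompt)} chars]"

if __name__ == "__main__":
    transcripts = ["alpha " * 3000, "beta " * 3000]  # two long "sales calls"
    print(llm_map_reduce(transcripts, fake_llm, max_chars=8000))
```

As Liu notes, the reduce step is itself expensive because it re-processes the aggregated partials, but the total cost is dominated by the map pass over the raw corpus.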

    18. LR

      Until somebody tweets how you're eating, uh, costing the company so much on, on AI compute and you guys are gonna be underwater in...

    19. HL

      (laughs) Pretty much.

    20. LR

      Just kidding.

    21. HL

      It's like Howie has personally taken down, um-

    22. LR

      Sorry. (laughs)

    23. HL

      ... the, uh, the cashflow, uh, profile of the business. Like... (laughs)

  5. 16:27–21:38

    Specific changes Airtable has made

    2. LR

      So, okay, so CEOs, founders hearing this, they're probably like, "Okay, I, I, I should probably start doing this." What does this actually look like? I imagine you still have a lot of other stuff. You got one-on-ones, you got all these... Like, how do you actually... How have you changed your day-to-day to do this?

    3. HL

      Yeah, so I actually cut my one-on-one roster, uh, by default, uh, and the idea is not that I don't want to spend time one-on-one with people, but rather that I found that, um, just having more standing one-on-ones actually precludes me from, you know, engaging in more timely topics, right? Like, I like to think of, um, you know, the best types of meetings as, like, very, um, urgency driven and, like, you know, there's some timely topic like, you know, you've, you've discovered some insight. Maybe I talked to some new startup, right? Um, and, uh, you know, I learned something from, from their product or their approach and I wanna bring that into how we're thinking about, like, a new feature at Airtable or even just, like, plant the seed with like, you know, some different, like pro-... you know, EPD people within Airtable. Like, I wanna make most meetings, uh, very timely and very informed by like real alpha, right? There's gotta be some kinda value and insight, uh, to seed that with. Now, in addition to that, I'll supplement with, like, you know, when I'm in person, uh, you know, with someone, like, I wanna carve out time for, like, a, you know, a proper, like, catch up and, like, less structured, less, less, like, timely and just more of, like, you know, building a relationship with a human. But I actually find that, like, you know, having that con-... It's almost a barbell approach where it's like, you know, if you're gonna spend time with somebody in a free form way, like actually do it in a high quality, not like forced weekly ritual way, like go for a longer lunch or coffee walk or whatever, um, in person when you can. Uh, maybe that's like a once every month or two kinda thing, and then, like-... 
the, the in-betweens are either topical, so we do have standing meetings for, you know, like now, um, we have a, a weekly, basically, um, sprint check-in on all of our AI execution stuff, which now is, like, half the company, or half the, um, EPD org is working on AI capabilities. We're trying to shift very quickly, like, you know, I basically want to always ask the question, like, how would an AI native company, like a Cursor or a Windsurf, et cetera, like, how would they execute, right? And are we executing as fast as them and taking advantage of, like, all the new stuff as well as them? So, like, bringing that level of, like, kind of intensity and urgency to, like, how I spend my time within, that's been the main, the biggest shift, uh, for me.

    4. LR

      What's a change you've made to help the company move faster and, and match that sort of pace?

    5. HL

      Yeah, so, uh, I mean, we did do a reorg, uh, of, uh, the EPD org. So before we had... We've gone through a few different, um, kind of reorgs over the past, call it four years. The, the, you know, kind of original state as we just kind of proliferated, I think by default or incrementally, was that we had a bunch of groups that were each responsible for, like, a feature or a surface area. So, there was a group responsible for search within our table, and there was a group responsible for, like, mobile experience and, you know, so on and so forth, right? And, you know, that has its benefits, like, you know, obviously, like, that team can go and, like, you know, get really ramped up on that part of the code base, that part of the product. But it has the disadvantage of, you know, you, you tend to think incrementally when everyone's remit is actually, like, a feature that they incrementally improve by definition, as opposed to thinking about, like, a mission or, like, a outcome goal, right? That might need to, you know, uh, coordinate, you know, dramatic changes across a wider set of, of, uh, surface areas instead of just, like, each one kind of incrementally, uh, improving. And so we reorged, um, initially to basically different, um, business units effectively, right? So, uh, I know Airbnb has done, like, kind of the, the functional to GM, you know, back, et cetera. This was more like saying, look, we have an enterprise business, and the MO there is more about, like, scalability. Can we support, like, the larger scale data sets and use cases? Do you have the core capabilities needed to be able to, like, push out an app to maybe 10,000 seats or 20,000 seats for product operations, right? Um, so a lot of architecture, a lot of scale. Not gonna work. 
We would have a, uh, what we call the team's pillar, which is more about self-service, like kind of the product UX, like, how easy it is to, to adopt the product, onboard, share, do all the kind of, like, basic functionality. An AI pillar, solutions pillar, and, uh, and then basically infra. And what we found, though, with that approach, is that there was still, um, you know, there, there was more kind of, uh, holistic bets being made, so, like, you know, the team's pillar could think not just about one feature, but, like, the overall onboarding experience, where they'll really think about next, you know, in a way that touched multiple parts of the product. Um, but it still felt like it wasn't, especially as, as we started to execute more on AI stuff, like, it wasn't, you know, allowing us to aggressively and quickly move as a AI native company would, right? Like, I mean, when you look at, you know, the Cursors of the world, they're shipping, like, major new stuff every week. And like, you know, it's not like, "Oh, well, we have, like, this separate, you know, kind of roadmap for enterprise. We have this roadmap for, for, uh, this group." And, you know, it just feels like one, um, one cohesive product that's shipping at

  6. 21:38–32:57

    Fast- and slow-thinking teams

    1. HL

      a breakneck pace. So, we did this, uh, recent reorg where now we have the, what I call, like, the fast thinking, uh, group, which officially is called AI platform. Um, but it really means, like, we want to just ship a bunch of new, uh, capabilities on a near weekly basis, um, and each of them should be, like, truly awesome value, right? Like, you should drop your jaw at, like, how awesome it is to use this new capability in Airtable. And then separately, we have the slow thinking group, and that's not mea- meant to be, like, better or worse. Like, it's, it's literally, like, you need fast and slow thinking in the Kahneman sense, uh, to operate, right? Like, as a human-

    2. LR

      I have that book behind me. (laughs)

    3. HL

      Yeah, I love that book. Um, but, uh, but slow thinking is like, it's just a different mode of planning and executing, right? It's, like, more deliberate bets that require more premeditation, right? Like, we can't just, like, ship a new piece of infrastructure that has a lot of, like, uh, data complexity, uh, like, you know, our, our, uh, data store, HyperDB, that, um, now can handle like multi-hundred million record datasets. Like, that's not something you ship in a week, right? In a hacky prototype. So, we now have these two separate parts of the company, and I actually think what's, what's really cool is, like, they, they actually complement each other very well, right? 'Cause, like, the, the fast execution, the AI stuff, you know, that creates the top of funnel excitement. That, that also, you know, kind of inspires new use cases and new users to come into Airtable, including in large enterprises, right? Like, you know, enterprises can use this stuff too, it's not just like an SMB thing. But, like, the slow thinking basically allows those initial seeds of adoption to sprout and grow into much larger deployments. Whereas I think a lot of the challenge for many of the AI native companies I've seen is that they have, like, a very wide top of funnel. Like, get all of this AI tourist traffic, you know, a lot of interest, a lot of, like, kind of, like, you know, early usage, but then, you know, sometimes the, the challenge is how do you, like, turn that into more durable, you know, growth and, and get each of those adoption seeds to retain and expand over time.

    4. LR

      That is super cool. I've never heard of this way of structuring teams, the fast thinking, thinking fast, thinking slow, the Kahneman. (laughs) It's so interesting. For the fast thinking team, do you find there are specific archetypes of people that are successful there? Is it a lot of, like, bringing in new people that are not just used to the way of working at Airtable? What do you find?

    5. HL

      We, we have a mix. So, you know, we've brought in, uh, I mean we're, we're always hiring, right? Like, there was never a point in, um, in the company's life where we stopped hiring and that, you know, candidly, even when we had to do, uh, two RIFs, right? That, that significantly, you know, kind of reduced our headcount, you know, we had just, like, way too quickly grown and overscaled the business at a certain point, um, but even when we did our RIFs, we were still actively recruiting and hiring, um, you know, in, I mean, every major department, but especially in, uh, in EPD because, you know, it's always been my belief that, like, you, you all, like... It would be arrogant to say that we have all the people we ever need already in, in the, uh, roster today. Right? Like, we're always gonna need to find new, fresh perspectives, new skill sets, et cetera. Um, and so, you know, we- we've continued to hire. Um, I think we've learned, uh, as we've gone along of, like, you know, what is the ideal type of hire. And, you know, we've done some acqui-hires and learned from that as well. Um, but I think the fast thinking part, it really just requires a, a lot of, like, um, somebody who's able to operate with a lot of autonomy, right? Like, you know, who's entrepreneurial in nature. That doesn't mean, like, they have to literally be a former founder. I know some companies are, you know... Like, Rippling, for instance, does a lot of actual acquisitions. It gets actual founders into the company. Like, we found that, you know, that- that's great and we've done some of that as well, but, like, also there are some really, really capable people who, like, we didn't literally have to, like, acquire in and yet they're just able to, like, think full stack about the problem and, like, the user experience. Problem not just meaning, like, you know, the- the technical layers of the problem, but, like, also, like, what is the wow factor we're trying to create? Right? 
Um, so tangibly, like, you know, um, we're- we're doing this new thing that's about to ship where, you know, not only can you describe the app you want to build and then iterate on it with, you know, kind of our conversational agent, Omni, but, um... And it builds it with, like, the existing Airtable platform capabilities. But, um, we're also giving it the ability to actually do code gen to extend those apps with, like, really final mile, very bespoke functionality or, like, uh, visuals. Right? So you could say like, "Hey, generate me a very, very specific type of map view with, like, this kind of, like, uh, heat mapping and this kind of, like, you know, icons. And when you click it, do this." And, like, that's a capability that, like, there's so much ambiguity in some of the design decisions around it. Like, you know, um... And, and you have to blend that design thinking with some of the technical con- constraints of, like, what can the AI models actually one-shot effectively? And if not, like, how do you add in, like, the right human workflow for approval and review and then re-prompting and so on? So, just so many different, like, design decisions and you need somebody who can, like, really think full stack about that kind of product and is not overwhelmed by that, you know, kind of open-endedness, but, like, relishes in it.
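The generate → review → re-prompt loop Liu sketches (try a one-shot generation, have a human approve it or explain what's off, fold that feedback into the next prompt) could look roughly like this. It is a minimal illustrative sketch, not Airtable's actual workflow; `generate` and `review` are hypothetical stand-ins for a model call and a human review step.

```python
from typing import Callable, Optional, Tuple

def codegen_with_review(
    spec: str,
    generate: Callable[[str], str],          # model call: prompt -> generated artifact
    review: Callable[[str], Tuple[bool, str]],  # human step: artifact -> (approved?, feedback)
    max_rounds: int = 3,
) -> Optional[str]:
    prompt = spec
    for _ in range(max_rounds):
        artifact = generate(prompt)            # attempt a one-shot build
        approved, feedback = review(artifact)  # approve, or say what's wrong
        if approved:
            return artifact
        # Re-prompt: keep the original spec and append the reviewer's feedback.
        prompt = f"{spec}\nReviewer feedback: {feedback}"
    return None  # after max_rounds rejections, escalate to a human instead
```

The design choice here mirrors the point in the conversation: when the model can't one-shot the request, the loop keeps a human in the approval path rather than letting the model iterate unsupervised.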

    6. LR

      I was actually playing with it, uh, before we started chatting. I made a really cute startup CRM.

    7. HL

      Oh, that's awesome.

    8. LR

      Yeah. Started talking to Omni over here. It's like, the colors are beautiful. That was-

    9. HL

      Oh, sweet.

    10. LR

      That's what's standing out to me right now. Um.

    11. HL

      I mean, it is, um-

    12. LR

      Yeah.

    13. HL

      I will say, like, just as a, as a note, um, you know, I consider myself, like, at my core, like, a product UX person. Right? Like that, that's my, like, passion and, you know, everything else I've had to learn to, to kind of run this company, uh, is almost like what was a necessary, you know, part of the, the journey. Like, you know, but, but, like, my real passion is thinking about product UX, right? And I... You know, I- I think of UX in a deeper sense than just, like, the cosmetic, like, design. Like, you know, what you put into a Framer, you know, kind of prototype. Like, I think of it as, like, literally, like, what should this product do and how should it represent that and behave for the user? That is the product, in my opinion. Right? Um, and of course then you have to figure out, like, technically what's possible and how to implement it. But, like, I think to me, um, what's under-executed today in the world of AI products is, like, there's so many awesome capabilities of AI, and most of them are really under-merchandised, and there's, like, very poor, actually, visual or otherwise metaphors or affordances given to users to help represent or understand, like, what those under- underlying capabilities are. Right? Like, I mean, ChatGPT obviously, like, you know, extremely successful product so not knocking it at all. But, like, you come in and you just make- get this, like, completely blank chat box. Right? By default. And now they have suggestions underneath it and, and so on. But, like, you know, the product UX part of me is just, like, craving more visual metaphors or colors or some kind of, like, use the canvas of a web interface to... And, and all the richness, um, you know, of interaction you create there to better represent or, or show all the different things that you can do with, uh, you know, wi- with the underlying model. Right? 
Um, and so that's something we try to do with Airtable, is like show, like, all of the different states and, like, use colors even to play those off.

    14. LR

      It's interesting how much of this connects with... I just had Nick Turley on the podcast. He's head of ChatGPT at OpenAI, and he had these two really interesting insights that resonate, uh, directly with what you're describing. One is he has this concept of whenever something is being worked on, he's always asking, "Is this maximally accelerated? How do we move faster? Is th- if this is important, what would allow us to move faster?"

    15. HL

      Yeah.

    16. LR

      And I love that that's one of the themes that's coming up as you talk, is just this-

    17. HL

      Yeah.

    18. LR

      ... creating this very clear sense of speed. And you even call it the fast thinking team. Like.

    19. HL

      Yeah.

    20. LR

      You are gonna move fast.

    21. HL

      Yeah.

    22. LR

      And then the other one is just this insight that with AI you often don't know what pe- what it a- what it can do and what people want to do with it-

    23. HL

      Yeah.

    24. LR

      ... until it's out. So there's this need to get it out and that'll tell you what it should be.

    25. HL

      I, I couldn't agree more with, with both those and p- particularly on the second point. You know, I think it's interesting, like, clearly there have been companies, um, that ha- have both been successful in PLG and, like, kind of more sales-led, you know, kind of distribution for AI products. Like, you know, the, the most notable ones I can think of are, like, Palantir with their AIP deployments. Like, that's obviously very sales-led. You're not PLGing into a, uh, Palantir deployment. But even, you know, like, companies like Harvey and, and, uh, and so on, like, you know, they're doing very well and, like, it's primarily, from what I understand, like, sales-led. Right? You're not self-serving into a Harvey instance at a law firm.

    26. LR

      Mm-hmm.

    27. HL

      Um, and yet, like, to me the, the best way to get AI value out there is experientially. Right? And so, like, you can kind of get that in a sales motion. You can, like, you know, show a demo. Maybe you can get... Do a POC. But, like, it's so much more powerful when you just...... open up the doors and say, "Anyone who wants to come and sign up and try out this product, like, can," right? And, uh, I think, you know, it's- to me it's like, you know, kind of a real proof point that, like, ChatGPT is arguably, like, the most successful, uh, you know, kind of PLG product of all time, right? Just in terms of, like, sheer scale of users. Like, they announced 700 million, like, MA- uh, is it MAUs or we- uh, I think it's-

    28. LR

      Weekly active users.

    29. HL

      ... weeks.

    30. LR

      10% of humans on Earth use it weekly.

  7. 32:5734:48

    The emergence of new form factors in AI models

    1. HL

      by the agent.

    2. LR

      Let me follow that thread. So if you go to airtable.com today, it looks- it looks like basically all the other AI app building sites. Now it's just tell me what you want to build. Thoughts on that as just, like, a thing everyone's starting to do. Is there... What do you think comes next? Is this... Does- is it working well?

    3. HL

      There's clearly a- an incredible magic to, uh, vibe coding and- and app building with AI, right? And, um, this is actually, you know, like, a- a prime illustration, uh, in my view, of- of, uh, the- the concept we talked about a second ago, which is, you know, as capabilities of these underlying models evolve, the form factor and the product UX also needs to evolve with it, right? And so, like, the earliest models, like, the kind of original ChatGPT, like, GPT-3.5, uh, you know, kind of era models were- were not nearly as smart as the current models, right? Um, and so, like, you couldn't really ask it to one-shot a more complicated chunk of code or- or certainly not, like, a full stack app and expect it to work. Um, and so the right form factor for leveraging those models in a software creation context was GitHub Copilot, right? It's like autocomplete a few lines of code at a time, right? But, you know, you- you couldn't chat to it and tell it, like, "Build me this entire app from scratch," right? And I think that, like, as the models got better and better, you saw that the new form factors emerge. Like, I think Cursor did a great job of, like, being an early pioneer of this more agentic way of leveraging the models to- to do more complex things and generate more- you know, kind of larger chunks of code. And now with Composer, you can literally just go into Cursor and build an app from scratch, like, build me a 3D shooter game from scratch and just watch it go (laughs) and, like, create all the files and re- you know, fill out each file and then, like, you know, like, the thing actually runs some of the time. And so, to me, this is, you know, where the world is going. The models are clearly getting smarter. And, you know, if

  8. 34:4840:20

    Airtable’s vision and philosophy

    1. HL

      you think about the original vision of Airtable, it was always about democratizing software creation. Like, we just strongly believed that, you know, the number of people who use apps, uh, far outweighs the number of people who can actually, like, build their own or- or manipulate apps and, like, harness, like, custom software to their advantage.

    2. LR

      That sounds very familiar.

    3. HL

      And-

    4. LR

      Very familiar these days, there.

    5. HL

      Yeah. Exactly. And- and so, like, I think this is, like, it's a different means to the same end. And so, like, it's almost like we have to lean into this because if we started Airtable today, like, this is what we would be all in on. Now, I think that the advantage the- that we have, um... And, like, I do think you have to be realistic to yourself, um, especially as- as a- uh, as a company that predates Gen AI and now has to kind of find your new footing in the AI landscape. Like, you can't fool yourself and just say like, "Okay, I'm gonna throw in some AI stuff on the landing- on the marketing site, you know, put in a couple of AI features and call it a day." Like, I think you actually have to take a clean slate, uh, approach to saying, like, "How would our mission best be expressed?" Like, if you were literally founding a new company from scratch with the same mission, how would you execute on that mission using a fully AI native approach, right? And, like, I... And- and- and then by the way, like, do you have useful building blocks, um, you know, that you can leverage from your existing product, uh, and your existing business? Or are you literally worse off having this legacy asset versus starting something from scratch? And, like, I don't think the answer is always yes or no. I think it just depends on the product. And if you can't really introspect and say like, "Look, I think I'm better off doing this with the pieces that I have from my existing business and product," then I think you should sell, right? Like, you should find a buyer for that company and then go and- and, like, you know, if you really care about this mission, like, go and start the next incarnation of it, right? In my case, like, I, I, I really, you know, thought about this and like really feel strongly that the building blocks that we have, like these no-code components, actually do allow us to execute better on this vision than if I had to start from scratch, right? 
Meaning, like the problem with vibe coding, especially for building business apps. So, I should clarify that like, you know, we wanna democratize software creation, but specifically, we are focused on business apps, right? We're not trying to be the platform where you create like a cool viral consumer game. This is for, like, your CRM, right? Or if you wanna build an inventory management system as a small restaurant, or a l- a lawyer trying to build like a case management system, like, that's what we've always been, been, uh, focused on. And I think in this, uh, AI native world, clearly you should be able to generate those apps agentically, and yet if you have an agent that has to generate every single bit of that app from scratch, from code, it's gonna be very unreliable, there's gonna be bugs, there's gonna be data and security issues, and then you're also gonna have a context collapse as it just cannot manage all of the code that it's written basically as the app gets more and more complex, right? And what we actually have are basically these primitives that the agent can manipulate and use without having to, like, literally write the code from scratch to represent, like, here's a beautiful CRUD interface on top of the data layer, right? Like ours is real time and collaborative and really rich and has collaboration on it, and by the way, here's all these other view types and a layout engine for a custom interface, you know, uh, a layout, right? Or automations and business logic. And so it's almost like, um, in programming terms, like the Airtable pieces in our Lego kit today can be used by this agent as almost like a more expressive DSL, like a domain-specific language, to build business apps, instead of literally having to write everything down to like the SQL and HTML and JavaScript to build every part of that app from scratch. 
And so, like if we can combine the best of both worlds, like we have these very reliable high quality Lego pieces, now an agent can go and like assemble them for you instead of you just using the GUI to do that, and by the way, if you do wanna fall back to the GUI, there's a really great, you know, kind of way for the non-technical user to still understand and participate in what's going on. Whereas if you're not technical, you can't inspect the code underneath a v0 or Lovable or Replit app, right? Like, it's just kind of opaque to you. And if you can't reprompt it to get what you want, you're kind of stuck. Um, you know, this is much more akin to like a developer using Cursor can generate lots of code but then can still drop back to the IDE to edit and, and manipulate it to the final, you know, kind of production-ready state. So like that's, that's kind of the, the play that we're making. And if I didn't fully and truly believe like, you know, we have a better shot at doing it with our existing product, like, I wouldn't be running this company in its form today.
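The "primitives as a DSL" idea can be made concrete with a small sketch. This is an invented, simplified illustration: none of the field names or types here are Airtable's actual schema, and `validate` is a made-up helper. The point is that the agent emits a declarative spec over pre-built blocks rather than raw SQL/HTML/JavaScript, so checking its output reduces to checking that references line up.

```python
# Invented, simplified sketch of an agent assembling pre-built primitives
# (tables, views, automations) into a declarative app spec, instead of
# generating every line of code from scratch. Not Airtable's real schema.

app_spec = {
    "tables": [
        {"name": "Deals",
         "fields": [{"name": "Company", "type": "text"},
                    {"name": "Stage", "type": "singleSelect",
                     "options": ["Lead", "Qualified", "Won"]}]},
    ],
    "views": [
        {"table": "Deals", "type": "kanban", "groupBy": "Stage"},
    ],
    "automations": [
        {"trigger": {"table": "Deals", "when": "Stage == 'Won'"},
         "action": {"type": "notify", "channel": "#sales"}},
    ],
}

def validate(spec):
    """Because each primitive's behavior is guaranteed by the platform,
    checking the agent's output reduces to checking that cross-references
    between blocks resolve -- far easier than auditing generated code."""
    tables = {t["name"] for t in spec["tables"]}
    fields = {(t["name"], f["name"])
              for t in spec["tables"] for f in t["fields"]}
    for view in spec["views"]:
        assert view["table"] in tables
        assert (view["table"], view["groupBy"]) in fields
    for auto in spec["automations"]:
        assert auto["trigger"]["table"] in tables
    return True
```

A non-technical user can read and edit a spec like this in a GUI, which is the fallback path described above; an opaque blob of generated code offers no equivalent.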

    6. LR

      I'm talking to a lot of founders that are going through the journey you're going on, which is, we've had a business for a decade, AI emerged and wow, we gotta figure out something that works that could work even better. And so I'm trying to pull out the threads that are consistently working across these journeys 'cause I think a lot of companies are trying to figure this out. So one that you just touched on is just if you were to start today, what would you do?

    7. HL

      Yeah.

    8. LR

      Like, what would that business be?

    9. HL

      Yeah.

    10. LR

      Plus how can, how can, do we have an un- unfair advantage with the thing we've done in the past?

    11. HL

      Yeah. Yeah.

    12. LR

      That feels like an important ingredient. And then the other, circling back to stuff you've shared already, there's just, uh, just like creating a sense of urgency and pace-

    13. HL

      Mm.

    14. LR

      ... and getting people, uh, to understand this is how things move in AI...

    15. HL

      Yeah.

    16. LR

      ... and we need to create this fast thinking team. I love that metaphor and framing. And then there's the point you made about just talking to AI regularly as the founder feels like an important element, just like to truly be this IC CEO talking to AI...

  9. 40:2046:50

    Empowering teams with AI tools

    1. LR

    2. HL

      Yeah.

    3. LR

      ... working with AI regularly. Just on that note a little bit more, what, just to give people a sense of what this looks like day to day. So you're talking to Omni all day trying to understand and flex the power of what you can do and iterate on it. Is there anything else you're doing day to day that helps you figure out what to do for the business?

    4. HL

      One, I try to use as many different AI products, including not Airtable, right? Like, as I can. Um, and both literally for the novelty factor and just like, you know, some new cool demo comes out, like, uh, Runway released their like immersive world, uh, you know, kind of engine, right? And, um, and so like, I'm gonna go try, try it out, right? Like when, uh, Sesame AI put out their like cool like kind of interactive voice, voice chat, um, you know, uh, uh, you know, demo, like I tried that out because like even though we don't have a direct and near term, like, um, you know, kind of, uh, need for like really, um, realistic and, and interruptible like kind of voice mode, uh, where it's not as core to our capabilities, like, I just wanna understand and, and like get a feel for everything that's out there, right? And I try to invent little like kind of almost like side projects of my own to have a, a real kind of reason to use these products. Like, you know, oh cool, what if I were to take like a, what, what if I were to like try to create like a funny little like, um, you know, like a, a short, a funny video short, right? Using a combination of like HeyGen avatars with like a script, like a, a comical script generated by AI, right? And maybe it'll be on like an interesting topic. So I'll do like deep research on the topic with ChatGPT and pull together the results, have it compose, like, you know, kind of a, a little-

    5. LR

      Did you actually do this? Is there something made like this?

    6. HL

      Yeah. I mean like that, that's literally an example...

    7. LR

      Okay.

    8. HL

      ... of something like just, you know, a fun weekend project and like to be honest, like, these things only take you like an hour, right? If you're, if you become kind of pre- pretty proficient with using the products, like they're all so easy to use. Like you can literally do the deep research thing, you know, kick off a query, make a coffee, come back in 20 minutes, okay, like let me, let me prompt it to like generate me some dialogue. Uh, it's a little bit like what NotebookLM does for you outta the box, but sometimes I like to just like do it myself, right? And then okay, let me take the script and like cut it up and like, you know, turn it into a HeyGen avatar and then download the video and then like play it, right? Like, and just for fun, right? I'm not like trying to make, make that into an actual like, you know, kind of YouTube like video business. But, but I think like-... coming up with, like, these different, like, fun weekend projects is a really useful construct to, like, force myself to actually try these products in a more than just, like, a twitch-click way. And you know, what, what it gives me is like, A, like, it's not just understanding the models, which is also very, very important, right? Like, GPT-5 came out yesterday and, like, playing around with it a bunch, uh, just on like a variety of different, like, personal use cases, um, you know, but like, there's a difference between just understanding the model, but then also understanding, like, the product form factors in which they can be placed, right? Meaning like, you know, when you apply the model in a more structured way, right? 
Um, you know, when you apply the model with different tool calling than maybe what ChatGPT has in its kind of, like, out of the box form, you know, y- when you apply it with, like, you know, kind of a more agentic workflow, again, that might be different from, like, what ChatGPT gives you out of the box, like, that's when you kind of learn, like, you know, you, you really get to inspire yourself on, like, what are the product form factors that these new models can take. So like, and, and plus, by the way, like, I find it to be really fun. Like, there is a, to me, like, a delight and entertainment value to just using AI, period. Because like, A, it's, it's, it's not, it's not, like, perfectly predictable. So I think the element of, like, you're not quite sure what you're gonna get, (laughs) you know? It's like a box of chocolates, uh, you know? Uh, and, and B, like, it always blows my mind just to think about like, wow, like, you know, five years ago, we didn't have any of this stuff, right? Like, you know, AI was like, okay, like, it's like we can do predictive analytics. It's like, you know, there, there's some, like, basically very advanced, you know, kind of regressions that we could run with, with AI, but like, it looked nothing like this, right? In its cr- in, in its current form. And it's just, like, actually super fun, in my opinion, to get to play around with all the different types of products that, that, uh, that come out. So I think that is a big part of it, um, you know, because on the point about, like, the pace of the world moving so much faster in AI than any other landscape, it, it, like, you know, in SaaS, y- you know, in the mature SaaS era, like, it was important to study your competition, right? Like, if you were building a SaaS company, you'd be crazy not to follow Salesforce, right, um, every, like, year and see what the, you know, the major releases they're putting out are or ServiceNow or, you know, so on. 
Like, this is the equivalent of that, but, like there's major new releases and products and, and so on, like, every week, right? Not like every year. And so I just think you have to stay abreast of all, uh, of it all, and combining this with our point earlier of, like, a lot of this has to be experienced, not just, like, read. Like, you can't just read, like, the write-up on TechCrunch or, or, you know, even a tweet about, like, a new capability. Like, you kind of have to try it to really get a sense of, like, what it is.
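The distinction Howie draws between a bare model and the "product form factors" built around it, such as tool calling and agentic workflows, can be sketched generically. The model here is a stub and the tool name is invented; real providers each define their own message and tool-call formats, so this shows only the shape of the loop.

```python
# Generic sketch of a tool-calling agent loop -- one of the "product
# form factors" layered on top of a model. The model is stubbed out and
# the tool is invented; real providers (OpenAI, Anthropic, etc.) each
# define their own request/response shapes.

TOOLS = {"deep_research": lambda topic: f"notes about {topic}"}

def stub_model(messages):
    """Stand-in for an LLM call: request the research tool the first
    time, then answer once the tool result is in the transcript."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "deep_research",
                              "args": {"topic": "AI form factors"}}}
    return {"content": "Here's a summary based on the research notes."}

def run_agent(user_prompt):
    """The agent loop: call the model, execute any tool it asks for,
    append the result, and repeat until the model answers in plain text."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = stub_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})
```

The same underlying model behaves very differently depending on which tools and loop structure surround it, which is the form-factor point being made above.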

    9. LR

      Today's episode is brought to you by Anthropic, the team behind Claude. I use Claude at least 10 times a day. I use it for researching my podcast guests, for brainstorming title ideas for both my podcast and my newsletter, for getting feedback on my writing, and all kinds of stuff. Just last week, I was preparing for an interview with a very fancy guest, and I had Claude tell me what are all the questions that other podcast hosts have asked this guest so that I don't ask them these questions. How much time do you spend every week trying to synthesize all of your user research insights, support tickets, sales calls, experiment results, and competitive intel? Claude can handle incredibly complex, multi-step work. You can throw a 100-page strategy document at it and ask it for insights, or you can dump all your user research and ask it to find patterns. With Claude 4 and the new integrations, including Claude 4 Opus, the world's best coding model, you get voice conversations, advanced research capabilities, direct Google Workspace integration, and now MCP connections to your custom tools and data sources. Claude just becomes part of your workflow. If you wanna try it out, get started at claude.ai/lenny, and using this link, you get an incredible 50% off your first three months of the Pro plan. That's claude.ai/lenny.

  10. 46:5050:55

    Encouraging experimentation and play

    1. LR

      For people that work for you across Airtable, say, the product team, PMs, maybe engineers, designers, how have you adjusted what you expect of them to help them be successful in this new world?

    2. HL

      One is, you know, really, really, really stressing this idea of like, go play with this stuff. And I mean, when I say play, I really mean play like in, in the psychological sense of, like, you know, it's, there's a difference when, like, you go in and you're kind of just trying to check the box and, like, get a job done, right? There's a difference when, like, you come in with a curiosity and, like, you know, you're kind of, like, exploring, right? And it's both more fun and energizing, but also I think, like, you learn more through that, right? And so, like, I've really tried to stress the value of play with these AI products, and I kind of, you know, try to lead by example by, like, literally going and, like, sharing out links or, or, uh, screenshots, like, you know, of the things that I'm doing in these various products. So like, you know, as an example, you know, like, I, uh, will go into, um, you know, like, one, one of the, um, uh, prototyping tools and show like, hey, like, you know, uh, I built a marketing landing page for, you know, this new, uh, capability we're launching. I kind of created, like, a landing page for it in Replit, let's say, and now I'm sharing that link. Instead of, you know, what typically, like, we would have done in the past is like, okay, we're gonna write a doc about it and then share the doc. I'm just gonna show you, like, an actual landing page with, like, visuals and everything in there, right? 
Um, or, like, I'll share, like, uh, you know, the actual link to my deep research reports, or, like, instead of me writing a perfect memo on a topic, like, I'll actually just, like, prompt my way into getting, like, a chat thread, uh, or a chat output that basically covers all the content that I care about and maybe even, like, ask it to, like, okay, summarize this all into, like, a final, you know, (laughs) kind of, like, memo output, and then intentionally share that rather than hide the fact that, like, I'm using AI in this way, and here's literally how I'm prompting it so you can follow along as well. You know, but really trying to encourage everyone to, like, go and just,... play with these products. And I've even said, "Look, if anyone wants to just literally block out a day, or frankly even a week, and, like, like, um, have the ultimate, uh, excuse." Like, you can use, like, you know, you c- you could say that I told you to do it, right? Like, if you wanna cancel all your meetings for like a day or for an entire week and just go play around with every product, AI product that you can find that you think could be relevant to Airtable, go do it. Like, period. Um, so I think that's the most important thing is like this, this play, this ex- experimentation. I think there's also a lot of other, you know, kind of shifts in how we execute, uh, prototypes over decks. Um, you know, like I, I wanna see, like, actual interactive demos, because like, again like, it's hard to, to, you know, in a deck or in a PRD you could say like, "Okay, well, we're gonna make Omni really good at handling this kind of app building." Okay, those are just words. The real proof is in the pudding of like, okay, let me try it out on a few like realistic prompts that I can imagine. And in a demo, in a real prototype, you can, like instantly, you know, try it out on realistic rather than golden-pathy scenarios, um, and see how it feels too. Like is it... Does it feel too slow? 
Like, do we need to expose more of the, the reasoning or steps, you know, kind of, um, you know, that, that are happening behind the scenes? Create a progress bar or something like that. But like, it's really hard to get that feel of the product with anything but like a functional prototype that really does, in an open-ended way, you know, like use the, the AI to, to do whatever, uh, you know, you put in. So, you know, I think it's, it's more like a, um, like experimentation playground, it feels like. Uh, how we need to execute versus I think in the past it sometimes felt like a more, like, deterministic resourcing and, and like kind of timelines view of execution, right? Like, we're gonna put this many people on this problem and this is the eight-week timeline to this milestone and then we're gonna ship in a quarter from now. And like I think now, the whole thing is just like a lot more experimentation and iteration driven.

  11. 50:551:03:35

    Cross-functional skills in product teams

    1. HL

    2. LR

      Of the different functions on a product team, PM, engineering, design, who has had the most success being more productive with these tools? And how do you think this will impact each of these three functions over time?

    3. HL

      What I found is that it really does become more about individual attitude and maybe some like, um, you know, polymathism. Like i- you know, there's a strong, uh, advantage to any of those three roles who can kind of cross over into the other two, right? Like kind of the, the hybrid unicorn types, right? So if you're a designer who can be just technical enough to kind of be dangerous and, and understand a little bit of like how these models work and, you know, um, like how does tool calling work and, uh, and all of this stuff. Like, then you can actually design a concept or even prototype a concept in- including in these prototyping tools, um, that, that's much more interesting and maybe realistic than if you're just stuck in kind of the flat, like, let me put something in a static design, right? Um, concept, right? 'cause, uh, I think, you know, designs have to be more interactive. Like, the, the whole... The, the, the value of the product, um, and the product functionality is in the interaction of it, right? Like, you know, think about the design of ChatGPT. Again, it's like, you know, it's the most basic design you could possibly imagine. The real design actually is happening underneath the hood in how it responds to different queries, right? And what happens after you fire off a prompt, right? Um, so, you know, I think like, I found that there are people within each of these functions, like there are engineers who are very good at thinking about product and experience. And like, you know, kind of c- can go and prototype out like the whole thing. There are designers who can kind of do, do the same. Even if they can't literally code, they can prototype something out like literally using a prototyping tool. And I think that's where like, AI tooling is also giving more advantage to people who can think in this way by equipping them with an alternative to actually having to go through the long hoops of learning CS, right? And the PMs as well. 
I think like there are some PMs who are like really getting into the technical details and studying up on like, you know, how does this stuff work? And actually getting hands-on rather than seeing their role as, you know, kind of writing documents, writing PRDs.

    4. LR

      Do you see one of these roles, uh, I don't know, being more in trouble than others? Just like you need fewer of thes- these people in the future potentially?

    5. HL

      I think overall you can get more done with fewer people. And that's not to say like, you know, we wanna go and like s- like make the team smaller. But rather like, like the really cool thing for, for, uh, us and I think a lot of other companies is, it's not like you have a finite set of things you need to do and execute on from a product standpoint and okay, like now I can do that with a tenth of the people. I mean, you could do that in a lot of cases, but like for us maybe it's also because we're a very meta product, right? Like, we are the app platform with which you can build now any AI app with AI, right? The apps (laughs) themselves leverage AI capabilities at runtime, whether it's to generate imagery for a creative production workflow or, you know, kind of leveraging deep research, um, or AI based... like, um, you know, kind of crawling of the web to search for companies that match a certain criteria for your deal flow app, right? Or something like that. Like, we can effectively leverage all of these other AI capabilities in this, this kind of like app platform because by definition we're enabling our customers to build apps that have this wide range of AI capabilities. But because of that it's like we have a, you know, kind of almost infinite like set of possible AI capabilities that we could execute on, right? And I'm always telling the team like, "Look, like, the great news is like we have... It's like we have all these fruit trees and like there's so many crazy low-hanging fruit," right? Like, and you got literally like massive watermelons like literally sitting on the ground, right? (laughs) And all you have to do is like kinda walk over 20 feet and pick it up instead of having to climb the really tall coconut tree to grab like a hard coconut from like 50 feet up. And so, like...There's so many watermelons on the ground. Just go out and, like, start finding the biggest ones and attacking those, right? 
And, like, um, and what that means is that, like, if we can build this culture, and I do think, like, it's a learnable way of operating, like, I, I, I really like to believe in, like, the, like, the growth potential of, like, any human, right? Like, and, and, uh, any individual. Like, I think if you really have a growth mindset, and that's why one of our, like, most important core values is, is growth mindset, right? Like, if you really have that growth mindset, I think, like, especially if you're willing to put in the nights and weekends, hours, or in my case, like, I'm literally telling people, like, "Take a full day off, take a full week off and learn this stuff," like, you can, you know, become more fluent, uh, in this way. And I think then what we get is, like, a team that can just go and work on more things in a much more leveraged and fast way, right? So, I like to think, like, you know, people who are willing to jump on the train are just gonna become more and more effective, and it's not like, oh, like, as a PM, my role is becoming inc- in- entirely irrelevant, right? Like, no, it means that as a PM you need to start looking more like a hybrid PM prototyper who has some good design sensibilities. And by the way, like, I think some of the best eng PM and design cultures respectively over the past even few decades have always been multidisciplinary in nature, right? Like, the original PM spec at Google required the PMs to actually be somewhat technical so they could, they could understand the engineering, you know, kind of, um, limitations of, of, like, the product, you know, designs they wanted to make, and they had to be kind of design-y, right? Like, um, I remember my, my co-founder Andrew when he was in the APM program was, like, always reading books about, like, design, like, even down to, like, visual design and color theory and that kind of thing, right? 
      Um, and so I think it's just a reminder that, you know, like, designers as well, like, the, you know, some of the best designers, if you're a designer at Apple, like, you know, including hardware designers, like, you have to understand some of the technical capabilities of how this stuff works, right? Um, and if you're an engineer, like, I think some of the best engineers, and maybe Stripe always had a very good engineering culture of engineers who could think about the product and business requirements. In fact, like, you know, on any given product group, uh, at Stripe, my understanding is that, like, you know, the DRI isn't always the PM, right? Um, like, as is traditionally the case in, in kind of that, that triangle. It's like, you know, sometimes it's actually the engineer who's taking the product lead and saying, like, "This is what we need to build."

    6. LR

      So, what I'm hearing is essentially if you are... Like, the trend across product engineering design is each of those functions needs to get good at one of the other functions at least.

    7. HL

      Yeah.

    8. LR

      Ideally you can do 'em all, but if, if you can just do one additional, so a PM becomes better at design, an engineer becomes better at product management.

    9. HL

      I think, well, I would actually go further and say, like, I think you-

    10. LR

      Mm-hmm.

    11. HL

      ... need to get, like, decently good at all three.

    12. LR

      Mm-hmm.

    13. HL

      Like, there's this, a minimum baseline of, like, if you're any one of those roles, you need to be, like, minimally good at the other two, and then you can go deeper into your own kind of specialty, right? Like, you know, you could be a designer who's really good at thinking about UX and interaction design, and then just, like, good enough to be dangerous on thinking about, like, what's technically possible and, like, what is the product, you know, kind of, you know, kind of story around this, this, uh, feature.

    14. LR

      I love that. And to do that, one piece of advice that comes up again and again in what you're-

    15. HL

      Yeah.

    16. LR

      ... what you've been describing is using the, use the tools constantly to see what's possible, and that will teach you a lot of these things.

    17. HL

      I think, well, using the tools gives you exposure to what's possible, right? It's kind of like if you wanted to be a great industrial designer, and let's say, like, I mean, the chair is kind of the ultimate, like, hello world of, like, industrial design, right? It's like the, the, like, canonical design object. Like, you wouldn't just sit there in a vacuum and with no familiarity with, like, the materials that you can use, plywood, steel, whatever, or, like, existing form factors of chairs, try and invent the world's best chair in a vacuum, right? Like, you should go and first do a study of, like, all of the best chairs out there today. Like, go look at an Eames chair, sit in it, like, try to examine it to kind of reverse engineer how it was made, right? And, like, you know, and, and, um, just look at the prior art for that type of product. Like, that's how I see the "go out and play with these products" advice. And also, I think, like, actually going and designing or implementing or executing is the best practice. So, like, you can't just only go and look at other people's chairs. Like, eventually you have to go and, like, actually try building your own, and then try building another one and another one and another one. And so I think that's where, like, you know, when I think about how I honed my own product UX sensibilities, like, I never, like, I mean, you know, and at that time, like, that I was in, in, uh, school and, and kind of learning about this stuff, like, there wasn't really any good curriculum for UX, right? It's not like there were, like, great, you know, college classes to learn product UX. I mean, even CS was, like, very academic in nature at that time. It wasn't applied software engineering, like build an app or whatever. Uh, maybe now at, like, some of the schools like Stanford, MIT, they have, like, actual UX/UI-type courses, but it's, it's still a rarity for most people to have access to that. 
And so, like, the way I learned, like, all of my product sensibilities was just, like, trial and error and, like, also using and studying other products, right? And then going and trying to build, like, my own weekend project ideas, right? Oh, I want to build, like, a Yelp-style app with a map view and then also a list view, and I want it so that when you, when you pan around in the map for it to automatically update the list view, and maybe there's some UX improvements I can make on top of that, but I can also, like, test my technical skills to, to figure out, like, which parts of this are hard to implement and, like, how do you make it work and what are some of the design changes or affordances that you can use to kind of, like, map to, like, the technical possibilities.

    18. LR

      To do that, I loved your piece of advice, which I forgot to double down on, which I also find really powerful. The best tip there is find something to actually build that is useful to you and fun. Like, pick a project that's like, "Oh, yeah. This would be fun to do." Have, like, a problem you're solving that forces you to actually do this thing.

    19. HL

      For sure. And look, I think that can be, like, night and weekend projects. It can also be, like, the daytime job projects, right? I mean, like, I am basically telling our teams on the AI platform, um, uh, group especially, like, look, like, you know, in that, that low-hanging fruit metaphor, it's like, I'm not being prescriptive with you on, like, which watermelons you should pick. But, like, you should go and, like... And, and we do have different, like, pods within that group, but one of them, for instance, is, uh, what we call the field agents team, and they're responsible for the agents that work within your app. So, this is not the agent that builds your app, but these agents that run on a customer's behalf to do, like, web research on your customers, or they can, you know, go and analyze a document. Um, and like, in the future, maybe do things like actually generate a, like, prototype, like, of, of a, uh, of a feature, you know, from a PRD or from, like, a feature idea. Uh, and, you know, I'm telling them, like, "Look, like, there's an almost infinite number of things you could... like, superpowers you can give these field agents. I'm not gonna tell you which specifically to do." Now, you can ask me to weigh in for sure, but, like, you should go and, like, you know, just experiment and prototype, like, a few different versions of it, like, uh, a few different directions we could go. Like, what if you prototyped what it would look like to have a deep research implementation in field agents, so that, like, for any given row of data, let's say in your case it's podcast guests, you can just click a button or click a button en masse across the entire, like, every speaker you have lined up to do deep research, like, powered by ChatGPT's own deep research on each of the speakers and have them all laid out side by side in this table, right? Like, go prototype that and see how it, like, you know, see how it feels and looks. 
And so, I think some of this stuff can also be, like, in your daytime job, especially if that daytime job is literally to go and build AI functionality.
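The per-row research pattern Howie describes (one research call per record, with results laid out side by side in a table) can be sketched in a few lines of Python. This is a hypothetical illustration, not Airtable's implementation; `run_deep_research` is a placeholder you would back with a real research API.

```python
# Hypothetical sketch of the "field agent" pattern: run one research
# call per row of a table and attach the result as a new column.

def run_deep_research(query: str) -> str:
    """Placeholder: swap in a real deep-research API call here."""
    return f"[research summary for: {query}]"

def research_rows(rows, query_field, output_field="research"):
    """For each row (a dict), research the value in `query_field`
    and return a copy of the row with the result attached."""
    enriched = []
    for row in rows:
        result = run_deep_research(row[query_field])
        enriched.append({**row, output_field: result})
    return enriched

# Example: enrich a table of podcast guests, one research call each.
guests = [{"name": "Howie Liu"}, {"name": "Lenny Rachitsky"}]
for row in research_rows(guests, query_field="name"):
    print(row["name"], "->", row["research"])
```

In a real app the loop would likely run the calls concurrently and write each result back to the record as it completes, but the shape of the workflow is the same.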

    20. LR

      I actually tried to do exactly that. The problem I ran into, I wonder if it's, uh, changed, is there's no, uh, API for, uh, op- for ChatGPT deep research yet, as far as I can tell.

    21. HL

      There is now. There is now.

    22. LR

      There is. There we go.

    23. HL

      So, it actually ends up being, uh... And I think they only recently exposed it. It ends up being, like-

    24. LR

      Okay.

    25. HL

      ... something on the order of, like, a dollar plus per research call, which-

    26. LR

      What a deal.

    27. HL

      Like, I mean, to... Again, exactly. I mean, some people would say, "Oh my God, that's so expensive," and you rack up 50 of those, you've, you've spent $50 in a month. I think it's like, well, it just saved you, like, hours of research by a human.

    28. LR

      Not only that, I, I actually have a researcher that I pay to, uh, give me background on guests.

    29. HL

      Yeah.

    30. LR

      That was like 400 or 500 bucks.
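For what it's worth, the back-of-envelope math in this exchange holds up. A minimal sketch, using only the figures cited here (roughly a dollar per deep-research call, and $400 to $500 for a human-researched guest brief, taking the midpoint), neither of which is authoritative pricing:

```python
# Cost comparison built only from the numbers mentioned in this exchange.
COST_PER_CALL = 1.00   # "~a dollar plus" per deep-research API call
HUMAN_FEE = 450.00     # midpoint of the $400-500 researcher fee cited

def monthly_api_cost(calls: int, per_call: float = COST_PER_CALL) -> float:
    """Total spend for a month of research calls."""
    return calls * per_call

print(f"50 calls in a month: ${monthly_api_cost(50):.2f}")
print(f"Calls that cost as much as one human brief: {HUMAN_FEE / COST_PER_CALL:.0f}")
```

At these rates, one human-priced brief buys on the order of hundreds of API research calls, which is the point Howie is making.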

  12. 1:03:351:08:06

    The importance of evals and open-ended testing

    1. LR

    2. HL

      (laughs)

    3. LR

      Oh, man. Okay. There's one more, uh, skill I wanted to talk about real quick. This comes up a lot in these conversations, is evals.

    4. HL

      Okay.

    5. LR

      The power of getting good at evals. I know that's something-

    6. HL

      Yeah.

    7. LR

      ... you value highly. Talk about just why you think this is something people need to get good at.

    8. HL

      Yeah. I mean, um... And I listened to your, your, uh, your episodes with, uh, with Kevin-

    9. LR

      Hmm.

    10. HL

      ... and, and Mike, uh, who talked about this. I think it's, like-

    11. LR

      Yep.

    12. HL

      ... interesting that, like, um, you know, like, both heads of OpenAI and Anthropic, uh, you know, have converged on, on this point. I mean, look, I think, um... I, I would add, like, a, a slightly different or additive take though, which is like, I think, um, for a completely novel product experience or form factor, you should actually not start with evals, and you should start with vibes, right? Meaning, like, you know, it, you, you need to go and just kind of test in a much more open-ended way. Like, like, does this even work, like, you know, in, in kind of, like, a broad sense? So, like, as an example, for our custom code generation capability, like, instead of defining evals that get repeatably tested, you know, as you vary, like, the prompt or the model or, like, the, the agentic workflow used to generate the- these outputs, and you have to define, like, you know, what does good look like, right? By definition for the eval. Like, I would first start with a much more open-ended and, like, ad hoc style of, like, just throw stuff against the wall, like, try different prompts and see how well it does. And to me, evals are more useful, A, once you've converged on the kind of, like, basic scaffold of the form factor and you kinda know what are the use cases you want it to work well for and what you want to test against it. Whereas in the early days, especially if, if your product market fit finding either for an entirely new company or for, like, a new, uh... pretty dramatically new or bold new capability that doesn't really have, like... it's not an incremental improvement on something that exists in Airtable today, like, I think you have to just be a little bit more creative initially in, like, throwing stuff at it, seeing what works to understand, okay, like, let's use an example. 
      You know, we- we're implementing this new capability that can use, uh, basically a long-running AI crawler agent that goes and researches the web, you know, for a specific type of object or entity, right? So, it's a little bit different from deep research. It's similar to deep research, but what it actually does is instead of outputting, like, a, you know, kind of a report, it's actually going and compiling a list of things. The things could be companies or people, um, or, or anything else, right? Like, find me every Marvel movie, right? Ever made. Find me every, like, um, you know, kind of DC Comics, like, uh, spinoff, right? Like, a series, right? Um, literally anything. And you know, you have to go in at first, like, just try out a bunch of random, like... you know, use your own brain to think of, like, what are all the, like... what's the range of use cases I can test this against, right? And then you get back some results and you're like, "Okay. Well, like, it's clear that, like, where it does really well are these types of searches," right? Like, people or companies with this kind of parameter. And I think to me, like, evals are useful once you have, like, a sense of, like, what is that cluster of useful use cases? You can start then more, um, d- like, um, uh, programmatically, like, measuring... the changes that you're making to improve, like, the, the, uh, the, the, the output for that, right? Um, but, like, by that point, you've probably already scoped the product. And maybe the way we would merchandise it in the, the, uh, in Airtable is not, like, a completely open-ended capability, but, like, "Hey, like, here's a specific capability that can research one of these X number of, uh, entity types, including people and companies. And here's even, like, the filter conditions or criteria that are more explicit that you can define to give it the prompting to, to search for that thing," right? 
But I kind of think it's, it's more useful as a way to iterate your way to improvement, um, and you can start, you know, really testing stuff, like, empirically, right? You can AB test, especially if you have the scale of a really large product like Anthropic or OpenAI. You can, like, just test everything and, and see, like, "Oh, this model actually performs better than this one. This prompt performs better than this one." Um, but I think early on, like, you don't have that luxury, and you're in a much more open-ended discovery process.
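A minimal version of the eval harness Howie is gesturing at, for the stage after open-ended "vibes" testing has surfaced a cluster of use cases, might look like this. The prompts and graders are invented examples loosely drawn from the conversation; `generate` is a stub standing in for whatever model, prompt, or agent pipeline is under test.

```python
# Minimal eval-harness sketch: once ad hoc testing has identified the
# use cases that matter, pin them down as repeatable, gradeable checks.

def generate(prompt: str) -> str:
    """Stub pipeline; replace with a real model/agent call."""
    return f"list of results for: {prompt}"

EVAL_CASES = [
    # (prompt, grader) -- each grader encodes "what good looks like"
    ("find every Marvel movie ever made", lambda out: "results" in out),
    ("find companies matching my deal-flow criteria", lambda out: len(out) > 0),
]

def run_evals(pipeline, cases):
    """Run every case through the pipeline and return per-case
    pass/fail results plus an overall pass rate."""
    results = {prompt: grader(pipeline(prompt)) for prompt, grader in cases}
    pass_rate = sum(results.values()) / len(results)
    return results, pass_rate

results, pass_rate = run_evals(generate, EVAL_CASES)
print(f"pass rate: {pass_rate:.0%}")
```

Re-running the same cases after swapping the model, prompt, or agentic workflow is what turns open-ended vibes into a repeatable comparison.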

    13. LR

      That is very wise. Evals can constrain you too early. I think about just the double diamond, I don't know, IDEO kind of framework of, like, be conver- uh, divergent first and then convergent, and maybe that's where you want to start.

    14. HL

      Yeah. It is, yeah, uh, exactly. Uh, I hadn't heard that before, but, um, that, that, uh, completely resonates.

  13. 1:08:061:12:43

    Key strategies for AI-driven success

    1. HL

    2. LR

      Okay. Let me try to reflect back some of the advice I've been hearing about how to shift a company to be successful in this new world, and let me see if I'm missing anything that you think is really important. So, one is, there's this sense of just, like, reset the expectations on pace and urgency.

    3. HL

      Mm-hmm.

    4. LR

      And help people understand, in AI, things move incredibly fast.

    5. HL

      Yeah.

    6. LR

      This is how we need to operate. And then there's also a piece of get stuff out so that you can learn how people use it and what it's capable of, versus polishing it endlessly.

    7. HL

      Mm-hmm.

    8. LR

      Um, forcing people almost... I don't know if forcing is the right word, but encouraging people to play with the latest stuff.

    9. HL

      Yeah.

    10. LR

      And, like, giving them chance to take days off, to-

    11. HL

      Yeah.

    12. LR

      ... or block out calendars, cancel meetings, just like stay on top of the stuff.

    13. HL

      Yeah.

    14. LR

      Uh, to play, as you talked about it, and then sharing things they've learned, get the vibes of what's possible.

    15. HL

      Yeah.

    16. LR

      There's also this idea of just rethink. Okay, if we were to start today, in this world, what would we do to achieve the same mission we have achieved, we are trying to achieve? And ideally, it leverages this unfair advantage we have with things we've been working on for a long time. And then there's just, like, talk to AI constantly, m- uh, every hour (laughs) -

    17. HL

      For sure.

    18. LR

      ... as you described.

    19. HL

      Yeah, multiple times an hour if possible.

    20. LR

      Multiple times an hour.

    21. HL

      Yeah.

    22. LR

      It keeps going up. Um, is there anything else that I missed there, that you're like, "This is, you need to do this too, to be really, to have a chance"?

    23. HL

      I think just to really, really try to break down role silos. Like, and I think that's true certainly for E, P, and D, um, in the typical, like, um, you know, EPD triangle, but I also think it's, it's probably true even for, like, non-product roles, right? Like, I think, um, it's true in marketing, right? Like, I'm seeing, you know, something, uh, you know, something I'm really pushing for in marketing, and I think our marketing team is, like, you know, really leaning into actually is, um, is like, you know, if you can just do all of the things yourself... Like, traditionally, you know, how a marketing team might operate is like, okay, you have one person who's kind of responsible for executing the performance marketing, you know, kind of, uh, part of a campaign, right? Like, they literally go into the Google AdWords interface, and they're, like, tweaking the parameters of targeting and, you know, budget and, like, you know, kind of, uh, conversion, uh, tracking, et cetera. And, and then somebody else is actually responsible for, like, coming up with the specific ad copy, right? And somebody else yet was responsible for coming up with, like, the seed content or positioning, you know, guide, like, written by a PMM that feeds into the ad creative and, you know, so on and so forth, right? Like, maybe they're promoting some, like, new demo asset, right, uh, that somebody else yet created. And I just think that, like, you know, in the same way that you can collapse the roles in EPD and, like, the ideal person, maybe they're, they're very specially, uh, you know, specialized and deep in one dimension, like engineering, um, but they're well-rounded enough to kind of, like, be dangerous on the other two. Like, I think that's kind of true in almost every other function, right? Like, you know, like sales as well. Like, I think you should, you know, start to be able to play more of an SE role. 
Like, traditionally, salespeople, um, didn't necessarily know the product that well, and like, you know, kind of relied on the SE to come in and be the product experts. Like, I think it's really hard to sell any kind of AI product now without actually being fluent in the product and be able to demo the product, right? So, like, you know, and, uh, AEs need to be, like, SE fluent as well. So, I just think that that concept of, like, collapsing roles, um... You know, everybody needs to, like, become more full stack to do the thi-... Like, being more outcome-oriented, right? Like, your outcome as an AE is to, like, show customers, you know, convince customers of the value of your product and close deals, right? Okay, well, in order to do that, like, you used to have dependencies on having assets created by marketing and, like, you know, an SE to help you demo. Like, can you collapse more of those dependencies so that if you had to, you could do it all yourself, right? Um, and I just think that's a new way... Like, it's a new operating mentality overall for every AI-native company or company that wants to compete in this new arena.

Episode duration: 1:40:41


Transcript of episode GT0jtVjRy2E
