The Twenty Minute VC

George Sivulka, Co-Founder & CEO @Hebbia: The Future of Foundation Models | E1250

George Sivulka is the co-founder and CEO of Hebbia, one of the fastest-growing gen AI companies; they recently raised a $130M Series B. Investors in the company include a16z, Peter Thiel, Index, GV and others.

In Today's Episode with George Sivulka We Discuss:

00:00 Intro
03:28 Three Traits The Best Founders All Share?
05:41 How Cold Calling NASA Changed My Life
09:24 From Stealing Food From Stanford to Pitching Peter Thiel
15:59 Lessons Working with Peter Thiel
24:02 The Future of AI and Business Applications
33:29 Debunking the Myths of AI Job Displacement
38:32 The Future of Models: Many Specialised or Few Generalised?
39:50 The Impact of Scaling Laws on Foundation Models
46:13 The Geopolitical Influence on AI
48:06 The Commoditization of AI Models
48:05 Why Foundation Models Will Not Follow the Same Path of Cloud
57:54 Quick-Fire Round

George Sivulka (guest) · Harry Stebbings (host)
Jan 22, 2025 · 1h 8m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–3:28

    Intro

    1. GS

      You can bucket great founders into three backgrounds. The most common is that you had kind of a messed up childhood. The second most common would be, uh, you're gay. And the third most common would be you were adopted. Look at, like, a list of- of- of all-time greats, Elon Musk, kind of messed up childhood, Jeff Bezos, Steve Jobs, adopted, Peter Thiel, Sam Altman, publicly gay. All of these early life experiences end up giving you some desire, some deeper passion to go out and prove yourself.

    2. HS

      Ready to go? George, I am so excited for this, dude. I've been really looking forward to this one. I spoke to Kevin Hart, Sangeen, Corey. I found out all the shit there is to know. So thank you for joining us.

    3. GS

      Yeah. And- and- and, I mean, it sounds like you did a lot of research, so thank you for diving deep and I'm really excited to meet you as well.

    4. HS

      Yeah. As a venture capitalist, it's amazing the amount of free time you have. Uh, talk to-

    5. GS

      (laughs)

    6. HS

      Talk... This is gonna be a fun show. Uh-

    7. GS

      Yeah.

    8. HS

      ... talk to me about your childhood. I spoke to Sangeen and he was like, "This was a really interesting part of me getting to know George." So talk to me about your childhood. And I'm leaving that deliberately open for you.

    9. GS

      It's, uh... It is fair. And, uh, I think the first time, you know, I met with Sangeen, who's one of our investors at the Series B, it was like a 30-minute, uh, lunch that turned into almost two hours of us talking in- in depth about, uh, the dynamics that- that I think made me have a chip on my shoulder. But in short, uh, I was born in Staten Island, New York City, which, you know, you already have a chip on your shoulder from that. Uh, grew up kind of around New York City, in- in New Jersey primarily, which a second chip. Um, but my mom, uh, is probably, like, a, you know, mafia child, born and raised Staten Island. And my dad is an immigrant from Slovakia who grew up under the Iron Curtain and then immigrated, really escaped to the United States. They, both of them actually fully intended to be professional athletes, and they had four children of which only one was a boy. Uh, and so, you know, you can imagine their dismay when I was, uh, you know, chasing butterflies on the soccer pitch, uh, or, like, falling on my head many times, which I- I have plenty of stories of me literally, like, falling over while trying to dribble a basketball. Uh, and I think, you know, my whole childhood, I was- I was really just a math kid. Like, pretty, like, not very out there, wasn't really talkative, only really good at math. And my parents barely even knew what Stanford was. So growing up, you know, you kind of have this whole misalignment of- of who I was and who I wanted to be, uh, with who they wanted me to be. And I think that gave me this drive and desire and passion to go out and prove myself in a way that was really tangible, uh, maybe not only to them, but, like, hopefully to- to my own kids one day.

    10. HS

      Did you have friends?

    11. GS

      Uh, I was very popular. Thank you very much, Harry. (laughs) Uh, uh-

    12. HS

      Dude, I'm looking at you now.

    13. GS

      Yeah. (laughs)

    14. HS

      I'm believing that this Cristiano Ronaldo lookalike could be popular.

    15. GS

      No, no. I- I... Yeah. I- I had a lot of friends who, um, were incredibly nerdy. So we went to a public school. Uh, like, I was the type of kid that would- would hack the- the school tablets to put StarCraft on everyone's computer and then, you know, we'd all, like, not be paying attention in public school playing Star... You know. So there was a- there was a- a large enough contingent of kids that were also not athletes, uh, that- that, you know, there was- there was some involvement there. And- and, uh, and I think they also had- had a strong effect

  2. 3:28–5:41

    Three Traits The Best Founders All Share?

    1. GS

      on me.

    2. HS

      Before when we were chatting, you said-

    3. GS

      Mm-hmm.

    4. HS

      ... there are three archetypes, I don't know if you're willing to go into it-

    5. GS

      Yeah. I'm- I'm- I'm... I have- (laughs)

    6. HS

      ... of successful founders, that you've found is a trend. Can you talk to me about the different profiles?

    7. GS

      Uh, I'm- I'm- I'm happy to. I- I always joke around and say that, uh, you can- you can bucket great founders into- into three backgrounds. Uh, I think probably the most common is that you had kind of a messed up childhood. Uh, the second most common would be, uh, you're gay. And the third most common would be you were adopted. And I think if you... you can, if you can look at, like, a list of- of- of the all-time greats, you know, like, uh, Elon Musk, kind of messed up childhood, Jeff Bezos, Steve Jobs, adopted, uh, you know, uh, Peter Thiel, Sam Altman, you know, like, publicly gay. And- and I think that all of these kind of early life experiences end up giving you some desire, some, like, deeper passion to go out and prove yourself.

    8. HS

      I actually very much agree with you. I always very much felt like a disappointment. My brother was always-

    9. GS

      (laughs)

    10. HS

      No, no. I'm being serious.

    11. GS

      Really?

    12. HS

      My brother was always incredibly talented and good-looking and tall and smart. And I was kind of just pretty average and I was fat. Um, and, like, my dad didn't really hang out with me, he hung out with my brother. And so I always just felt like a disappointment. Um, what a, what a mistake that was, papa.

    13. GS

      (laughs)

    14. HS

      Uh... (laughs)

    15. GS

      Well, what does... what does-

    16. HS

      He's laughing now.

    17. GS

      What does your brother do now? That's- that's-

    18. HS

      He works with me upstairs. (laughs)

    19. GS

      That's- that's what I'm talking about. There we go. (laughs)

    20. HS

      But did you feel like a disappointment?

    21. GS

      You know, I think, um... I think the answer is yes, I did. Uh, I think, um, I felt, like, actually physically unable to do the things that, uh, you know, I was wanted to do, um, or I- I thought that I was good at things that weren't valued or weren't as important. And, you know, I... yeah. All of my sisters are amazing athletes. They're all, like, you know, six feet tall and they're, you know, just incredible, uh, incredible, um, you know, a- athletes. And- and I was... I was just not. And so I was kind of like the ugly duckling in many ways, yeah.

    22. HS

      I heard that you built lasers, you cold called NASA. Do... Can you talk to me about these kind of very strange but cool early influences in your life and how they shaped you?

    23. GS

      Uh, there's- there's completely separate stories there. Uh, but I think, um-

    24. HS

      How do you cold call NASA?

  3. 5:41–9:24

    How Cold Calling NASA Changed My Life

    2. GS

      So- so the- the story... This is actually a very good story. I wanted to be an astronaut. Like, that was my- my number one goal and I was... I was hell-bent on that. And so by the time I was, uh, I think around 15 years old, I was going to high school in New York City, an all-scholarship school where the alumni pay for everything. I was like... I was kind of tracking academically really strong. And, um-... and I wanted a NASA internship. And they were offering them to college, you know, undergrads or, or graduate students. Uh, and so obviously, I applied and got rejected five times. And then there was a snow day in February where, um, you know, my school was closed. I, I commuted into the city. I actually showed up in front of their New York City office, the NASA Goddard Institute for Space Studies, and I, I demanded that they let me in. Uh, and the front door security guard was like, you know, "Kid, get the heck out." Like, "What are you doing? You don't have an appointment." Like, you know, I was like, "I printed my resume out on the nicest paper. I'm wearing a suit. You've gotta let me up." Uh, and he kicked me to the curb. And so I actually sat outside. Uh, it's like 110th Street in Manhattan. Uh, and it was snowing. It was like so, so, so cold. And I, uh, I didn't know what to do and I started crying. And I actually called my mother, 'cause I was like, "Well, I'm gonna come home." And she's a salesperson. She works in medical sales. And so she, uh, she picked up the phone and said, uh, "Listen. No, you're not going anywhere. You sit your ass down and you call every single number that you can get into the building." And so I sat on the curb and I cold-called every number on Google, you know, (laughs) from, from my, from my old phone. Uh, and finally someone picked up. Uh, it was one of the only people in the office that day. And they came down and met me in the lobby and I basically pitched them on myself for two hours. And, um, gave me an interview. 
I interviewed, botched the interview 'cause I didn't know anything about like, kind of like, uh, linear algebra. I didn't know anything about physics. Uh, but I remem- I memorized all of the titles of the posters on this professor's wall. Came back the next day, so showed up again, cold, uh, and told them basically everything that I could possibly know about his specific research and he was impressed enough to let me work for him for free. And then they paid me the next year and then I published internationally recognized research the next year. And, and by that time, uh, I think that was impressive enough to, to let Stanford let me in. Uh, which, (laughs) which was a, which was a, a life-changing moment for me.

    3. HS

      That is incredible.

    4. GS

      Yeah.

    5. HS

      That is also an incredibly heart-wrenching moment thinking of a, a little boy on the street crying. The advice of when to give up versus when to persist and fucking relentless. Me and you are both young.

    6. GS

      Yep.

    7. HS

      We've been taught that you f- win by re- by persistence and going for it. When is that true and when is it not?

    8. GS

      I think, um, I think I have an unhealthy obsession with driving really hard. And, and I think, um, yeah, I think I, you just can never give up. Like I just don't think that's an option. Right? I think, um, you can look at every company ever and some get to $100 million in, in revenue in whatever, like some span of time which they probably, their marketing team is hacked. And some, you know, end up taking really, really long periods of time. Um, and the only, the only thing that actually changes is the rate at which you get there. And so sometimes things go in your favor, sometimes they don't. But if you're so persistent that you just continue, like you can, you can bring a lemonade stand to $100 million in ARR. Like there's nothing that's actually stopping you. You can brute force your way as a founder. Just screw product-market. You could literally brute force anything in the world. Um, you just have to have that shape. You have to continue to, to just, you know, pound away, uh, at, at, at whatever is, is, is in your way.

  4. 9:24–15:59

    From Stealing Food From Stanford to Pitching Peter Thiel

    1. GS

      Yeah.

    2. HS

      Stanford was a big one for you, I imagine. It was a really big personal validation to get in. Correct?

    3. GS

      Yes, yes.

    4. HS

      How did it feel when you got in?

    5. GS

      Uh, I was onto the next one. You know, it was li- it was like, "Okay, you know, that, that's done." And like the next day I was like, "Okay, well how do I become the youngest PhD student in my school's history?" It's like, yeah, not even a moment, like, I think, you know, I was excited for a moment, but you know, it, it faded very, very quickly.

    6. HS

      So, I spoke to Cory before the show.

    7. GS

      Yeah.

    8. HS

      Someone who's known you since you were 18?

    9. GS

      Probably even earlier, yeah. (laughs)

    10. HS

      I'm incredibly... Take me to the founding of Hebbia then. We're at Stanford, we're doing incredibly well, we are the wonder child. How does Hebbia come to be in that situation?

    11. GS

      I, I'm one of the youngest PhD students in the history of my school and I actually believed that I was working on, you know, at the time, one, one, one of the areas of research that was most interesting to me was meta-learning. This idea of teaching machines to learn to learn. Um, and it was June of 2020, uh, Sam Altman, OpenAI came out with GPT-3. And if you remember the title of that paper, it was, um, large language models are multitask or meta-learners. And I'm, I'm sitting in, in my lab one day, I'm playing around with this new technology, uh, and I'm like, "Wow, they just stole the most important thing I could work out r- right under from my hands." And I said, "Well, if I can't build the most important technology, how can I build the most important product?" Uh, and I think those are two very separate things. Obviously, at the time, GPT-3 was not a product. And I don't even think ChatGPT is a really good product. It's like a calculator. It's got the technology in there encapsulated in very simple form, but it's not a product that, like in Excel, that lets you just build whatever you'd like with it. That's very human first. Um, and Stanford always like pounds into your head the idea, "Hey, you've got to start a company where there's a lot of pain." And I had a lot of my students or a, a lot, a lot of my friends would go into, you know, investment banking or private equity if they were really lucky, and, uh, they would come back and, and basically be the least happy versions of themselves. They'd like lost 50 pounds, they just hated their lives. And, uh, it seemed like there was more pain in financial services around processing unstructured data than anything I'd ever seen. And it's like, "Well, there's a great company to be had here. Let's, let's give it a shot."

    12. HS

      Okay. So we're sitting in that lab. We're like, "Hey, there's a great company to be had here."

    13. GS

      Yeah.

    14. HS

      "Let's give it a shot."

    15. GS

      Yep.

    16. HS

      What now? 'Cause I heard, I mean, and I saw pictures of this wonderful bedroom.

    17. GS

      (laughs)

    18. HS

      Uh, uh, so this was Four Seasons finest.

    19. GS

      (laughs)

    20. HS

      Um, unable to make rent. You made me feel like such a diva when I saw that bedroom. Uh, but unable to make $300 rent, sneaking into Stanford dining halls for meals when you weren't studying there.

    21. GS

      Yep.

    22. HS

      (laughs) Martin George.

    23. GS

      I have no comment. Off the- off the record. We, we... (laughs)

    24. HS

      Uh, raised two rounds of financing with clothes hanging behind him on Zoom. Take me to the next step post that, "I'm going to do this in the lab."

    25. GS

      So, you know, I think I was, I was on a PhD salary. You know, you're, you're making what, $38,000 a year, I think 42 if, at, at the time if you, if you had the Stanford Graduate Fellowship, which I had. And so big dollars. Uh, and I said I was gonna go on leave, and actually originally went on leave and said, you know, told my advisor, uh, "I'll be back in a year." You know, "It's coronavirus, just like give me some time." Uh, and I didn't have anywhere to go, like, there was like a logical next step and I, I, I wanted to work on this company. So I, um, I asked my friends who were renting out, uh, a house in East Palo Alto to let me rent a room, the cheapest room they could possibly find. And they were all fully booked and it was like I think over $1000 of rent. And, and they said... I, I think it was actually, you know, $500 or $600, not $300 to give my, my broke self some credit here for not being able to afford the rent, but they said you could rent out the master bedroom closet. And so I bought, uh, I brought in like a mattress from the dorms and I had a folding table from, uh, I, I think it was like Home Depot nearby. Uh, and I would basically rotate whether the mattress on, was on the floor or the folding table was on the floor, and that was, I just sat there and worked all day, 16, 18 hours a day, go to sleep, wake up, do it again. No weekends. You know, it was just kind of like I turned into almost a monk, where I was just obsessively building heavy... I was training models at the time. Um, so I'd wake up in the middle of the night, check on them, you know, continue to use my GPU cr- 'cause I didn't want to spend any money.

    26. HS

      Is there a period where more work is not effective? Like when I think about 16 to 18 hour days in that environment-

    27. GS

      Mm-hmm.

    28. HS

      Dude, I lo- I'm masochistic to the extreme-

    29. GS

      (laughs)

    30. HS

      ... where it's unhealthy. An alcoholic, bulimic, tortured child.

  5. 15:59–24:02

    Lessons working with Peter Thiel

    1. GS

      yeah.

    2. HS

      I- So I just want to unpack that element there 'cause I said... Uh, I can't remember who it was who told me I had to ask, but they said I had to ask about driving to Peter Thiel's house.

    3. GS

      Well, so, so saying two months prior-

    4. HS

      Mm-hmm.

    5. GS

      ... uh, you know, I was again the, uh, I think I was at Stanford or just about to leave Stanford, and one of my friends had interned at Founders Fund, uh, you know one, one of my best friends in the world. And he's like, "Hey, you know, um, I hear you're raising financing. You should talk to Peter." Uh, and I was like, "Well, I'm not gonna say no to that." And so he introduces me on an email thread with me and Peter and, and I'm like, "Hey Peter, would love to do a lunch or a dinner." I'm not a morning person, so I was like, "Peter, would love to do a, a lunch or a dinner anytime soon." And Peter was like, "I can do a breakfast." I was like, "I really want to do a lunch or dinner, you know like any... Can we do a brunch?" You know? (laughs) He's like, "I'm gonna do a breakfast." And so I said, "Well, that's, that's fine." And he gave me a slot on a Saturday. And so got in my car, that was an old like beat up 2006 Audi convertible that I had like fixed up from Craigslist and bought for $4000, at 3:00 in the morning, and drank a bunch of coffee, like 18 cups of coffee, like a Five Hour Energy, like all the disgusting stuff, and drove from 3:00 to 8:00 to his house, to, to go and pitch this guy. Um, and he, I think he showed up 45 minutes to an hour late. He's like just waking up and I'm, I'm wired. I'm like sitting in my chair, you know like ready to go. And uh, (laughs) and it was supposed to be a 30 or 45 minute breakfast. I was like, all right it's kind of already shot. But we ended up talking for I think like four or five hours, uh, about not only the company and all of the, the flaws that I had in my, my business model but then also math and, and deep esoteric philosophy and like just the world. And he said, you know, "I'm not investing," um, at the time 'cause it was coronavirus and a variety of other factors. Um, "But I'd love to put in a check." 
And I, I had, I got out of this conversation thinking I had just made a friend or was seen from someone who's incredibly, incredibly, incredibly bright. And uh, I, I'm, I'm leaving his house and I'm, you know I felt like I was inducted into the Illuminati. I was like my whole bo- you know, drop top, the sun's shining, I was playing Kanye West and I drove out as, and this was like my first offer from a venture investor.

    6. HS

      How much did he invest?

    7. GS

      ... I, I, I don't even, I don't even recall. I think the, the total round was like a million dollars. So (laughs) it was, it was like nothing. Yeah.

    8. HS

      What do you think makes Peter so incredible?

    9. GS

      I think that there's, there's, there's two things. Um, you know, the- there's, he is incredibly ontologically smart. And so he, he can build this worldview or this perspective of the w- of the world where like he actually just like knows how to pattern match to a, a variety of other things. But then he's also, I think, phenomenologically smart, which is the idea of, uh, like he understands processes and how humans behave really well. And so he's, he's always thinking like, "Hey, you know, ex ante," or, or like, you know, if, if I'm looking at, um, you know, something that's about to unfold, uh, "Could I have predicted this ahead of time?" And he always just asks himself that question. So he's built up a really rich perspective of, of the, the fallacies that, that human society has and like, you know, the, the, the meme- mimetic behavior that, that people kind of go out and, and, and copy each other with, et cetera, et cetera.

    10. HS

      So we then have money from Peter.

    11. GS

      Yeah. (laughs)

    12. HS

      And we have the Peter Thiel stamp of approval.

    13. GS

      Yeah.

    14. HS

      Does that open every door in the valley?

    15. GS

      Uh, uh, I mean, I think we were in late discussions with a lot of investors in there and also it's like, yeah, let's like, you know, let's pile on in here. It's, you know, it's some of the best money that you can get. (laughs) Uh, and, and that was, that was a, a game changer for us.

    16. HS

      So then we close with Maples and Floodgate. Yeah?

    17. GS

      Yeah. It was Ann, Ann at Floodgate.

    18. HS

      Okay.

    19. GS

      Mm-hmm.

    20. HS

      Ann and Floodgate. Ann and Thiel. And then we get back to work.

    21. GS

      Mm-hmm.

    22. HS

      Right? And we've got a million or so.

    23. GS

      We've got a million. Uh, and then two months later, uh, Mike actually hears about Hebbia from, uh, his daughter who's a Stanford student, uh, and I think had seen the product, was friends, and, uh, and then Mike actually comes in and is like, "Hey, this is completely different than, uh, Elastic or all these other search technologies that I've seen and invested in." Uh, obviously he's on the board of Elastic today, so he's like, "Well, let's just go, uh, add some fuel to the fire."

    24. HS

      How much did he invest then?

    25. GS

      He, I think, invested like an additional 2 or $2.5 million at the time.

    26. HS

      So where did the 130 come from? This was-

    27. GS

      Well, that was, was years later, you know, so... (laughs)

    28. HS

      This was year- this was years later?

    29. GS

      So, uh, th- this is all in 2020. We, we end up building a product studio, which is the first to productionize RAG, Retrieval Augmented Generation, also in 2020. We build the first semantic search d- uh, en- engine. We actually go out and, and start to-
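The RAG pattern mentioned above, retrieval followed by generation over the retrieved context, can be sketched in a few lines. This is a toy illustration, not Hebbia's implementation: the bag-of-words "embedding" stands in for a real embedding model, and every name here is hypothetical.

```python
from collections import Counter
from math import sqrt

# Toy bag-of-words "embedding" so the sketch runs with no model API.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Stuff the retrieved passages into the prompt for a generator model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The credit agreement defines an event of default in section 8.",
    "The CEO discussed the investment thesis on the earnings call.",
    "Office lunch menus are posted every Monday.",
]
prompt = build_prompt("What did the CEO say about the investment?", docs)
print(prompt)
```

In a production system the prompt would then be sent to an LLM; here the point is only the retrieve-then-generate shape.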

    30. HS

      Can you just help us understand.

  6. 24:02–33:29

    The Future of AI and Business Applications

    1. GS

      answered.

    2. HS

      My question is like, what the fuck happens to the rest of the landscape if you're like, "No, RAG is not actually the right approach," and they're all loving RAG.

    3. GS

      Mm-hmm.

    4. HS

      Couldn't be hotter right now.

    5. GS

      I, I don't think they're loving RAG.

    6. HS

      You don't think they are?

    7. GS

      I don't think they're loving RAG now.

    8. HS

      What makes you say that?

    9. GS

      I think mo- like 90% of enterprise AI right now is almost like this vapor where, "Hey, we're, we, we swear it works. Like, look at this amazing demo where we ask, 'What does the CEO say about the investment?'" And then the minute that they actually go to, you know, try to use it in a real world example, completely just fails. Uh, and so I actually think that the majority of AI usage, a lot of these usage statistics are all kind of, uh, one of my favorite phrases is fugazi fugazi. And, and one of the things that Hebbia really tries to, to, to put forth in the market is, is say, "Hey, uh, change will take time, uh, but we have a system that, that is actually starting to drive real, measurable value over very specific defined use cases." And our tagline's always, "Hey, stop experimenting with AI," which everyone's experimenting, they're all really excited about it, "start driving value, like getting value out of it."

    10. HS

      To what extent is this RPA versus agentic?

    11. GS

      I, I think that, um... I, I actually am not a big believer in RPA. I think RPA is a, is almost not an AI application in the, in the new sense of AI. It's like AI in the old 10 years ago sense of AI, um, where RP- RPA is effectively, like, very simple computation. But some of the things that people are asking Hebbia are over 800-page credit agreements, or 230-page, you know, CIMs, uh, confidential information memorandums, this marketing material, where they're, they're not actually asking for things that, um, you know, like copying numbers. They're saying, "Hey, t- tell me, uh, what are inconsistencies in this document? Tell me where there's an event of default that we can trigger." And like, it, there, there's almost this, like, open-endedness or this new level of computation that people can create.

    12. HS

      I think Daniel Dines, who we had on the show-

    13. GS

      Mm-hmm.

    14. HS

      ... he comes out on Wednesday, but, um, he said it very well. He said, like, "Listen, RPA is low skilled, low level-

    15. GS

      (laughs)

    16. HS

      ... cognitive processes."

    17. GS

      Yep.

    18. HS

      And, you know, agencies, high skilled-

    19. GS

      Yep.

    20. HS

      ... ambiguous-

    21. GS

      Great.

    22. HS

      ... decisions.

    23. GS

      Yes.

    24. HS

      I think that's a nice phrasing.

    25. GS

      I think it's, uh, I think it's, it's incredibly clear. Yeah. And I think that we very much are capturing, uh, the, the agent, the, the high level, um, ambiguous decision-making, and trying to trace it all the way down back to individual citations or individual characters that led the model to that decision.

    26. HS

      I thought about, actually, Satya's statement the other day-

    27. GS

      Yep.

    28. HS

      ... that he made, which is the notion that business apps exist. Uh, sorry, the notion that business apps that exist today will all just collapse into agents. Do you agree with that? And will apps just be the predecessor to agents?

    29. GS

      I actually don't, don't agree with that at all.

    30. HS

      Huh?

  7. 33:29–38:32

    Debunking the Myths of AI Job Displacement

    1. GS

      I, I don't, I don't think that, that takes away, um, yeah.

    2. HS

      Do you really think it will be a tool for usage, not a tool for replacement?

    3. GS

      I h- I genuinely believe it makes humans better. I genuinely believe-

    4. HS

      In five years' time, do you not think that is a different story?

    5. GS

      I think that it will change the way that people do work, but I genuinely believe that, uh, it'll actually increase the AUM of the firms that use it. I think it will actually drive more employment. I, I think there will be some jobs that change. Hey, there's no more bookkeepers that do, you know, tabulations in, in spreadsheets on two sheets of paper.

    6. HS

      What changes most and what stays the same?

    7. GS

      The cognitive tasks that are lower in cognition, i.e. like more back office, kind of middle office, uh, maybe even some of the more junior front office tasks, I think we'll start to be, start to move into, "Okay, how can we manage AI juniors rather than, uh, you know, actually do this ourselves by hand?" But I don't think that, you know, just as Excel didn't end up taking away jobs from people, it just changed people, you know, to having to learn Excel, um, the exact same thing will happen with AI.

    8. HS

      So you don't think that we will see team sizes reduce as a result of agent integration into enterprise?

    9. GS

      You know, there's, there's all these stories and, you know, you have like Klarna that's p- positioning for investors that they're firing half their staff and, and, and no one really wants, and I think a lot-

    10. HS

      Do you think that's BS?

    11. GS

      I think it's BS, yeah.

    12. HS

      Why?

    13. GS

      I think, I think, uh, there might be some reality to it. Um, uh, but I think that, uh, it's, it's an amazing marketing story. And so anytime that I ever hear something that's put out as a marketing story, I almost negate it in my head to actually think about like what the implications are. When you're saying something and screaming it from the rooftops, that almost always means that inter- internally you're freaking out about something. And so, uh, you know, I, I, I look at that really loud behavior and I think that the behavior itself really negates the content. Uh, and, and that's, that's maybe my, my positioning on, on this sort of stuff, but...

    14. HS

      How do you feel about competition? What are your lessons on competition? There are several players now in the Hebbia kind of, uh, slipstream.

    15. GS

      T- Totally.

    16. HS

      How do you feel about that?

    17. GS

      I ultimately, um, I think that if $100 trillion of economic value will be created by AI and agentic applications, uh, that there will be so much room and so much opportunity for a ton of different players. I don't think that when Excel came out, um, and then, you know, Mark released Salesforce, and then, you know, people created TurboT- you know, uh, all these unravelings of Excel, uh, actually were, were produced later, that that made Excel any less valuable. I actually think it made Excel more valuable. And so I, I, I view Hebbia as this platform as something that will actually get better the more people get inspired by it and build increasingly verticalized applications.

    18. HS

      What models do you use? You said on top of what?

    19. GS

      We are completely model agnostic. Um, we are u- we use all of the major model providers, some of our own models. But ultimately, the foundational difference that Hebbia is capitalizing on right now is, I think, a- a fundamentally new and- and very important difference, which is, I actually think on the order of creating RAG and- and creating agents and decomposition, uh, is this idea of us in the last year or so having pioneered scaling at inference.

    20. HS

      Talk to me about this.

    21. GS

      Right now, you actually... So OpenAI is starting to do this with o1, where they'll have a model recursively think about a question over and over and over again before it produces an answer. And so instead of training a larger model, they're using, effectively, a- a similarly sized model and just telling it to run multiple cycles, or i.e. compute more before answering.

    22. HS

      Huh.

    23. GS

      Hebbia has actually pioneered something different, where, you know, a year, almost 18 months ago, we said, "Hey, we can't wait for these models to catch up." What we'll do is, for a simple single question, actually run hundreds or even thousands of sub-models, of the best models in the world, to compute over every single document to answer the same question. And so ultimately, if you can't train larger and larger models fast enough, you could take whatever is state-of-the-art or cutting edge and run it more times to get more compute, i.e. more computational power, better decision-making for the same user right now. And so this is an idea that we pioneered. It doesn't matter if you're using Claude 3.5 or if you're using, uh, o1 itself, i.e. you can combine scaling at inference at the orchestration layer with something that was scaled at inference at the training layer, um, but you get way better results, and it's, uh, it's a way to- to drive through more accuracy.
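      The fan-out George describes, asking one question of hundreds or thousands of model calls, one per document, in parallel, can be sketched roughly like this. This is a minimal illustrative sketch, not Hebbia's implementation: `call_model` is a hypothetical stand-in for any provider API, here replaced by a trivial keyword check so the sketch runs offline.

      ```python
      from concurrent.futures import ThreadPoolExecutor

      def call_model(question: str, document: str) -> str:
          """Hypothetical stand-in for a provider API call (OpenAI, Anthropic, etc.).
          It only does a trivial keyword check so the sketch runs offline."""
          keyword = question.split()[-1].rstrip("?").lower()
          return "yes" if keyword in document.lower() else "no"

      def scale_at_inference(question: str, documents: list[str], max_workers: int = 32) -> dict[str, str]:
          """Fan the same question out over every document in parallel,
          one model call per document, then gather per-document answers."""
          with ThreadPoolExecutor(max_workers=max_workers) as pool:
              answers = list(pool.map(lambda doc: call_model(question, doc), documents))
          return dict(zip(documents, answers))

      docs = [
          "Revenue grew 12% on strong demand.",
          "Headcount was flat year over year.",
      ]
      results = scale_at_inference("Does this document mention revenue?", docs)
      ```

      The design point is that the per-document calls are independent, so throughput scales with however many calls you are willing to pay for, which is the "more compute at inference" trade he describes.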

    24. HS

      Right now, which do you find provides the best results? We had Des Treynor on from Intercom recently. He spoke about the movement away from OpenAI to Anthropic.

    25. GS

      We've seen that for certain types of documents, like the dense legalese or- or even, uh, more colloquial documents, Anthropic works better. Uh, but for other types of documents, like, o1 or OpenAI's GPT-4o works better, and it's always trade-offs between accuracy and speed and all kinds of things. Actually, a lot of the time when we're decomposing a task, we'll use mixes of OpenAI, Anthropic, even Gemini.
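      The routing he hints at, dense legalese to one provider, other document types to another, might look like a simple dispatch table. This is purely illustrative: the document-type keys and the idea that these exact models win on these exact types are assumptions for the sketch, and the model names are labels, not API calls.

      ```python
      # Hypothetical dispatch table: document type -> preferred model label.
      # A real router would call the corresponding provider API.
      ROUTES = {
          "legal": "claude-3.5",       # dense legalese
          "colloquial": "claude-3.5",  # conversational documents
          "reasoning": "o1",           # heavier multi-step tasks
      }
      DEFAULT_MODEL = "gpt-4o"

      def pick_model(doc_type: str) -> str:
          """Return the preferred model for a document type,
          falling back to a general-purpose default."""
          return ROUTES.get(doc_type, DEFAULT_MODEL)
      ```

      A fallback default matters here: task decomposition produces document types you did not anticipate, and the router should degrade to a general-purpose model rather than fail.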

  8. 38:3239:50

    The Future of Models: Many specialised or few generalised?

    1. GS

    2. HS

      Got you.

    3. GS

      Yeah.

    4. HS

      Do you think we live in a world moving forwards of many models that are specialized in different things, as you said?

    5. GS

      Yep.

    6. HS

      Some do legal, some do whatever we want to talk about. Um, and that's the world we live in, or there's generalist monolith models which really kind of own the whole stack?

    7. GS

      You know, there's a sto- this makes me think of the story of Bloomberg, which has the best financial services training set of all time, and they trained a GPT-3.5-class model. Uh, it was called BloombergGPT, and they released an arXiv paper, and everyone on LinkedIn was like, "Wow, Bloomberg is, you know, cutting edge and they're gonna, uh, steal finance AI." And-

    8. HS

      Why did they not?

    9. GS

      (laughs)

    10. HS

      They did have all... They've got the best data in finance. What-

    11. GS

      Well-

    12. HS

      What the fuck?

    13. GS

      So then GPT-4 was released, I think like a few weeks later, I don't know exactly, you know, the right timeline, but it just destroyed BloombergGPT at every single finance task. And so you saw that this idea of- uh, of post-training, or- or kind of this refined verticalized model creation, uh, would just always lose to scaling laws. And maybe we're at the end of scaling laws at training, but I actually think, you know, Hebbia and- and now OpenAI and a variety of other companies are starting to pioneer the idea of scaling laws at inference, and I actually think that it will... Like, nothing that- that other players can do to fine-tune models will ever catch up.

  9. 39:5046:13

    The Impact of Scaling Laws on Foundation Models

    1. GS

    2. HS

      I- I need to break that down, sorry.

    3. GS

      Yeah.

    4. HS

      Um, so everyone's like, "Oh, are we at the end of scaling laws? Oh, are we..." Yeah, and- (laughs)

    5. GS

      (laughs)

    6. HS

      ... you know, Re- Reid Ho- uh, Benioff and Danelle Dynes are like, "Uh, yes, we are. Yes, we are the upper end of LLM."

    7. GS

      Yep. Yep.

    8. HS

      Uh, Reid Hoffman is like, um, "No, there's so much more room to run." Can you just break down for me the difference between scaling laws at inference and scaling laws at training?

    9. GS

      Yeah, I think, um, you know, there's a... It's a bit of, um, a marketing distinction, right? But ultimately, the idea is that, uh, the way that we got here over the last, you know, five, seven years of- of training models has been let's build a bigger and bigger model, and let's give it more and more data, more and more clean data. And then maybe we'll do some RLHF, you know, reinforcement learning from human feedback, to- to fine-tune it after pre-training, and, uh, that worked great to get us here, but we're running up against the amount of good data that exists in the world. Uh, we're running up against-

    10. HS

      Are we? Because people push back on this and say, "There's so much data that we haven't used yet, whether it's, uh, kind of video data-"

    11. GS

      (laughs)

    12. HS

      "... that can't be translated, whether it's synthetic data." Like, we are not at all exhausted in terms of data supply.

    13. GS

      You know, I think that we're starting to run up against the constraints of it, and that- that's a- that's a gut feel. I'm not, you know, I'm not looking at, uh... I'm not e- particularly in data collection myself, uh, but- but I- I- I think we're starting to run up against the limits of- of really good data that we can- that we can use. Yeah.

    14. HS

      Okay, so what- what's then the problem?

    15. GS

      So- so, ultimately, uh, that might mean that, hey, we're training larger and larger models, xAI again just- just created the largest GPU cluster of all time, and they're going to try to train larger and larger models. But regardless of how the scaling laws for training larger models, i.e. how parameter count relates to- to accuracy or performance, uh, carry out, I'm starting to, uh, believe that you could still get better compute not by building a larger engine, to use a metaphor, but by actually putting a bunch of smaller engines together. And so, you know, it- it... Hebbia, uh, by- by orchestrating large amounts of inference to answer one single question, ends up kind of building like a Tesla, where a Tesla's made of a bunch of smaller, you know, uh, electromechanical motors, uh, that make a lot of torque in a really, a really amazing larger engine.

    16. HS

      Does it not make it incredibly capital inefficient?

    17. GS

      You know, I think the one thing that people in my position, uh, will always tell you is that the cost of intelligence will go to zero... the cost of intelligence will go to zero. I mean, I think that since Hebbia started, the cost of, of inference over a fixed number of parameters has decreased by like seven orders of magnitude in four years. And so, I genuinely believe that, um, scaling compute is like a no-brainer. And yes, we run more large language model calls than anyone might even say would ever be necessary. But we have the best accuracy in the business. We can answer much more complex problems, we're driving real value for enterprises. I, I... and I actually think that every single quarter, like, our margin goes up, and it's, "We're not spending money fast enough."

    18. HS

      Huh.

    19. GS

      Yeah.

    20. HS

      You mentioned xAI's GPU cluster.

    21. GS

      Yes.

    22. HS

      What they've been able to do in such a short amount of time is miraculous.

    23. GS

      Yeah.

    24. HS

      What do you think that tells us about the layer itself?

    25. GS

      I, you know, I think that, um, ultimately the model layer, and I think this is not a hot take anymore, I've been saying it for a few years, but I think it'll become commoditized. I think that a lot of value will accrue at the hardware layer, uh, especially, um... and we, and we can talk about what that means for NVIDIA, especially as, uh, you know, NVIDIA has a stranglehold on training, but, you know, uh, n- not as much of a stranglehold on inference. And so you might actually see, um, uh, other chip makers, you know, actually start to... their, their chips start to be used in a more meaningful way. Because, you know, CUDA is what all ML scientists were trained on in their PhDs, but then, you know, for inference it doesn't matter, uh, kind of what you're using. Uh, and I think it'll be the infrastructure layer, and then actually the application or agent layer that will accrue the most value. Ultimately, yeah.

    26. HS

      Why does it not follow the same vein as cloud? Where cloud is commoditized.

    27. GS

      Yep.

    28. HS

      But Azure, Google Cloud, AWS, I mean, it's completely commoditized, let's be honest, cloud.

    29. GS

      Yeah.

    30. HS

      But it's fucking great business for them.

  10. 46:1348:05

    The Geopolitical Influence on AI

    1. GS

      employees.

    2. HS

      How important is geopolitics-

    3. GS

      I think it's-

    4. HS

      ... in winning this game?

    5. GS

      I think geopolitics is, is, is actually very important. I think that governments will be some of the largest users of AI, especially, uh, especially with like some of the, the recent, uh, things that the, the, the new administration in the United States has been talking about with, with increasing government efficiency. Um, I think that ultimately, uh, energy is a, a very big bottleneck. Every, you know, it's a very common thing in Silicon Valley to talk about, "Hey, we need nuclear reactors to, to flatten the duck curve so that, you know, we can, you know, we can continue to drive to, um, larger and larger, uh, you know, kind of data centers," et cetera, et cetera, et cetera. Um, and, and those are ultimately geopolitical resources. And so I think all these things end up being very important. And then, and then Elon's just operationally so talented, right? So, I think that ultimately if this becomes commoditized and, and whoever can really operationalize, uh, uh, model creation and, and, and serving models the fastest, uh, I think, uh, I think he might start to win.

    6. HS

      So you think xAI, and you would invest in them?

    7. GS

      I would, uh, if, if I, if I had the opportunity. But, but ultimately I think all of them are undervalued. I, I genuinely believe all AI companies and the S&P 500 are all undervalued, which is a very hot take. But again, if, if we're about to create $100 trillion in value, I think this is a real tangible technological shift. It's a massive unlock on the order of what computing did for the entire economy over the last 60 to 80 years. Uh, I think this will do that for the next 60 to 80 years. I think all these companies are massively undervalued, including the non-AI companies.

    8. HS

      Unpack the last bit, "Including the non-AI companies."

    9. GS

      I, I, I genuinely believe that, uh, computers made legacy businesses better if you use them correctly. Uh, and so it's a massive disrupting force, but if you can ride the wave of change, I think that, uh, AI agents in this new fundamental paradigm is,

  11. 48:0548:06

    Why Foundation Models Will Not Follow the Same Path of Cloud

    1. GS

      is a massive unlock

  12. 48:0657:54

    The Commoditization of AI Models

    1. GS

      for

    2. HS

      Do you, do you think there is a slight difference? Everyone talks about kind of different, uh, technological transitions. When you look at, you know, the agricultural transition or-

    3. GS

      Yeah.

    4. HS

      ... kind of agricultural, um, dependency on human labor and movement to machinery-

    5. GS

      Yeah.

    6. HS

      ... computers and workforces, these were at least 10-year transitionary periods.

    7. GS

      Yeah.

    8. HS

      At least.... this is like, hey, we use, you know, AI tools now because we just bought them today.

    9. GS

      Yeah.

    10. HS

      The transition period is, is instant.

    11. GS

      Yeah, it's much faster. Mm-hmm.

    12. HS

      Does that not change the enterprise value accumulation and whether they're good or bad for businesses? Because it's like instantly your business will die if you don't have it or not.

    13. GS

      You know, I actually always liken technological revolutions, uh, y- y- to what Hebbia is doing right now, where, you know, people invented f- uh, we discovered the technology of fire and then someone invented the torch, you know, I don't know how many years later. You know, we invented the engine and then someone invented the car, or the wheel and then the chariot. And so this idea of encapsulating and building a useful product on top of a technology change is actually the thing that takes more time. And I think that Hebbia has built, uh, you know, if, if Excel was that product for compute, uh, I actually think Hebbia has built that product for AI. And I think that when you have a good product, that tran- transition will be very, very, very quick. Right now we have these chatbots or these, you know, surface-level search engines that give you facetious, you know, surface-level value. Yeah, it'll help your kid cheat on their homework, but to drive to whether or not something's a good investment is a much, much more rich problem.

    14. HS

      Is chat the right interface for many of these applications?

    15. GS

      I ultimately do not think so. I think, I think that chat was always a useful feature, it's an, it's an, it's a useful interface, but again it's, it's like a single cell in Excel. It's like asking if the TI-84 was the right interface for computers, or the terminal was the right interface for computers. We have not even started to explore the opportunities for interfaces. I actually think that-

    16. HS

      What do you think they are?

    17. GS

      Well, I, you know, I think that Hebbia is, and I conceive of ourselves as, the Bell Labs of defining AI interfaces. Right, I, I think that, um, you know, RAG was one of them, i.e. this idea that you could find things in the data really fast. Decomposition and agents are another. This idea of scaling at inference with our Matrix product is another. I think that, um, you can look at a lot of the other things, where agents are controlling four screens at once and you're actually looking at someone use a computer, or computer use where AI models are moving cursors, or others. Uh, I think almost all of them have actually, yeah-

    18. HS

      Ultimately if agents are efficient, does interface not become irrelevant?

    19. GS

      I actually think that, uh, the better agents are, the more work that they do, the more important it will be that they are easily understood by humans. The idea would be, okay, let's say we have a bunch of employees, 10,000 employees, or 10,000 AI agents drop at a company. They're all experts at doing something. That ends up not becoming a, uh, you know, a problem of giving them th- the right tasks, but actually it becomes a management problem. Right? There's this whole infrastructure, orchestration layer, the thing I always come back to, of making these things work together. And that's actually going to be a challenge, and that's, that's going to require a very human-first, ultimately a, a product, uh, and that's what we're trying to build.
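      The management problem George describes, routing tasks to thousands of specialist agents while keeping the result legible to a human, can be sketched as a tiny orchestrator. This is an assumed toy model, not Hebbia's product: the `Agent` class, skill-based routing, and audit trail are all illustrative choices, and `run()` is a stub for real agent work.

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Agent:
          """A hypothetical specialist agent; run() stubs out real agent work."""
          name: str
          skill: str
          log: list = field(default_factory=list)

          def run(self, task: str) -> str:
              entry = f"{self.name} completed: {task}"
              self.log.append(entry)  # keep a human-readable trail per agent
              return entry

      def orchestrate(agents: list[Agent], tasks: list[tuple[str, str]]) -> list[str]:
          """Route each (skill, task) pair to the matching specialist and
          return an audit trail a human manager can review."""
          by_skill = {a.skill: a for a in agents}
          return [by_skill[skill].run(task) for skill, task in tasks]

      team = [Agent("paralegal-1", "legal"), Agent("analyst-1", "finance")]
      trail = orchestrate(team, [
          ("finance", "screen the data room"),
          ("legal", "flag change-of-control clauses"),
      ])
      ```

      The point of the audit trail is his "human-first" argument: as agents do more of the work, the orchestration layer's job shifts from assigning tasks to making what happened understandable to people.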

    20. HS

      Do you think Elon will be successful with Doge?

    21. GS

      I think it will be his greatest challenge. Uh, I think there's, there's a, there's a lot of self-reinforcing, um, self-protecting mechanisms in the largest organization in the world, which is kind of the US government, by, by spend, by... You know, it's just, it's just this massive unruly organization. Uh, it's not, it's not gonna be as simple as Twitter. Uh-

    22. HS

      Are you more excited in a post-Trump-

    23. GS

      The thing that I care most about in the world is that we as an industry have very clear guardrails, uh, that we can follow and understand f- to build the best possible tools, to, to, to get our tools out to the economy, to make sure that everyone transitions in the best pos- like best possible way. So I'm, I'm, I'm ultimately regardless of-

    24. HS

      But does your business not thrive on a better financial system? And we're seeing now a financial system in the US from afar that would seem to be thriving. Objectively-

    25. GS

      Yeah.

    26. HS

      ... it would appear that Trump is good for business.

    27. GS

      Uh, you know, I, I, I won't make a comment here.

    28. HS

      (laughs)

    29. GS

      (laughs) Um, I, I, I think that there's a lot-

    30. HS

      You know what's funny?

  13. 57:541:08:12

    Quick-Fire Round

    1. GS

      love to see.

    2. HS

      Are you ready for a spicy round?

    3. GS

      Let's... Give me the spicy round.

    4. HS

      Okay.

    5. GS

      We got the tissues out here too.

    6. HS

      We've got the tissues in case you cry.

    7. GS

      (laughs)

    8. HS

      This is the... In ca- in case you need them to hide behind.

    9. GS

      (laughs)

    10. HS

      So this is the spice round. So this is questions from friends of yours. Okay?

    11. GS

      Okay.

    12. HS

      Um-

    13. GS

      We got the... We got some changing colors up here. I love it.

    14. HS

      Yeah, yeah, I know. It's a full game show.

    15. GS

      (laughs)

    16. HS

      Um, there we go. It's like a fucking David Guetta concert.

    17. GS

      (laughs)

    18. HS

      Um... Uh... Okay. Fuck, I haven't seen these. Okay, right. Perfect. Number one question. Would you sell for two billion dollars today?

    19. GS

      Would I sell for t- No. No, I would not.

    20. HS

      What was the worst VC meeting you've ever had?

    21. GS

      The worst VC meeting I've ever... And I can't say this one? Next question. (laughs) Uh, I, I... Yeah, I love VC meetings. Um-

    22. HS

      Which one were you like, "They're a douche"?

    23. GS

      Hmm?

    24. HS

      Which one were you like, "Uh, bad experience"?

    25. GS

      I would never say, but I... Yeah, I think that, uh, you know, our customers are VC. I, you know, I, I love, I love almost every VC, uh, that I've ever met.

    26. HS

      What was the single best VC meeting?

    27. GS

      I think it's, it's somewhere between, you know, Peter talking to me about anything but the business and deeply academic things and, and Mike taking me on a walk around the, the Woodside Horse Park, which was just pretty great.

    28. HS

      Do you trust Sam Altman?

    29. GS

      No.

    30. HS

      (laughs)

Episode duration: 1:08:12


Transcript of episode KmwdPwwXwWw
