
No Priors Ep. 135 | With Humans& Founder Eric Zelikman

The AI industry is obsessed with making models smarter. But what if it's building the wrong kind of intelligence? In launching his new venture, humans&, Eric Zelikman sees an opportunity to shift the focus from pure IQ to building models with EQ. Sarah Guo is joined by Eric Zelikman, formerly of Stanford and xAI, who shares his journey from AI researcher to founder. Eric talks about the challenges of building human-centric AI, integrating long-term memory in models, and the importance of creating AI systems that work collaboratively with humans to unlock their full potential. Plus, Eric shares his views on abundance and what he's looking for in talent for humans&.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ericzelikman

Chapters:
00:00 – Eric Zelikman Introduction
00:29 – Eric's Early Interest in AI
01:29 – Challenges in AI and Automation
02:25 – Research Contributions
06:14 – Quiet-STaR and Scaling Up AI
08:14 – Current State of AI Models
15:23 – Human-Centric AI and Future Directions
22:08 – Eric's New Venture: humans&
35:33 – Recruitment Goals for humans&
36:58 – Conclusion

Sarah Guo (host) · Eric Zelikman (guest) · Elad Gil (host)
Oct 9, 2025 · 36m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:29

    Eric Zelikman Introduction

    1. SG

      (music plays) Hi, listeners. Welcome back to No Priors. Today, we're here with Eric Zelikman, previously of Stanford and xAI. We're gonna talk about the contributions he's made to research, reasoning, and scaling up RL, as well as his new company, humans&. Eric, thank you so much for doing this.

    2. EZ

      Thank you.

    3. SG

      You have had an amazing impact as a researcher, starting from just your time at Stanford.

  2. 0:29–1:29

    Eric’s Early Interest in AI

    1. SG

      I wanna hear about that, but first, background of how you got interested in machine learning at all.

    2. EZ

      I guess going back really far, I've been motivated by this question: you have all of these people out there with things they're really talented in, things they're really passionate about. There's just so much talent out there, and I've always been a little disappointed that so much of that talent doesn't get used, just because everyone has circumstances, situations where they can't actually pursue those things. And so for me, AI has-

    3. SG

      All of humanity's not living up to their full potential.

    4. EZ

      I mean-

    5. SG

      And so then you got into AI. (laughs)

    6. EZ

      (laughs) I mean, the thing I've always been excited about is: how do you actually build technology that frees people up to do the things they're passionate about?

    7. SG

      Mm-hmm.

    8. EZ

      How do you actually allow people to focus on those things?

  3. 1:29–2:25

    Challenges in AI and Automation

    1. EZ

      Originally, I thought of automation as the most natural way of doing that. You automate away the parts that people don't want to do, and that-

    2. SG

      Mm-hmm.

    3. EZ

      ... frees people up to do the things that they do want to do. But I've increasingly realized that it's actually pretty complex. If you want to empower people to do what they want to do, you have to really understand what people actually want to do. And building systems that understand people's goals and outcomes is actually really hard.

    4. SG

      Hmm.

    5. EZ

      Um, yeah.

    6. SG

      Did you have this human-centric perspective when you were choosing research problems to work on originally?

    7. EZ

      I guess at the very beginning, when I was choosing research problems, I was just interested in: how do you actually make these things half decent?

    8. SG

      Okay.

    9. EZ

      Like-

    10. SG

      So it was more increasing capability first.

  4. 2:25–6:14

    Research Contributions

    1. SG

    2. EZ

      Yeah.

    3. SG

      Yeah.

    4. EZ

      I think for me, when I looked at AI, or language models, back in 2021 or so, I was like, "These things aren't very smart. They can't do that much." And there was some early work around then that showed, for example, that you could use chain of thought to-

    5. SG

      Mm-hmm.

    6. EZ

      ... get models to answer more smartly. But it was still only a small step improvement at that time. The benefit was about as much as you can really get with just prompting. And so back then, I was thinking, "Okay, how do you actually make them half decent at solving these harder problems?"
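
      [Editor's note: a toy illustration of the chain-of-thought prompting Eric mentions. The question and worked answer are invented for illustration; no model API is assumed.]

          # Direct prompting: ask for the final answer immediately.
          direct_prompt = (
              "Q: A library has 4 shelves with 35 books each. 23 books are "
              "checked out. How many books remain? A:"
          )

          # Chain-of-thought prompting: include intermediate reasoning steps
          # before the final answer, which tends to help on harder questions.
          cot_prompt = (
              "Q: A library has 4 shelves with 35 books each. 23 books are "
              "checked out. How many books remain?\n"
              "A: Let's think step by step. 4 x 35 = 140 books in total. "
              "140 - 23 = 117 books remain. The answer is 117."
          )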

    7. SG

      We have everything from a researcher audience to a business audience here. Can you give a broad intuition for STaR?

    8. EZ

      I guess the intuition is: if you (laughs) have a model and it's able to solve these slightly harder questions by thinking about them, then what if you actually teach it? Like, "Hey, this solution that you came up with got you to the right answer. Good job." Or, if the model didn't, then you basically don't reward it. The original version of STaR... there wasn't really a baseline at the time. We compared it to REINFORCE, which is a popular algorithm in reinforcement learning, a very simple policy gradient method. But at the time, it was a very simple algorithm: you iteratively generate solutions. If the solutions get you to the right answer, you learn from them. If they don't, you don't. And then you just keep doing this as the model solves harder and harder problems, and then learns from harder and harder problems.
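
      [Editor's note: a minimal Python sketch of the loop Eric describes. The helpers generate_rationale, answers_match, and finetune are hypothetical stand-ins for a model API and a trainer, not code from the STaR paper.]

          def star(model, problems, n_outer_iterations=10):
              """Self-Taught Reasoner (STaR) outer loop, as described above."""
              for _ in range(n_outer_iterations):
                  kept = []
                  for question, gold_answer in problems:
                      # Sample a reasoning trace and a final answer.
                      rationale, answer = generate_rationale(model, question)
                      # Keep traces that reach the right answer; the rest are
                      # simply discarded (no negative reward in original STaR).
                      if answers_match(answer, gold_answer):
                          kept.append((question, rationale, answer))
                  # Fine-tune on the answer-verified traces, then repeat: the
                  # improved model can now solve harder problems.
                  model = finetune(model, kept)
              return model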

    9. SG

      At what point in the research, if at all, were you surprised by how well it worked? Or did you have some intuition for this being something scalable?

    10. EZ

      There was one experiment that I remember doing, though this was quite a while (laughs) ago at this point, where we looked at, I think it was, n-digit addition or multiplication. Sorry, it's been a second.

    11. SG

      Mm-hmm.

    12. EZ

      And one thing that was really interesting: back then, this was a task that was considered hard for-

    13. SG

      Yeah.

    14. EZ

      ... language models.

    15. SG

      Of course. It was considered one of the examples of why they were still so stupid.

    16. EZ

      Yeah.

    17. SG

      Yeah. Like, yeah.

    18. EZ

      Exactly. And one of the really interesting things for me was that as you trained for more and more iterations, the number of digits it was able to handle kept increasing.

    19. SG

      Okay, cool.

    20. EZ

      And I think this was one of those big surprises for me. Like, oh, wow, there's no obvious plateau here.

    21. SG

      And did you go directly from that to generally this should scale?

    22. EZ

      I think I was generally interested in that, yeah. There were a few things, though. We observed that there was a bunch of the data that the model wasn't learning from, and so we proposed another variant where we asked, "Oh, what if you actually take the ones where it fails and basically ask it to reason about why it should have gotten it right? And then you train as if it got it right?"

    23. SG

      Mm-hmm.

    24. EZ

      And this version was a way of extending to the parts of the data that it otherwise couldn't reach.

    25. SG

      Mm-hmm.

    26. EZ

      So if you only train it on the positive examples, then you end up in a kind of local minimum where there's just no more data that it can actually solve. And so back then, we were like, "What if we just show it the problems that it didn't solve and try to teach it from those?" But another thing that other work has done since then is: what if you just sample a lot? And that also seems to work in those works.
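
      [Editor's note: a hypothetical sketch of the rationalization variant Eric describes, reusing the stand-in helpers from the STaR sketch above.]

          def rationalize_failures(model, failed_problems):
              """Hint the correct answer on unsolved problems, keep the
              model's justification, and store it with the hint stripped."""
              extra = []
              for question, gold_answer in failed_problems:
                  hinted = f"{question}\n(Hint: the answer is {gold_answer}.)"
                  rationale, answer = generate_rationale(model, hinted)
                  # Keep only justifications that land on the gold answer,
                  # saved as if the model had solved the bare question.
                  if answers_match(answer, gold_answer):
                      extra.append((question, rationale, answer))
              return extra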

    27. SG

      STaR has become a broadly used part of the reasoning paradigm since you published.

  5. 6:14–8:14

    Quiet-STaR and Scaling Up AI

    1. SG

      Can you also describe... I think this was sort of your last published work, Q*?

    2. EZ

      Okay. So Quiet-STaR was kind of the last thing that I did back at Stanford, and it was really fun. We showed a few things that were kind of cool. One of the main goals of that paper was to show that you could actually scale this up to pre-training scale-

    3. SG

      Yeah.

    4. EZ

      ... by using basically pre-training-style data. Now there's a bunch of works that have come out recently around RL pre-training and things like that, and that's in some ways similar to what we showed in the Quiet-STaR work. Instead of question-answer pairs, if you just have these arbitrary chunks of text, for example, and you're trying to predict what's going to come next, which is the standard language modeling objective, can you actually get models that more generally learn to reason? One of the cooler things that I think is overlooked about the original Quiet-STaR paper is that we showed a bunch of key improvements to the STaR paper that were necessary to actually do this kind of thing. So that was, for example, showing that it's really valuable for this algorithm to be online.

    5. SG

      Mm-hmm.

    6. EZ

      Showing that it's really valuable to have a baseline, where for harder problems you learn more, and for easier problems you don't learn quite as much. And I think there were a bunch of nuggets in there that, even at the time, I don't think I fully thought of as, "Oh, wow, that's actually a cool improvement over the original thing."
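
      [Editor's note: a toy, runnable illustration of the baseline idea Eric mentions, where harder problems carry more learning signal. Subtracting a per-problem mean reward is one standard way to get this effect; the exact mechanism in Quiet-STaR differs.]

          import numpy as np

          def advantages(rewards_per_problem):
              """Weight each sampled solution by (reward - per-problem mean)."""
              out = []
              for rewards in rewards_per_problem:       # one list per problem
                  r = np.asarray(rewards, dtype=float)  # 1.0 = solved, 0.0 = not
                  out.append(r - r.mean())              # subtract the baseline
              return out

          # An easy problem solved every time contributes no gradient signal:
          #   advantages([[1, 1, 1, 1]])  -> [array([0., 0., 0., 0.])]
          # A hard problem solved once in four samples contributes a lot:
          #   advantages([[0, 0, 0, 1]])  -> [array([-0.25, -0.25, -0.25, 0.75])]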

    7. SG

      So you ended up going to Grok... sorry, xAI, for several years.

    8. EZ

      Yeah.

    9. SG

      And you worked on a bunch of different paradigms: pre-training data for Grok-2, then overall the reasoning recipe for Grok-3. I'm sure I'm missing things, but... tool use and agentic infrastructure for

  6. 8:14–15:23

    Current State of AI Models

    1. SG

      Grok-4. If you level-set us today: how smart are models? They can obviously do n-digit arithmetic at this point.

    2. EZ

      I guess in terms of IQ stuff, if you're able to pose the problem very well, like some very advanced physics problem or math problem, I'd say they're reasonably smart. I think a lot of the failures that people see-

    3. SG

      Give me a human comparison. What is reasonably smart?

    4. EZ

      I think it's hard to compare directly because it's very jagged.

    5. SG

      Yeah.

    6. EZ

      It's true that, for example, some of the HLE questions that these models are able to solve are genuinely non-trivial for actual PhD researchers. I'm not saying they're open problems or anything, but they are pretty non-trivial.

    7. SG

      Hard. Yeah.

    8. EZ

      Also, a lot of them are... I spend a lot of time looking at the HLE questions, and one interesting category of those-

    9. SG

      Sorry, Humanity's Last Exam-

    10. EZ

      Sorry. (laughs)

    11. SG

      ... for anybody who isn't looking at these evals.

    12. EZ

      Sorry. (laughs)

    13. SG

      No, great.

    14. EZ

      Yeah, so looking at these Humanity's Last Exam questions, one category that is actually quite big is trick questions: ones where, if you're familiar with them, you'll see they're trying to get you to assume something when, if you think more carefully about the problem, that assumption doesn't hold. There turn out to be a bunch of those kinds of problems. So I think they're pretty smart, but they're also more tripped up by some of these tricky things. But I think one of the core things is that they're not smart emotionally. They're not smart on the level of actually understanding what people care about, or how to actually help people accomplish the things that they care about.

    15. SG

      I wanna talk about this and your next mission, but just on this topic of jagged intelligence within the IQ domain, which I think almost everybody in the industry has been focused on until now: what would you recommend for people who are not researchers to develop some sort of intuition for that surface? Because that seems very important to making these models useful.

    16. EZ

      Yeah. One thing that I think is really important to keep in mind is that the more context you can give the current generation of models, the better off you are.

    17. SG

      Mm-hmm.

    18. EZ

      Their answers are super sensitive to whatever additional information you can give them. I think this is a really important thing. I would generally say existing models are particularly good at handling questions that are easy to answer in a closed form: if there's a simple numerical answer to what you're asking, or a simple way of choosing from a set of things, that makes it easier for the model, though obviously it all depends. If you can imagine it being easy to check your answer-

    19. SG

      Mm-hmm.

    20. EZ

      ... that actually, I think makes it easier for the models.

    21. SG

      What do you think is the dominant explanation for attempts to use models in more verifiable domains, like code, still failing at sophisticated tasks? Is it just that the wrong context has been fed to them? Is it that the context window is simply not large enough to support the scratch pad and continual testing? In those domains, what is the biggest challenge?

    22. EZ

      Part of it is, I think, a balance. When people want to give users these models, it's actually important that they're not annoyingly slow. And so I think there are actually a number of problems where, if you gave the models more time, they would be able to answer-

    23. SG

      Mm-hmm.

    24. EZ

      ... better. But, for example, in the coding context, you have to be reasonably responsive. At least, it depends on the kind of setup, right? If you look at products like OpenAI's Codex-

    25. SG

      Mm-hmm.

    26. EZ

      ... which is this longer-running background thing, versus Cursor, which is more interactive. You have a bit more luxury with those background approaches to tackle harder problems, I'd say. It's a tricky question. A lot depends on how far the distribution of what you're asking is from the distribution that the models were actually trained on.

    27. SG

      Mm-hmm.

    28. EZ

      So if you happen to be asking a problem that's very similar to the kinds of problems it's seen before, then it'll do great. And if you're asking a problem that's very out of domain, it won't. So to some extent, this question is hard to answer concretely-

    29. SG

      Mm-hmm.

    30. EZ

      ... unless you basically know what the RL data for a lot of these specific tasks is.

  7. 15:23–22:08

    Human-Centric AI and Future Directions

    1. EZ

      capabilities axis. I do think that as you start thinking about some of these new axes of scaling, it's very natural to realize there are ways to pursue them that incorporate people, and ways that leave people out more and more.

    2. SG

      Mm-hmm.

    3. EZ

      And being very mindful of: oh, hey, I'm designing this new algorithm and it's going to scale this model's IQ by X amount.

    4. SG

      Mm-hmm.

    5. EZ

      To effectively keep people in the loop is actually a very active decision.

    6. SG

      Mm-hmm.

    7. EZ

      And so I think in general, if you're thinking about these things, that's important.

    8. SG

      Wouldn't it be fair to claim that the instinct of many labs is to try to get people out of the loop as much as possible from a scaling perspective? Because that's very messy, right? If I want to recruit people to, for example, collect complex reasoning traces from them on tasks that are not in distribution for me yet, that is not as simple for an organization to execute on as more rollouts, right?

    9. EZ

      Yeah. For sure.

    10. SG

      Um, and so why is that important at all from a capabilities perspective?

    11. EZ

      Yeah.

    12. SG

      Maybe that's a good transition to, like, what are you doing? Yeah.

    13. EZ

      Yeah. The main thing is just that as these models expand in terms of the horizon they're automating... the recent-ish IMO results are a good example of this. You have these models that go on for hours of reasoning without any kind of human intervention.

    14. SG

      Mm-hmm.

    15. EZ

      And this has increasingly been a measure of success, I'd say, for these labs. So for example, there's this METR-

    16. SG

      Yeah.

    17. EZ

      ... benchmark that everyone likes to share whenever there's a new model. And it's like, oh, we went from being able to have these models complete two-hour tasks autonomously without human intervention to 2.5-hour tasks without human intervention. And obviously, there are questions about what those numbers actually mean and whether we should take them at face value. But regardless, this has been the metric that people are looking at more and more to measure progress. But as we get these models that increasingly remove people from the interaction, you end up with people basically having less say in the things that get built.

    18. SG

      Mm-hmm.

    19. EZ

      I think if you have a model that goes off and does its own thing for eight hours and comes back to you with something that's somewhat there, that's a weird regime where people probably feel less real agency over the things they're building. And I also anticipate that people will feel like they don't really understand the things that are being built.

    20. SG

      Mm-hmm.

    21. EZ

      You know, I think this is-

    22. SG

      That's already true.

    23. EZ

      Yeah, I think it's already true.

    24. SG

      20,000 lines of generated code looks good to me. (laughs)

    25. EZ

      Yeah, it's just like you make these PRs and they're 100,000 lines of, you know...

    26. SG

      Yeah.

    27. EZ

      And I think in general, this is kind of going to be part of the trend.

    28. SG

      So do you think it's important to have humans in the loop of producing the output or the reasoning because the ceiling is higher with humans in the loop? Because it's more efficient, since we can error-correct when models are off path? Or philosophically, because people want that? Or some combination of all three?

    29. EZ

      Yeah, I think it's probably some combination. Another thing I think about is that the most natural thing to do, as you automate away the existing set of tasks, is to look at world GDP and carve out the parts that are easiest to replace with these models.

    30. SG

      Mm-hmm.

  8. 22:08–35:33

    Eric’s New Venture: humans&

    1. EZ

    2. SG

      So that brings us to: you're starting a new company, humans&. I remember being actually quite fundamentally surprised, given all of your work on IQ and reasoning and coding and scale, that you were interested in essentially EQ. And tell me if this is a wrong characterization: the emotional or interactive capabilities of models to date have really shown up only in things like character or companionship tools, and you thought of EQ as also enablement from a productivity perspective. Right? So tell me about where this thread came from.

    3. EZ

      Yeah. I guess I've been thinking about this kind of stuff for some time now. Even back in my PhD, one of my less well-known works showed that you can train language models to simulate different kinds of students.

    4. SG

      Right.

    5. EZ

      Uh, and...

    6. SG

      For tests.

    7. EZ

      Yeah, yeah.

    8. SG

      Yeah.

    9. EZ

      And by simulating students, you can actually design better tests for those students. And that was a really cool finding: hey, if you have models that are really good at modeling people, you can actually design systems that are better for people. This was something I found really cool. And as we moved towards the current capabilities frontier, it became more and more obvious that we have these incredibly smart models that are capable of so much, but they're not used for anywhere near what they're capable of. The role that they play in people's lives is a lot less deep, a lot less positive than it could be. And I spent a lot of time thinking about: okay, why is that? Why are these models not, like I said, more deeply, positively integrated into people's lives? And it seemed like a really big part of it is that fundamentally these models don't really understand people. They don't understand people's goals. Part of it, I would say, is the general training paradigm that the field is in. It's very single-task focused, or task-centric.

    10. SG

      Mm-hmm. It's ludicrous that all the benchmarks are still oriented this way. Yeah.

    11. EZ

      Yeah. I mean, like- like-

    12. SG

      Or most of them.

    13. EZ

      I mean, there are very few benchmarks out there that actually try to consider: what if you actually have a person interacting with this model? At best, you have some multi-turn benchmarks that try to simulate how an environment would respond to different inputs. But even that is still far from considering: hey, if you have this model that interacts with a person for some amount of time, how does it actually affect that person's life? It's really remarkable that the field is so stuck in this task-centric regime. But it makes a lot of sense. One thing I was told by some folks at Google is that one of the reasons is that it's actually very useful for credit assignment. Being able to have benchmarks that are very easy to quantify and very easy to relate to some immediate thing means you can say, "Oh yeah, this team did 2% better than this team, so they deserve all of the resources."

    14. SG

      Hmm.

    15. EZ

      Or, you know, "This team, like, improved the benchmark by, like, 10%, while this team improved it by 5%. So, you know, let's, let's allocate accordingly." And I think in general, like, that's, that's part of it. I think another part of it is, like, kind of more aligned with the easiest ways to train these models. Y- y- it's, it's not easy to, you know, have these RL environments and stuff. You have lots of these companies popping up, obviously, that are trying to sell, you know, environments to different people. But, uh-

    16. SG

      And the most popular are, of course, coding and computer use.

    17. EZ

      Yeah.

    18. SG

      Um, rather than anything that requires simulating people.

    19. EZ

      Yeah. It's not that surprising that we're in this current regime. But...

    20. SG

      So what do models need to know about people? Or what capabilities are they either missing, or have not had elicited from them?

    21. EZ

      The most fundamental thing is that the models don't understand the long-term implications of the things that they do and say. When you treat every turn of a conversation as its own game-

    22. SG

      Mm-hmm.

    23. EZ

      ... and you basically think of it as: okay, you had this interaction, you're done, you need to make sure that this one response has all of the possible answers, all of the possible content. You don't ever ask questions, you don't ever try to clarify things, you don't really tend to express uncertainty, you don't tend to be proactive, you don't tend to think about the long term. You see a lot of even single-turn side effects of this kind of regime, and most of them are treated as their own problems to solve. You see issues that people highlight around sycophancy. You see issues that were in the news recently, like the psychosis stuff. There are a lot of these harmful effects that you get if you think about things in this very single-task or task-centric way. But if you have models that actually consider the long-term implications of: oh, hey, if I tell this person that a company that sells gloves for catching ice cream sounds like a good business idea, they might actually go-

    24. SG

      (laughs)

    25. EZ

      ... and they might actually build that business, and they might realize that it was not actually a good business idea. Having

    26. EG

      (sighs)

    27. EZ

      ... a model that can kind of roll out the long-term implications of the things it said-

    28. SG

      And then they won't trust me anymore, and then they won't pay for my compute.

    29. EZ

      Exactly.

    30. SG

      And then (laughs) it's all over.

  9. 35:33–36:58

    Recruitment Goals for humans&

    1. SG

      Okay. Super unique mission, amazing research work, you're hiring an early team, getting a lot of compute. Who are you looking for on the recruiting side?

    2. EZ

      One thing that I think was actually probably good at my previous company is thinking of everyone, to some extent, as engineers. I'm looking for really strong infra folks who can build stuff. I'm looking for really strong researchers who can build stuff. I'm looking for really strong product folks who can build stuff. On the research side, I'm looking for people who have thought a lot about users, who've thought a lot about memory. On the infra side, people who have thought about building distributed systems and really fast inference, people who've been there to scale really big projects up. On the product side, people who are really creative about new modes of interaction, people who really deeply care about building beautiful, tasteful products.

    3. SG

      Awesome. Thanks so much, Eric.

    4. EZ

      Thank you so much.

    5. SG

      Congrats on the new company.

    6. EZ

      Thank you so much.

    7. SG

      (instrumental music) Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 36:58
