No Priors Ep. 135 | With Humans& Founder Eric Zelikman
- 0:00 – 0:29
Eric Zelikman Introduction
- SGSarah Guo
(music plays) Hi, listeners. Welcome back to No Priors. Today, we're here with Eric Zelikman, previously of Stanford and xAI. We're gonna talk about the contributions he's made to research, reasoning, and scaling up RL, as well as his new company, Humans&. Eric, thank you so much for doing this.
- EZEric Zelikman
Thank you.
- SGSarah Guo
You have had an amazing impact as a researcher, including starting from just your time at Stanford.
- 0:29 – 1:29
Eric’s Early Interest in AI
- SGSarah Guo
I wanna hear about that, but first, background of how you got interested in machine learning at all.
- EZEric Zelikman
I- I guess going back, like, really far, I- I've- I've been motivated by this question of, like, you have, you know, all of these people out there who have, like, all of these things that they're really talented in, all of these things that people are really passionate about, that you have, like, so much, like... You know, there- there's just so much talent out there and I've always been, like, a little bit disappointed that, like, you know, like, so much of that talent doesn't get used just because everyone has, like, circumstances and, like, has, like, these, you know, situations where, you know, they can't actually pursue those things. And so for me, AI has-
- SGSarah Guo
All of humanity's not living up to their full potential.
- EZEric Zelikman
I mean-
- SGSarah Guo
And so then you got into AI. (laughs)
- EZEric Zelikman
(laughs) I mean, it's a... The- the thing I've always been excited about is, like, how do you actually build this technology that frees people up to kind of do the things that they are passionate about?
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Like, how do you basically, you know, s- a- yeah, allow people to actually focus on those things?
- 1:29 – 2:25
Challenges in AI and Automation
- EZEric Zelikman
You know, originally, I thought of automation as kind of, like, the most natural way of doing that. Like, you- you automate away the parts that, like, people kind of don't want to do, and that-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... you know, frees up people to do the things that they do want to do. But I guess I realized, like, increasingly that that's, like, it's actually, like, pretty complex. You actually have to understand, if you want to empower people to do what they want to do, you have to really understand what people actually want to do. Um, and building systems that understand kind of people's goals and outcomes is actually really hard.
- SGSarah Guo
Hmm.
- EZEric Zelikman
Um, yeah.
- SGSarah Guo
Did you have, like, um, this human-centric perspective when you were choosing research problems to work on originally?
- EZEric Zelikman
I- I guess, like, at the very beginning. I was just in- like, when I was choosing research problems, I was just interested in, like, how do you actually make these things half decent?
- SGSarah Guo
Okay.
- EZEric Zelikman
Like-
- SGSarah Guo
So it's more increased capability ev- first.
- 2:25 – 6:14
Research Contributions
- EZEric Zelikman
Yeah.
- SGSarah Guo
Yeah.
- EZEric Zelikman
I think, I think for me, like, you know, when I looked at, like, AI, like, or, you know, language models back in, like, 2021 or whatever, you know, I was like, "Th- these things aren't very smart. They can't do that much." And- and there- there was some, like, early work around there, like, um, that showed that, like for example, you could use, like, chain of thought to, like, you know-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... get models to answer more smartly. But it was still, like, only, like, a small step improvement at that time. Like, there was still the... you know, the- the benefit of that was, you know, as much as you can really get with just prompting. And so back then, I was, like, thinking about, "Okay, how do you actually make them, like, half decent at actually solving these harder problems?"
- SGSarah Guo
Can you give a broad... Like, we have, um, everything from researcher audience to business person audience here. Can you give a broad intuition for STaR?
- EZEric Zelikman
I guess the- the intuition is if you (laughs) have a model and it's able to solve these, like, basic... like, these, like, slightly harder questions by thinking about them, then what if you actually teach it? Like, "Hey, this solution that you came up with, that got you to the right answer. Good job." Or, you know, if you... or if the model didn't, then you basically, like, uh, don't reward it. I guess the original version of STaR actually had, like... Or yeah, uh, there were, like, no n- there wasn't a baseline at the time. Uh, we compared it to, uh, REINFORCE, which is this, like, m- popular algorithm in, I guess, reinforcement learning, like, very simple, like, policy gradient thing. But yeah, I guess, you know, at the time, it was, like, a very simple algorithm. Just, you know, you, uh, iteratively generate solutions. If the solutions get you to the right answer, you learn from them. If they don't, you don't. And then you just kind of keep doing this as the model solves harder and harder problems and then learns from harder and harder problems.
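The loop Eric describes can be sketched in a few lines of Python. This is a hedged approximation, not the paper's code: `generate`, `check`, and `train` are hypothetical stand-ins for sampling a rationale, verifying the final answer, and fine-tuning, and the toy "model" below simply gets better at problems near ones it has trained on.

```python
def star(generate, check, train, problems, iterations=3):
    """STaR-style loop (a sketch): sample a rationale per problem, keep only
    those whose final answer checks out, and fine-tune on the keepers."""
    kept = []
    for _ in range(iterations):
        kept = []
        for p in problems:
            rationale, answer = generate(p)
            if check(p, answer):          # outcome filter: wrong traces are simply dropped
                kept.append((p, rationale, answer))
        train(kept)                       # stand-in for fine-tuning on correct traces
    return kept

# Toy demo (illustrative, not the paper's setup): the "model" can only solve
# problems near ones it has already trained on, so each iteration unlocks more.
learned = set()

def generate(p):
    guess = p * 2 if (p in learned or p < 3) else -1
    return f"double {p}", guess

def check(p, answer):
    return answer == p * 2

def train(examples):
    for p, _, _ in examples:
        learned.update({p, p + 1})        # crude proxy: training helps nearby problems

kept = star(generate, check, train, problems=list(range(8)))
```

The point of the toy is the shape of the curve: each round of training on self-generated correct solutions expands the set of problems the model can solve in the next round, which is the "harder and harder problems" dynamic Eric mentions.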
- SGSarah Guo
Did you, um... W- at what point in the research, uh, if at all, were you surprised by how well it worked or did you have some intuition for this being, like, something scalable?
- EZEric Zelikman
The- there was one experiment that I remember doing, though this was quite a while (laughs) ago at this point, um, but we looked at the... I think it was, like, n-digit, like, addition or multiplication. Sorry, it's been a second.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Uh, and one thing that was really interesting was that this... Back then, this was, like, a task that was considered, like, hard for-
- SGSarah Guo
Yeah.
- EZEric Zelikman
... language models.
- SGSarah Guo
Of course. It was considered, like, one of the examples of why they were still so stupid.
- EZEric Zelikman
Yeah.
- SGSarah Guo
Yeah. Like, yeah.
- EZEric Zelikman
Exactly. And- and I was like, "Okay." And one- one of the really interesting things for me was that as you actually trained for more and more iterations, the number of digits that it was actually able to do kept increasing.
- SGSarah Guo
Okay, cool.
- EZEric Zelikman
And I think that th- this was, like, one of those big surprises for me. Like, like, oh, wow. Like, there- there's no obvious plateau here.
- SGSarah Guo
And did you go directly from that to generally this should scale?
- EZEric Zelikman
I think I was generally, like, interested in like... Yeah. I- I think there were a few things though. Like, uh, there was one part of it that we introduced to kind of... We- we observed that there was a bunch of the data that the model wasn't learning from and so we proposed another variant of this where we actually, uh, were like, "Oh, what if you actually take the ones where it fails and you, um, basically, like, ask it to reason about, like, why it should have gotten it right? And then you train as if it got it right?"
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Um, and this version, uh, was kind of a way of extending beyond the kind of, the parts of the data that it couldn't- that it couldn't see.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
So if you only train it on, like, the positive examples, then you end up in this kind of, like, potential minimum where there's just no more data that it can actually solve. And so back then, we were like, "What if we just, uh, s- show it the problems that it didn't solve and try to teach it from those?" But I guess an- another thing that other work has done since then is, oh, what if you just sample a lot? Uh, and that- that also seems to work, uh, in those works.
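The "rationalization" variant Eric describes for escaping that plateau can be sketched the same way. Again a sketch under assumptions: `generate`, `rationalize`, `check`, and `train` are hypothetical helpers, where `rationalize` stands in for prompting the model with the true answer as a hint and asking it to justify it.

```python
def star_with_rationalization(generate, rationalize, check, train, problems):
    """One iteration of the rationalization variant (names are illustrative):
    failures get a second pass where the model sees the true answer and is
    asked to justify it, and that trace is trained on as if self-derived."""
    kept = []
    for p in problems:
        rationale, answer = generate(p)
        if check(p, answer):
            kept.append((p, rationale))           # genuinely solved
        else:
            kept.append((p, rationalize(p)))      # answer given as a hint
    train(kept)
    return kept

# Toy demo: the base "model" only solves even inputs; rationalization
# still yields a training trace for the odd ones.
def generate(p):
    return (f"{p} is even, halve it: {p // 2}", p // 2) if p % 2 == 0 else ("?", None)

def check(p, answer):
    return answer == p // 2

def rationalize(p):
    return f"given the answer {p // 2}, note {p} // 2 = {p // 2}"

trained = []
kept = star_with_rationalization(generate, rationalize, check, trained.extend, problems=[1, 2, 3, 4])
```

The contrast with the plain loop is that no problem is wasted: every problem yields a training example, which is how the variant extends past the data the model could not otherwise solve.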
- SGSarah Guo
STaR has become a broadly used part of the reasoning paradigm since you published.
- 6:14 – 8:14
Quiet-STaR and Scaling Up AI
- SGSarah Guo
Uh, can you also describe... I think this was, like, sort of your last published work, like, Quiet-STaR?
- EZEric Zelikman
Okay. Uh, so, so Quiet-STaR was, um, kind of the... yeah, the last thing that I did back at Stanford, and it was really fun. I guess we- we showed a few things that were kind of cool. One of the main goals of that paper was to show that you could actually scale this up to, like, pre-training scale-
- SGSarah Guo
Yeah.
- EZEric Zelikman
... by using, like, basically pre-training style data. I guess now there's, like, a bunch of these works that have come out recently around, like, you know, RL pre-training and stuff like that. And that- that's, you know... I- I guess i- i- in some ways similar to some of the... what we showed in the Quiet-STaR work. Instead of having question answer, if you actually just have, like, um, you know, these arbitrary kind of, like, chunk- chunks of text, for example, and you're trying to predict what's going to come next, which is, like, the standard language modeling objective, um, can you actually get models that more generally learn to reason? One of the kind of cooler things that I think is kind of overlooked about the original Quiet-STaR paper is we showed a bunch of, like, uh, kind of key improvements to the STaR paper that were necessary to actually do this kind of thing. So that was, for example, showing that it's really valuable for this algorithm to be online.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Um, showing that it's really valuable for... uh, to have a baseline where you, like, you know, the harder... f- for harder problems, you learn more. Uh, for easier problems, you, like, you don't learn quite as much. And I think that there were a bunch of, like, nuggets in there that, uh, even at the time, I don't think I fully, you know, thought of as, like, "Oh, wow, that's actually, like, a cool improvement over the original thing."
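The baseline Eric describes here — learn more from harder problems, less from easier ones — can be sketched as a per-problem advantage. This is an illustrative approximation of the idea, not the paper's implementation; `rewards_per_problem` is a hypothetical map from each problem to the 0/1 rewards of its sampled traces.

```python
def advantage_weights(rewards_per_problem):
    """Sketch of a per-problem baseline: weight each sampled trace by
    (reward - mean reward for that problem), so easy, near-saturated
    problems contribute almost nothing and hard ones dominate the update."""
    weights = {}
    for problem, rewards in rewards_per_problem.items():
        baseline = sum(rewards) / len(rewards)    # mean reward as a difficulty proxy
        weights[problem] = [r - baseline for r in rewards]
    return weights

# Toy demo: the always-solved problem gets zero weight everywhere, while the
# hard problem's single success gets a large positive weight.
w = advantage_weights({"easy": [1, 1, 1], "hard": [1, 0, 0]})
```

Subtracting the per-problem mean is what makes the update self-throttling: once a problem is reliably solved, its traces stop moving the model, and gradient signal concentrates on the frontier of difficulty.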
- SGSarah Guo
So you ended up going to Grok for several years, and you, uh... sorry, xAI for several years.
- EZEric Zelikman
Yeah.
- SGSarah Guo
And you worked on a bunch of different paradigms, so pre-training data for Grok-2 and then overall the reasoning recipe for Grok-3. I'm sure I'm missing things, but... uh, tool use and agentic infrastructure for
- 8:14 – 15:23
Current State of AI Models
- SGSarah Guo
Grok-4. I- I guess when you w- if you s- level set us today, like, how smart are models? They can obviously do n-digit, um, arithmetic at this point.
- EZEric Zelikman
I guess in terms of, like, IQ stuff, I'd say, like, there's a lot of th- they... and if you're able to pose the problem, like, very well, um, like some very advanced, like, physics problem or math problem, I would say they're... they're- they're reasonably smart. I think, like, a lot of the failures that people see-
- SGSarah Guo
Give me a human comparison. What is reasonably smart?
- EZEric Zelikman
I think, I think it's hard to compare directly because it's very jagged.
- SGSarah Guo
Yeah.
- EZEric Zelikman
Like, like, it's- it's true that, like, some of these... for example, some of the HLE questions that these models are able to solve are genuinely things that are, like, non-trivial for, like, actual, like, PhD researchers. I'm not saying they're, like, o- like, open problems or anything, uh, but they are, like, pretty non-trivial.
- SGSarah Guo
Hard. Yeah.
- EZEric Zelikman
Also, a lot of them are, like... you know, one, one interesting category of, like, these... I spend a lot of time looking at kind of the HLE questions. One interesting category of that-
- SGSarah Guo
Sorry, Humanity's Last Exam-
- EZEric Zelikman
Sorry. (laughs)
- SGSarah Guo
... for anybody who isn't looking at these evals.
- EZEric Zelikman
Sorry. (laughs)
- SGSarah Guo
No, great.
- EZEric Zelikman
Um, yeah, so what... yeah, looking at these Humanity's Last Ex- uh, Exam questions, I kind of, um... uh, one, one kind of category that is, like, actually quite big are these, like, tricker- trick questions that require, you know, basically people... like, y- if you're familiar with it, you'll be like, "Oh, they're- they're trying to get you to, like, assume something." But actually, like, if you think more carefully about this problem, that assumption doesn't hold. Um, and there's... this turns out to be, like, a bunch of those kinds of problems. So I think it's a... it's... they're pretty smart, but also they're more, I think, tripped up by some of these, like, tricky things. Um, but also they don't really... I think one of the core things is that they're not smart, like, emotionally or, like... they're not smart on the level of like actually understanding kind of what people care about or kind of, like, how to actually, like, help people accomplish the things that they care about.
- SGSarah Guo
I wanna talk about this and your next mission, but just on this topic of- of even jagged intelligence within, like, the IQ domain, which I think every- almost everybody in the industry has been, uh, focused on un- until now, what would you recommend for people who are not researchers to develop some sort of intuition for that surface? Because that seems very important to making them useful.
- EZEric Zelikman
Yeah. I guess one thing that's... that I think is, like, really important to keep in mind is that, like, the more kind of context you can give the current generation of models, the better you kind of are... uh, the- the better off you are.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Like, their answers are, uh, super sensitive to, like, you know, whatever additional information you can give them. Yeah, I th- I think this is, like, a really important thing. I would generally say, like, existing models are particularly good at handling questions that are, like, easy to answer in kind of, like, a closed form, like, um... if- if there's like a, you know, a simple numerical answer to what you're asking or, like, a simple, like, way of choosing from a set of things, this is s- something that these models actually, like... and obviously it's, like, all dependent, but this is something that makes it easier for the model. If you can imagine it being easy to check your answer-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... that actually, I think makes it easier for the models.
- SGSarah Guo
What, um, do you think is the most dominant explanation for attempts to use models in very verifi- more verifiable domains, like code, still failing at sophisticated tasks? Is it just, like, the wrong context has been fed to them? Is it, um, context window is simply not large enough to support the, like, scratch pad and continual testing? Like, what... why... in those domains, what is the biggest challenge?
- EZEric Zelikman
Part of it is there's, I think, a balance. When people kind of want to give users these models, it's actually important that they're not annoyingly slow. And so I think there's actually, like, a number of problems where, like, if you gave the models more time, you know, they would actually be able to answer-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... better. But, for example, in the kind of coding context-... you kind of have to be reasonably responsive, at least, it depend- it depends on the kind of setup, right? Like, if you look at products like, you know, OpenAI's Codex-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... um, which, you know, is kind of this longer running background thing, uh, versus, like, uh, Cursor, which is, like, you know, more interactive. You- you have a bit more luxury with those, uh, more background approaches, uh, to tackle harder problems, I'd say. Yeah, I think- I think it- it's a- it's a tricky question. Uh, a lot of things depend on how far the distribution of what you're- what- what you're asking is from the distribution that the models were actually trained with.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Um, so, you know, if you happen to be asking a problem that's very similar to the kind of problems that it's seen before, then, you know, it'll do great. Uh, and if you're asking a problem that's like very, yeah, out of domain, it- it... So like, to some extent, this question is kind of hard to answer concretely-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... without... Unless you know, like, basically what the da- what the RL data for a lot of these, you know, specific tasks is.
- 15:23 – 22:08
Human-Centric AI and Future Directions
- EZEric Zelikman
capabilities axis. I- I do think that one... As you start thinking about some of these new kind of axes of scaling, it's actually very natural to realize that like there are ways to do them in ways that incorporate people and there's ways to do them in ways that kind of leave people out more and more.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
And being very mindful of, oh, hey, I'm designing this new algorithm and it's going to scale IQ, you know, of this model by X amount.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
To effectively keep people in the loop is actually, like, a very active decision.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
Um, and so, you know, I think in general if you're thinking about these things, that's important.
- SGSarah Guo
Wouldn't it be fair to claim that the instinct of many labs is to, like, try to get people out of the loop as much as possible from a scaling perspective? Because that's very messy, right? If I want to recruit people to, for example, take complex reasoning traces off them in tasks that are not in distribution for me yet, um, that is not as simple to execute on for an organization, uh, as like more roll-outs, right?
- EZEric Zelikman
Yeah. For sure.
- SGSarah Guo
Um, and so why is that important at all from a capabilities perspective?
- EZEric Zelikman
Yeah.
- SGSarah Guo
Maybe that's a good transition to, like, what are you doing? Yeah.
- EZEric Zelikman
Yeah. I'd say that it, the- the main thing is just that, like, as you kind of have these models that, you know, expand in terms of like, uh, the horizon that they're automating, you know, we have these models, the- the recent like or recent-ish IMO results are like a kind of a good example of this. You have these models that go on for like, you know, hours of, you know, reasoning, um, without any kind of human intervention.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
And this has kind of been an increasing, uh, measure of success, I would say, for these labs. So for example, you know, there's this METR, like meter-
- SGSarah Guo
Yeah.
- EZEric Zelikman
... like, uh, benchmark that everyone likes to share whenever there's a new model. Uh, and it's like, oh, we went from being able to have these models work for two, like complete two-hour tasks autonomously without human intervention to 2.5-hour tasks, uh, without human intervention. And obviously, there's like questions of like what do those numbers actually mean, um, and how, like, should we take them like kind of at face value? But regardless, this has kind of been, like, the metric that, you know, people are looking at more and more, uh, to measure progress. But, you know, as we kind of, uh, get these models that increasingly, you know, remove people from the interaction, you end up with basically people having less say in kind of the things that get built.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
You end up with like... You know, I think if you have a model that goes off and does its own thing for like eight hours and comes back to you with like something that like is somewhat there, um, I think this is like a weird regime where, like, people probably feel less, like, real agency over the things that they're building. And I think also... I kind of anticipate that people will feel like they don't really understand the things that are being built.
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
You know, I think this is-
- SGSarah Guo
That's already true.
- EZEric Zelikman
Yeah, I think it's already true.
- SGSarah Guo
20,000 lines of generated code looks good to me. (laughs)
- EZEric Zelikman
Yeah, it's just like you make these PRs and they're like 100,000 lines of like, you know, like...
- SGSarah Guo
Yeah.
- EZEric Zelikman
And I think in general, this is kind of going to be part of the trend.
- SGSarah Guo
So do you think that it's important to have humans in the loop of, you know, producing the output or the reasoning because the ceiling is higher with humans who are in the loop, because it is more efficient because we can error correct when models are off path, or philosophically because people want that, or like some combination of all three?
- EZEric Zelikman
Yeah, I think it's probably some combination. I think another thing that I kind of think about is like, you know, the most natural thing to do as you kind of automate away the existing set of tasks is, you know, you kind of look at the world GDP. You, like, carve out the parts that are, like, you know, most easy to replace with these models.
- SGSarah Guo
Mm-hmm.
- 22:08 – 35:33
Eric’s New Venture: humans&
- SGSarah Guo
So, um, that goes to you are starting a new company, Humans&. I remember being, like, actually quite fundamentally surprised given all of your work on IQ and reasoning and coding and- and scale that you were interested in essentially EQ, and you also thought of EQ, and tell me if this is a wrong characterization, as, um, like, the- the emotional, uh, or the interactive capabilities of models to date have really shown up in things like character or, like, companionship tools only. And you thought of it as also, like, enablement from a productivity perspective. Right? Um, so tell me about h- like, where this thread came from.
- EZEric Zelikman
Yeah. I- I guess I've been thinking about this kind of stuff for some time now. Like, even back in my PhD, I think, one of my, I guess, less well-known works was actually about... We showed that you can train language models to simulate different kinds of students.
- SGSarah Guo
Right.
- EZEric Zelikman
Uh, and...
- SGSarah Guo
For tests.
- EZEric Zelikman
Yeah, yeah.
- SGSarah Guo
Yeah.
- EZEric Zelikman
And- and by simulating students, we- you can actually design better tests for those students. And that was, like, a really cool finding. Like, hey, if you have models that are really good at modeling people, you can actually design systems that are better for people. And, like, th- this was something that, like, I- I found really cool. And, um, and- and kind of as we moved towards the current kind of capabilities frontier, it became more and more obvious that like the, you know, we have these incredibly smart models, uh, that are, like, capable of so much, but they're not used for anywhere near what they're capable of. Like, the- the role that they play in people's lives, like, is a lot less deep, a lot less positive than it could be. And I spent a lot of time thinking about, like, "Okay, why- why is that?" Like, "Why are these models not, like, more, like I said, deeply positively integrated into people's lives?" And it seemed like a really big part of it is like that fundamentally these models don't really understand people. They don't understand people's goals. Um, they're trained... I would say part of it is, like, the general kind of training paradigm that the field is in. It's very, I would say, single-task focused or task-centric.
- SGSarah Guo
Mm-hmm. It's ludicrous that all the benchmarks are still oriented this way. Yeah.
- EZEric Zelikman
Yeah. I mean, like- like-
- SGSarah Guo
Or most of them.
- EZEric Zelikman
You know, I mean, even- even the ones that are like... L- like there's very few benchmarks out there that actually try to consider like, oh, what if you actually have, like, a person that's interacting with this model? Like, you know, at best, you have like some, you know, multi-turn benchmarks that, like, uh, try to simulate what an environment would respond in different, you know, to different inputs. Uh, but even that is, like, still, like, far from, you know, considering, hey, if you actually have this model that interacts with a person for like, you know-... some amount of time. Like, how does it actually affect that person's life? It's, it's really remarkable that the field is kind of like so stuck in this kind of task-centric regime. Um, and I think it, but it makes a lot of sense. One thing that I was told by some folks at, you know, at Google is that it, it, one of the reasons is that, like, it's actually very useful for, like, credit assignment. So like, being able to have, like, these benchmarks that are very easy to quantify and very easy to, like, relate to some, like, immediate thing means that you can kind of say, like, "Oh, yeah, this, like, you know, this, this team did, like, 2% better than this team so they deserve, like, all of the resources."
- SGSarah Guo
Hmm.
- EZEric Zelikman
Or, you know, "This team, like, improved the benchmark by, like, 10%, while this team improved it by 5%. So, you know, let's, let's allocate accordingly." And I think in general, like, that's, that's part of it. I think another part of it is, like, kind of more aligned with the easiest ways to train these models. Y- y- it's, it's not easy to, you know, have these RL environments and stuff. You have lots of these companies popping up, obviously, that are trying to sell, you know, environments to different people. But, uh-
- SGSarah Guo
And the most popular are, of course, in coding and computer use.
- EZEric Zelikman
Yeah.
- SGSarah Guo
Um, rather than anything that requires simulating people.
- EZEric Zelikman
Yeah. It, it, it's not that surprising that we're kind of in this current, uh, regime. But...
- SGSarah Guo
So, what do models need to, um, know about people? Or like, what capabilities are they, um, either missing or have not been elicited from them?
- EZEric Zelikman
The most fundamental thing is that the models kind of don't understand the long-term implications of the things that they do and say. When you treat every turn of a conversation as kind of its own game-
- SGSarah Guo
Mm-hmm.
- EZEric Zelikman
... and you, you know, you basically think of it as like, okay, you had this interaction, you're done, you need to make sure that this one response has all of the possible answers, has all of the possible content, you don't ever, like, ask questions, you don't ever, like, try to clarify things, you don't really tend to express uncertainty, um, you don't tend to be proactive, you don't tend to think about the long-term, uh... Like, like, you, you see a lot of, like, even single-turn side effects of this kind of regime. Like, and most of them are treated as kind of their own problems to solve. You see issues around, like, that, that people highlight around, like, sycophancy. You see issues that, you know, that there was recent news around, like, you know, the psychosis stuff. There, there's a lot of these, like, uh, harmful effects that you get if you think about things in this very single task or, like, task-centric way. Um, but if you have models that actually consider, you know, the long-term implications of, oh, hey, if I tell this person to start, like, a, you know, a company that, you know, sells gloves for catching ice cream, if I, like, tell them that that sounds like a good business idea, they might actually go-
- SGSarah Guo
(laughs)
- EZEric Zelikman
... and they might actually build that business, and they might realize that it was not actually a good business idea. Having
- EGElad Gil
(sighs)
- EZEric Zelikman
... a model that can kind of roll out the long-term implications of the things that I said-
- SGSarah Guo
And then they won't trust me anymore, and then they won't pay for my compute.
- EZEric Zelikman
Exactly.
- SGSarah Guo
And then (laughs) it's all over.
- 35:33 – 36:58
Recruitment Goals for humans&
- SGSarah Guo
Um, okay. Super unique mission, amazing research work, you're hiring an early team, getting a lot of compute. Who are you looking for on the recruiting side?
- EZEric Zelikman
One, one thing that I think is actually probably a good thing that my previous company did is, you know, thinking of everyone kind of, to some extent as like engineers. I think, um, I'm looking for really strong infra folks who can build stuff. I'm looking for really strong researchers who can build stuff. I'm looking for really strong product folks who can build stuff. I'm looking for people who, like, have thought a lot about, like, users, who've thought a lot about, like, memory. You know, on the research side. I'm looking for, you know, on the infra side, for people who have thought about building distributed systems, really fast inference, people who've, uh, d- you know, been there to scale really big projects up. Um, on the product side, I think people who are like, you know, really creative about like new modes of interaction, people who have, who really deeply care about building beautiful, tasteful products.
- SGSarah Guo
Awesome. Thanks so much, Eric.
- EZEric Zelikman
Thank you so much.
- SGSarah Guo
Congrats on the new company.
- EZEric Zelikman
Thank you so much.
- SGSarah Guo
(instrumental music) Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
Episode duration: 36:58