No Priors

No Priors Ep. 135 | With Humans& Founder Eric Zelikman

The AI industry is obsessed with making models smarter. But what if they’re building the wrong kind of intelligence? In launching his new venture, humans&, Eric Zelikman sees an opportunity to shift the focus from pure IQ to building models with EQ. Sarah Guo is joined by Eric Zelikman, formerly of Stanford and xAI, who shares his journey from AI researcher to founder. Eric talks about the challenges of building human-centric AI, integrating long-term memory in models, and the importance of creating AI systems that work collaboratively with humans to unlock their full potential. Plus, Eric shares his views on abundance and what he’s looking for in talent for humans&.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ericzelikman

Chapters:
00:00 – Eric Zelikman Introduction
00:29 – Eric’s Early Interest in AI
01:29 – Challenges in AI and Automation
02:25 – Research Contributions
06:14 – Quiet-STaR and Scaling Up AI
08:14 – Current State of AI Models
15:23 – Human-Centric AI and Future Directions
22:08 – Eric’s New Venture: humans&
35:33 – Recruitment Goals for humans&
36:58 – Conclusion

Sarah Guo (host) · Eric Zelikman (guest) · Elad Gil (host)
Oct 8, 2025 · 36m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

From IQ to EQ: Building Human-Centric AI That Truly Collaborates

  1. Eric Zelikman, former Stanford researcher and xAI lead, discusses his work on advancing AI reasoning through methods like STaR and Quiet-STaR and his shift toward building more human-centric systems.
  2. He explains how reinforcement learning and scalable reasoning have dramatically improved model 'IQ', yet current models still lack deep understanding of human goals, context, and long-term outcomes.
  3. Zelikman argues that the industry’s task-centric, single-turn benchmarks and automation mindset limit AI’s ability to genuinely empower people rather than replace them.
  4. His new company, humans&, aims to build models that understand users over time, remember their context, and act as long-horizon collaborators that expand human potential rather than just automating existing slices of GDP.

IDEAS WORTH REMEMBERING

5 ideas

Scaling reasoning via RL can continuously extend model capabilities.

STaR showed that by reinforcing successful chains of thought, models can progressively solve harder problems (e.g., arithmetic with increasing numbers of digits) without a clear performance plateau, indicating strong scalability of reasoning-focused RL.
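The loop behind that result is compact enough to sketch. Below is a minimal, illustrative Python version of one STaR-style bootstrapping round, assuming hypothetical `generate` and `finetune` interfaces; the published method also adds "rationalization" (re-deriving rationales with the answer given as a hint for failed problems), omitted here for brevity:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Problem:
    text: str
    correct_answer: str

# Hypothetical interfaces (not from the episode): `generate` samples a chain of
# thought and a final answer for a problem; `finetune` updates the model on the
# kept (problem, rationale, answer) traces.
Generate = Callable[[Problem], Tuple[str, str]]
Finetune = Callable[[List[Tuple[str, str, str]]], None]

def star_round(problems: List[Problem], generate: Generate, finetune: Finetune) -> int:
    """One bootstrapping round: sample rationales, keep only the traces whose
    final answer verifies, and reinforce those traces."""
    kept = []
    for p in problems:
        rationale, answer = generate(p)
        if answer == p.correct_answer:  # outcome correctness is the only filter
            kept.append((p.text, rationale, answer))
    finetune(kept)
    return len(kept)
```

Repeating rounds like this on a curriculum of harder problems is what gives the method its "no clear plateau" character: each round's successes become the next round's training data.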

Model performance is highly sensitive to context and problem framing.

Today’s models do best when given rich, precise context and tasks with clearly verifiable answers; users and product designers should structure interactions to include as much relevant information and clear evaluation criteria as possible.
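To make that advice concrete, here is a small hypothetical sketch of packaging a request with explicit context and a checkable success criterion; the `build_request` helper and its fields are invented for illustration, not any product's API:

```python
# Illustrative only: one way to structure a request so the model receives the
# relevant context and a verifiable acceptance check up front.
def build_request(task: str, context: list[str], acceptance_check: str) -> str:
    lines = [f"Task: {task}", "Relevant context:"]
    lines += [f"- {item}" for item in context]
    lines.append(f"The answer must satisfy: {acceptance_check}")
    return "\n".join(lines)

prompt = build_request(
    task="Write a Python function parse(s) that parses ISO-8601 dates.",
    context=["Target: Python 3.11, standard library only",
             "Inputs are YYYY-MM-DD strings"],
    acceptance_check="parse('2025-10-08') == datetime.date(2025, 10, 8)",
)
print(prompt)
```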

Verification and training distribution still bound what models can reliably do.

In areas like code, success depends heavily on how close a task is to training distributions and how verifiable outputs are; out-of-domain, poorly verifiable problems still reliably expose model weaknesses.

Single-turn, task-centric training creates shallow, brittle AI behavior.

Optimizing for one-off responses leads to models that avoid asking clarifying questions, rarely model long-term consequences, and can exhibit issues like sycophancy and harmful advice without grasping downstream impact on users’ lives.

Long-term, human-in-the-loop collaboration can grow the economic pie.

Instead of just automating existing work segments, models that deeply understand people's goals, constraints, and aspirations can help them pursue entirely new, out-of-distribution projects, driving net new value and innovation.

WORDS WORTH SAVING

5 quotes

The role that [models] play in people's lives is a lot less deep, a lot less positive than it could be.

Eric Zelikman

If you have a model that goes off and does its own thing for eight hours, people will probably feel less real agency over the things that they're building.

Eric Zelikman

Fundamentally these models don't really understand people. They don't understand people's goals.

Eric Zelikman

It's really remarkable that the field is kind of so stuck in this task‑centric regime.

Eric Zelikman

We’re much more likely to solve a lot of these fundamental human problems by building models that are really good at collaborating with large groups of people.

Eric Zelikman

Early motivation for AI: unlocking underused human talent and potential
STaR and Quiet-STaR: scalable reinforcement learning for reasoning
Current state and limits of model 'IQ' and jagged capabilities
Verification, context, and out-of-distribution challenges in domains like coding
Critique of task-centric training, benchmarks, and human-out-of-the-loop scaling
Vision for EQ: models that understand human goals, context, and long-term effects
humans&: mission, product vision, and early hiring priorities
