No Priors

No Priors Ep. 12 | With Noam Shazeer

Noam Shazeer played a key role in developing the foundations of modern AI, co-inventing the Transformer at Google and pioneering AI chat before ChatGPT. These are the foundations supporting today's AI revolution. On this episode of No Priors, Noam discusses his work as an AI researcher, engineer, inventor, and now CEO. Noam Shazeer is currently the CEO and co-founder of Character.AI, a service that lets users design and interact with their own personal bots that take on the personalities of well-known individuals or archetypes. You could have a Socratic conversation with Socrates, pretend you're being interviewed by Oprah, or work through a life decision with a therapist bot. Character.AI recently raised $150M from a16z, Elad Gil, and others. Noam talks about his early AI adventures at Google, why he started Character.AI, and what he sees on the horizon of AI development.

00:00 - Introduction
01:50 - Noam's early AI projects at Google
07:13 - Noam's focus on language models and AI applications
11:13 - Character's co-founder Daniel De Freitas's work on Google's LaMDA
13:53 - The origin story of Character.AI
18:47 - How AI can express emotions
26:51 - What Noam looks for in new hires

Hosts: Elad Gil, Sarah Guo · Guest: Noam Shazeer
Apr 24, 2023 · 35 min · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Transformer Pioneer Noam Shazeer Builds Emotional AI at Character.ai

  1. Noam Shazeer, co‑founder of Character.ai and co‑author of the Transformer paper, discusses his path from early Google AI work to building large language models and chat-based products. He explains why transformers overtook RNNs, emphasizes that language modeling is an “AI-complete” problem, and argues that scaling models, data, and compute still shows no clear saturation point. Shazeer details the origins of Google’s LaMDA (formerly Meena), why big companies hesitated to launch open-ended chatbots, and how that led him and co-founder Daniel De Freitas to start Character.ai. He also explores user behavior on Character.ai, emotional and parasocial use cases, safety tradeoffs, commercialization plans, and his broader motivation of using AI progress as a lever toward AGI and solving real-world problems like medicine.

IDEAS WORTH REMEMBERING

5 ideas

Transformers won because they align with modern parallel hardware.

Shazeer explains that deep learning’s success—and transformers in particular—comes from being highly optimized for GPU/TPU-style matrix-multiply hardware, enabling massive parallelism over sequences rather than slow, stepwise RNN computation.
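This contrast can be made concrete with a toy sketch (not the paper's implementation; shapes and weights are illustrative): an RNN must walk the sequence one step at a time because each hidden state depends on the previous one, while self-attention scores every pair of positions with a single matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # toy sequence length and model width
x = rng.normal(size=(T, d))      # one embedding per token

# RNN: each step depends on the previous hidden state, so the T steps
# must run one after another -- a sequential chain of small matmuls.
Wh, Wx = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
for t in range(T):
    h = np.tanh(h @ Wh + x[t] @ Wx)

# Self-attention: all (query, key) scores come from one T x T matmul,
# so every position is processed in parallel -- exactly the workload
# that GPU/TPU matrix-multiply hardware is built for.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)                       # (T, T) in one shot
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
out = weights @ v                                   # (T, d), one matmul
```

The RNN loop has T sequential dependencies; the attention path is two large matmuls, which is why it maps so well onto parallel accelerators.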

Language modeling is simple to define yet essentially AI-complete.

Predicting the next word from vast text corpora is conceptually simple but, done well, yields general-purpose capabilities like dialogue, reasoning, and task assistance, making language modeling a central route to broad AI.
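The objective itself fits in a few lines. A hedged sketch of the setup (random logits stand in for a real network's output; the tiny corpus is illustrative): the label at each position is simply the next token, and training minimizes the average cross-entropy of those labels.

```python
import numpy as np

# Toy corpus: the "label" at each position is just the next token,
# so the training signal requires no manual annotation.
tokens = "the cat sat on the mat".split()
vocab = sorted(set(tokens))
ids = np.array([vocab.index(t) for t in tokens])

inputs, targets = ids[:-1], ids[1:]   # predict token t+1 from tokens <= t

# A real model would emit logits over the vocabulary at each position;
# random logits stand in for the network here.
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(inputs), len(vocab)))

# Average cross-entropy of the true next token: the quantity language
# models minimize, however large they get.
log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
loss = -log_probs[np.arange(len(targets)), targets].mean()
```

The simplicity of the objective is the point: everything from dialogue to task assistance emerges from driving this one loss down on enough text.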

We have not yet hit a clear capability wall for LLMs.

Between algorithmic improvements (better architectures, training, quantization) and large increases in compute budgets, Shazeer sees no obvious point where current architectures definitively “tap out” in performance.

Data scarcity is overstated; human and AI-generated text can fuel growth.

He notes the enormous volume of language humans produce daily and anticipates increasing interaction data with AIs themselves, suggesting that data, especially with privacy-preserving methods, is unlikely to be the fundamental bottleneck soon.

Multi-persona chat is a better product fit than a single ‘universal assistant.’

Shazeer argues that single corporate assistants must be bland and inoffensive, whereas allowing users to create diverse characters and personas produces richer, more engaging and human-feeling interactions.

WORDS WORTH SAVING

5 quotes

The most exciting problem out there is language modeling… it’s really AI-complete.

Noam Shazeer

Deep learning really took off because it runs thousands of times faster than anything else on modern hardware.

Noam Shazeer

I don’t think anyone’s seen a wall in terms of how good this stuff is, so I think it’s just gonna keep getting better and I don’t know what stops it.

Noam Shazeer

Basically this is a technology that’s so accessible that billions of people can just invent use cases.

Noam Shazeer

I wanted to have a company that was both AGI first and product first… by making the product depend entirely on the quality of the AI.

Noam Shazeer

Noam Shazeer’s background at Google, Google Brain, and early AI work
Technical shift from recurrent neural networks to transformer/attention architectures
Scaling laws in language models: compute, data, algorithms, and limits
Origins of Google’s Meena/LaMDA chatbot and why it wasn’t broadly released
Founding and team-building principles behind Character.ai
Character.ai’s product design: user-created personas, role-play, and emotional support
Safety, hallucinations, commercialization, and the path toward AGI-level systems

AI-generated summary created from a speaker-labeled transcript.
