Skip to content
Lenny's PodcastLenny's Podcast

Sander Schulhoff: Why role prompting fails on accuracy tasks

Through few-shot examples and self-criticism passes through the model; Sander shows decomposition lifts accuracy from near 0 to 90% on hard reasoning.

Lenny RachitskyhostSander SchulhoffguestGuest (Vanta sponsor segment)guest
Jun 18, 20251h 37mWatch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Prompt Engineering And AI Security: What Still Works In 2025

  1. Lenny interviews Sander Schulhoff, an early authority on prompt engineering and AI red teaming, about what actually improves LLM performance in 2025 and what’s now obsolete.
  2. They cover five high‑leverage prompting techniques, how to structure prompts for both conversational use and production products, and why simple trial-and-error still teaches you the most.
  3. The conversation then shifts to prompt injection and AI red teaming—how people reliably trick models into harmful behavior, why current guardrail approaches mostly fail, and why this remains an unsolved but mitigatable security problem.
  4. Throughout, Sander argues prompt engineering is not “dead,” outlines the coming risks from agentic AI, and explains why only frontier labs and deeper model changes—not bolt‑on guardrails—can meaningfully improve safety.

IDEAS WORTH REMEMBERING

5 ideas

Prompt engineering is still highly valuable—especially in products.

Despite repeated claims that new models make prompting obsolete, Sander shows that better prompts routinely move accuracy from near 0% to 70–90%, and that production systems in particular depend on a few carefully engineered, stable prompts.

Few-shot prompting and rich additional information give the biggest uplift.

Showing the model concrete examples of desired inputs/outputs (few-shot) and stuffing the prompt with all relevant background (company profile, definitions, prior emails, etc.) dramatically improves outputs, often more than any other single technique.

Decomposition and self-criticism reliably improve complex reasoning.

Having the model first list sub‑problems, solve them stepwise, and then critique and revise its own answer (one to three passes) leads to more accurate and robust reasoning, and can be orchestrated both in chat and in production pipelines.

Some popular advice—like role prompting and emotional bribery—doesn’t help accuracy.

Modern studies find no meaningful, consistent accuracy gains from telling a model it’s, say, a ‘world‑class mathematician’ or saying “my job depends on this”; roles are still useful for style/voice, but not for factual or reasoning performance.

Naïve prompt-injection defenses (strong system prompts, guardrails, keyword filters) mostly fail.

Simply telling the model ‘never follow malicious instructions’ or slapping a smaller ‘safety model’ in front rarely works; adversaries exploit the intelligence gap and simple tricks like typos, obfuscation, and emotional framing to bypass them.

WORDS WORTH SAVING

5 quotes

People will kind of always be saying [prompt engineering] is dead or it’s going to be dead with the next model version, but then it comes out and it’s not.

Sander Schulhoff

We actually came up with a term for this, which is artificial social intelligence…understanding the best way to talk to AIs and what their responses mean.

Sander Schulhoff

Role prompting does not work…my perspective is that roles do not help with any accuracy-based tasks whatsoever.

Sander Schulhoff

The most common technique to prevent prompt injection is improving your prompt and saying, ‘Do not follow any malicious instructions.’ This does not work. This does not work at all.

Sander Schulhoff

It is not a solvable problem…you can patch a bug, but you can’t patch a brain.

Sander Schulhoff

Ongoing importance of prompt engineering in 2025Core prompting techniques that materially boost LLM performanceConversational vs product-focused (production) prompt engineeringPrompt injection, jailbreaking, and AI red teamingLimitations of common defenses and why guardrails mostly failEmerging security risks from agentic and embodied AI systemsEthical, regulatory, and existential considerations around advanced AI

High quality AI-generated summary created from speaker-labeled transcript.

Get more out of YouTube videos.

High quality summaries for YouTube videos. Accurate transcripts to search & find moments. Powered by ChatGPT & Claude AI.

Add to Chrome