Jay Shetty Podcast

The SECRET Loop That Keeps You Glued to Your Phone (Most People Never Notice It)

Jay Shetty on how algorithms and our instincts trap us in addictive scrolling loops.

Jay Shetty, host
Dec 5, 2025 · 26m · Watch on YouTube ↗

Reinforcement-based recommendation systems
Infinite scroll and autoplay as behavioral nudges
Comparison, beauty standards, and self-worth
Outrage as social currency and identity signaling
Misinformation dynamics and negativity-driven sharing
Echo chambers even without algorithms (bot social network study)
Practical feed “reset” behaviors and morning phone boundaries
AI-generated summary based on the episode transcript.

In this episode of the Jay Shetty Podcast, The SECRET Loop That Keeps You Glued to Your Phone (Most People Never Notice It), Jay Shetty explores how algorithms and our instincts trap us in addictive scrolling loops. Algorithms primarily optimize for attention and watch time, learning from every pause, click, rewatch, and share to serve increasingly sticky content.

At a glance

WHAT IT’S REALLY ABOUT

How algorithms and our instincts trap us in addictive scrolling loops

  1. Algorithms primarily optimize for attention and watch time, learning from every pause, click, rewatch, and share to serve increasingly sticky content.
  2. The “glued to your phone” effect is a feedback loop: our emotionally charged engagement trains systems that then narrow our exposure and amplify outrage and division.
  3. Many harms attributed to algorithms are also driven by human tendencies—negativity bias, comparison, and identity-signaling—meaning even “noise-free” platforms can reproduce echo chambers.
  4. Doom-scrolling can raise cortisol and anxiety and create learned helplessness, intensifying the sense that we lack control and must keep checking.
  5. Solutions require both structural product changes (chronological feeds, friction before sharing, transparency/audits) and individual habits that actively reshape the feed and strengthen emotional mastery and critical thinking.

IDEAS WORTH REMEMBERING

5 ideas

The algorithm isn’t omniscient—it’s a mirror powered by your inputs.

It repeatedly asks, “What will keep you here the longest?” and learns from your micro-behaviors (hovering, rewatches, shares). Your actions don’t just reflect preferences; they train the next version of your feed.
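To make that loop concrete, here is a minimal, hypothetical Python sketch of how micro-behaviors could update a per-topic affinity score that then re-ranks the next batch of posts. The signal names, weights, and topic model are illustrative assumptions, not any platform's actual ranking code.

```python
# Minimal sketch of the feedback loop described above. Signal names,
# weights, and the topic model are illustrative assumptions only.
from collections import defaultdict

# Hypothetical weights: stronger actions teach the feed more per event.
SIGNAL_WEIGHTS = {"hover": 0.2, "like": 0.5, "rewatch": 0.6, "share": 1.0}

def update_affinity(affinity, topic, signal):
    """Each micro-behavior nudges the score for that topic upward."""
    affinity[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)

def rank_feed(candidates, affinity):
    """Serve whatever the model predicts will hold attention longest."""
    return sorted(candidates, key=lambda post: affinity[post["topic"]], reverse=True)

affinity = defaultdict(float)
update_affinity(affinity, "fitness", "hover")
update_affinity(affinity, "fitness", "rewatch")
update_affinity(affinity, "news", "like")

posts = [{"id": 1, "topic": "news"}, {"id": 2, "topic": "fitness"}]
print(rank_feed(posts, affinity))  # fitness now outranks news after two small signals
```

The point of the sketch is the asymmetry Jay describes: you experience a hover or a rewatch as nothing, but the system treats it as training data for the next version of your feed.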

Addictive design hides choice rather than removing it.

Features like infinite scroll and autoplay reduce deliberation and extend sessions (cited study: disabling autoplay shortened average sessions). The result is passive consumption that feels like “I didn’t choose this,” even though the system is responding to prior engagement.

Outrage spreads because people reward it, not only because platforms push it.

Research cited (Yale) suggests moral outrage gets social rewards (likes/retweets), training users to produce more outrage. This creates a human-driven incentive loop where “what performs” crowds out nuance and honesty.

Misinformation wins on engagement, and algorithms can’t distinguish truth from clicks.

The transcript cites that false news is more likely to be retweeted and travels faster than true news; recommendation systems then amplify what’s already emotionally potent. The weakest link is often our impulse to share before verifying or reading.

Even removing algorithms may not fix polarization—social sorting is a core driver.

A University of Amsterdam experiment described a stripped-down network (no ads/recommendations) where AI bots still formed echo chambers and rewarded extreme partisan content. That suggests “platform mechanics” and “human tendencies” can both generate division.

WORDS WORTH SAVING

5 quotes

The algorithm doesn't just know us, it depends on us, and if we learn how it feeds, we can decide whether to starve it or steer it.

Jay Shetty

In plain words, the algorithm isn't a mastermind. It's a machine that asks one question over and over again. "What will keep you here the longest?"

Jay Shetty

The algorithm's goal is not to make us polarized. It's not to make us happy. It's to make us addicted and glued to our screens.

Jay Shetty

If the algorithm is made of us, then changing it doesn't start with code. It starts with character.

Jay Shetty

When you like something, you're telling the algorithm, "Show me more of this." When you hover over something, you're saying to the algorithm, "I pay attention when you show me this." When you comment on something, you're saying, "This is really important to me." And when you share it off the platform, you're saying, "Fill my feed with this." You're co-creating your algorithm. You're actually coding it.

Jay Shetty

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

In your view, which signal is most powerful in shaping a feed—rewatches, hover time, comments, or shares—and why?

Algorithms primarily optimize for attention and watch time, learning from every pause, click, rewatch, and share to serve increasingly sticky content.

How would a “chronological by default” feed work on platforms built around discovery (e.g., TikTok) without killing creator reach?

The “glued to your phone” effect is a feedback loop: our emotionally charged engagement trains systems that then narrow our exposure and amplify outrage and division.

The Amsterdam bot study suggests echo chambers form even without algorithms—what specific human behaviors should product design try to interrupt?

Many harms attributed to algorithms are also driven by human tendencies—negativity bias, comparison, and identity-signaling—meaning even “noise-free” platforms can reproduce echo chambers.

What’s the best ‘friction before sharing’ mechanism that reduces misinformation without feeling paternalistic or censorious?

Doom-scrolling can raise cortisol and anxiety and create learned helplessness, intensifying the sense that we lack control and must keep checking.

You cite that negativity increases shares for media posts; what incentives could make constructive or nuanced content equally ‘performant’?

Solutions require both structural product changes (chronological feeds, friction before sharing, transparency/audits) and individual habits that actively reshape the feed and strengthen emotional mastery and critical thinking.

Chapter Breakdown

The algorithm feels all-powerful—but it depends on you

Jay frames the central idea: recommendation systems aren’t “smart” in a human way, but they are powerful because they exploit predictable human weaknesses. He introduces the thesis that every system has a “glitch”—it needs our engagement—and that learning how it feeds lets us starve it or steer it.

How insecurity becomes a personalized feed: Amelia’s story

A fictional but realistic scenario shows how a single late-night scroll can turn into a comparison habit that reshapes someone’s identity and self-worth. Jay connects this to widespread body-image pressure, especially among girls, and asks whether the “mirror” is built by Silicon Valley or by our clicks.

What algorithms actually do: watch, predict, amplify, adapt

Jay breaks down the mechanics of modern feeds: they measure micro-behaviors, predict what you’ll engage with, amplify emotionally engaging posts, and constantly retrain based on your latest actions. He describes the “reinforcement system” cycle that narrows exposure and accelerates outrage.
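As a toy illustration of that “watch, predict, amplify, adapt” cycle, the following sketch simulates repeated sessions in which the feed samples topics in proportion to current affinity, measures dwell time, and retrains toward whatever held attention. All numbers and the dwell-time model are invented for clarity; this is not a real recommender.

```python
# Toy simulation of the reinforcement cycle: predict, watch, adapt, repeat.
# The dwell-time assumption (outrage holds attention slightly longer) is
# illustrative, meant only to show how exposure narrows over time.
import random

topics = ["outrage", "comedy", "science", "fitness"]
affinity = {t: 1.0 for t in topics}

def simulated_dwell_seconds(topic):
    base = 8 if topic == "outrage" else 5  # assumed small edge for charged content
    return base + random.random()

for session in range(50):
    # Predict: sample the next post in proportion to current affinity.
    topic = random.choices(topics, weights=[affinity[t] for t in topics])[0]
    # Watch: measure how long the user actually stayed.
    dwell = simulated_dwell_seconds(topic)
    # Adapt: retrain toward whatever held attention, amplifying it next time.
    affinity[topic] += 0.1 * dwell

print({t: round(a, 1) for t, a in affinity.items()})
# Exposure narrows: the topic with slightly longer dwell ends up dominating the feed.
```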

The trap design: nudges, outrage loops, and the extremist ‘push’

He explains how product design and social rewards keep users stuck: autoplay and infinite scroll hide choice, outrage gets reinforced by likes, and platforms can steer neutral interest toward more extreme content. How this plays out differs by gender, but it converges on the same isolation and exhaustion.

Your clicks build the cage: why misinformation and bias win

Jay shifts from platform behavior to user behavior: algorithms don’t evaluate truth; they follow engagement. He cites how false news spreads faster than true news, negativity increases shares, and people preferentially click information that confirms their beliefs—creating fortified echo chambers.

If you remove the algorithm, does the problem disappear? The bot experiment

A University of Amsterdam study tested a stripped-down network without ads or recommender systems, then released AI agents with identities into it. Even without algorithmic pushes, the agents formed echo chambers and rewarded extreme voices—suggesting social media dynamics may amplify our worst instincts by default.

Why negativity hooks us: comparison, envy, and three cognitive drivers

Jay argues algorithms monetize ancient human patterns: comparison and envy, especially when we’re tired or overwhelmed. He outlines three psychological forces—negativity bias, outrage as group belonging, and preference for simple narratives—that make outrage and doom content feel compelling.

Platform-level fixes: change incentives with defaults, friction, and audits

He proposes three changes companies could implement to reduce harm: make chronological feeds the default, add friction before sharing, and require algorithmic transparency with independent audits. Jay notes these measures may reduce engagement, which is why platforms resist them.
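To show what the “friction before sharing” idea could look like in practice, here is a small hypothetical sketch: if the user never opened the link, the client asks for confirmation before resharing. The function name, prompt text, and data shape are assumptions for illustration, not a real platform API.

```python
# Hedged sketch of a "friction before sharing" check: pause the reshare
# when the user has not opened the article. Names and prompt wording are
# illustrative assumptions, not any platform's real interface.
def confirm_share(post, user_opened_link: bool) -> bool:
    if not user_opened_link:
        answer = input(f"You haven't opened \"{post['title']}\" yet. Share anyway? (y/n) ")
        return answer.strip().lower() == "y"
    return True

post = {"title": "Breaking: shocking claim", "url": "https://example.com/article"}
if confirm_share(post, user_opened_link=False):
    print("Shared.")
else:
    print("Share cancelled.")
```

The design intent is not to block sharing but to insert one deliberate moment between impulse and broadcast, which is exactly where Jay argues misinformation gains its speed advantage.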

Human-level fixes: emotional mastery and critical thinking as “the real upgrade”

Jay uses a Buddha story to argue that personal practice matters because it helps us lose anger, envy, and ego—the very emotions feeds exploit. He contends that changing social media isn’t only about code; it’s about building healthier users through emotional regulation and critical thinking.

How to reset your For You Page: 5 practical actions to retrain the feed

Jay demonstrates how quickly a feed can change when you deliberately follow, like, hover, and share different content. He offers five concrete steps—diversify follows, engage intentionally, share outside your norm, avoid morning phone use, and practice joy—to reassert agency over recommendations.

Co-creating your algorithm—and choosing to leave the ‘party’

He clarifies the meaning of each engagement signal (like, hover, comment, share) and emphasizes that algorithms are predictive, not destiny. Jay ends with a party metaphor: social media rooms of comparison and conflict feel inevitable, but the invitation comes from learned behavior—and you can decide whether to walk back in.
