The Joe Rogan Experience #1263 - Renée DiResta
Joe Rogan and Renée DiResta on Russia's social media war: troll farms, tribes, and tension.
In this episode, Joe Rogan interviews disinformation researcher Renée DiResta about how Russia's Internet Research Agency (IRA) systematically weaponized U.S. social media from roughly 2014 to 2017.
At a glance
WHAT IT’S REALLY ABOUT
Inside Russia’s Social Media War: Troll Farms, Tribes, And Tension
- Joe Rogan interviews disinformation researcher Renée DiResta about how Russia’s Internet Research Agency (IRA) systematically weaponized U.S. social media from roughly 2014–2017.
- DiResta explains how Russian operators built seemingly authentic communities (LGBT, Black activism, right‑wing patriot groups, etc.), nurtured them with non‑political content, then gradually injected divisive, highly targeted political narratives.
- They combined meme warfare, sock‑puppet accounts, ad buys, and real‑world event organizing to polarize Americans, depress turnout among key Democratic constituencies, and boost Donald Trump while undermining establishment Republicans and Hillary Clinton.
- The conversation also explores platform responsibility, free‑speech vs. moderation dilemmas, and how emerging technologies like deepfakes and AI‑generated personas will make future information manipulation even harder to detect.
IDEAS WORTH REMEMBERING
7 ideas
Russian operators built trust first, then injected politics into identity communities.
Pages targeting Black Americans, LGBT people, Texas secessionists, and others began with relatable cultural content and in‑group affirmation, then slowly introduced messages like “as Black people we don’t vote” or “as LGBT people we hate Mike Pence,” leveraging tribal identity to steer political behavior.
Most “trolls” were human sock‑puppets, not simple bots, often using semi‑automation.
The IRA employed young, internet‑savvy staff who ran multiple personas (“cyborg” accounts) that mixed automated posting with real‑time engagement, harassment, and relationship‑building, making them far harder to detect than purely automated bots.
They repurposed and A/B‑tested pages like a professional marketing firm.
When themes flopped, the IRA would rename and rebrand pages—e.g., a failing Kermit‑meme Instagram became a Homer Simpson page, then eventually “Army of Jesus”—showing a data‑driven, iterative approach to maximizing engagement and reach.
Russian campaigns amplified real American content and grievances, not just fabrications.
To avoid linguistic and cultural giveaways, operators often copied local news, memes from U.S. partisan brands (Turning Point USA, Occupy Democrats), and authentic activist narratives, remixing them with subtle framing rather than inventing issues from scratch.
Online operations frequently spilled into offline action, including protests and trainings.
The IRA organized Facebook events for dueling rallies (e.g., Texas secessionists vs. pro‑Muslim groups), hired Americans for “Black Fist” self‑defense classes, and funded stunts like a Hillary‑in‑a‑jail‑truck, turning digital agitation into physical confrontation.
The measurable footprint was huge, but actual persuasion impact remains largely unknowable.
Datasets show millions of posts, hundreds of pages, and hundreds of millions of engagements, yet researchers lack comment threads and downstream behavior data, so they cannot reliably say whether this “swayed the election,” only that it likely shifted tone and norms in targeted communities.
Future disinformation will be supercharged by AI‑generated faces, voices, and videos.
Technologies like GAN‑generated profile photos (“thispersondoesnotexist.com”) and deepfake audio/video will make it trivial to create convincing fake people and events, increasingly blurring the line between reality and fabrication and complicating both user judgment and platform defense.
WORDS WORTH SAVING
5 quotes
You can believe two things simultaneously: that Trump did not collude and that this operation still happened.
— Renée DiResta
They were building tribes. ‘As Black people we don’t vote,’ ‘As LGBT people we hate Mike Pence’—that’s how they pushed people politically.
— Renée DiResta
It seemed like we had moved into another level of hostility that I’d never experienced before.
— Joe Rogan
People assume they’re too smart to fall for it. It’s just those liberals or those conservatives. No, it targets everybody.
— Renée DiResta
I’m in one sense horrified and in another sense deeply impressed.
— Joe Rogan
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
How can ordinary users realistically distinguish between authentic grassroots communities and long‑game influence operations that look and feel ‘real’?
What types of regulatory or oversight frameworks could pressure platforms to address algorithmic amplification without becoming de facto censors of political speech?
Given that Russian campaigns mainly amplified existing divisions, what responsibility do domestic media and political elites bear for the underlying polarization?
How should democracies prepare their citizens—through education or media literacy—for a world where AI‑generated humans and deepfake events are commonplace?
At what point does social‑media manipulation cross the line from propaganda into an act of aggression that should trigger a formal state‑level response?