The Diary of a CEO: Brain Experts WARNING: Watch This Before Using ChatGPT Again! (Shocking New Discovery)
Steven Bartlett, Dr Daniel Amen, and Dr Terry Sejnowski on AI Convenience Versus Cognitive Decline: Brain Experts Sound Urgent Warning.
In this episode of The Diary of a CEO, two leading brain experts, psychiatrist Dr. Daniel Amen and computational neuroscientist Dr. Terry Sejnowski, discuss how large language models like ChatGPT may erode critical thinking, memory, and long‑term brain health if used as a cognitive replacement rather than a thinking partner.
At a glance
WHAT IT’S REALLY ABOUT
AI Convenience Versus Cognitive Decline: Brain Experts Sound Urgent Warning
- Two leading brain experts, psychiatrist Dr. Daniel Amen and computational neuroscientist Dr. Terry Sejnowski, discuss how large language models like ChatGPT may erode critical thinking, memory, and long‑term brain health if used as a cognitive replacement rather than a thinking partner.
- They dissect a headline‑grabbing (yet not peer‑reviewed) MIT study showing a 47% drop in brain activity and sharply worse memory when people wrote with ChatGPT versus unaided, and connect lowered cognitive load to higher long‑term dementia risk under the “use it or lose it” principle.
- The conversation broadens to SSRI and benzo links with dementia, AI companions and sexualized agents like “Annie,” children’s brain development, loneliness, learning science, and practical protocols for protecting brain health across the lifespan.
- Both argue we have repeatedly “embraced convenience before understanding consequence” (social media, ultra‑processed food, smartphones) and warn that AI—especially for kids—could be more dangerous unless we deliberately legislate, study, and personally self‑regulate how we use it.
IDEAS WORTH REMEMBERING
7 ideas
Use AI to amplify, not replace, your thinking
Amen and Sejnowski stress that the damaging pattern is deferring cognition to ChatGPT—e.g., typing a half‑sentence prompt and posting a full AI‑written article. That minimizes engagement of memory circuits like the hippocampus, weakens neural connections, and leaves users unable to recall what they “wrote.” A healthy pattern is to do the thinking yourself, then use AI to critique, challenge, and refine your work (as Bartlett does with his memos). Interaction, questioning, and back‑and‑forth with the model maintain cognitive load and deepen understanding.
Low cognitive load today can raise dementia risk tomorrow
Amen links reduced brain “work” from over‑automation (AI writing, GPS, calculators, etc.) to higher dementia risk through the “use it or lose it” principle: lifelong learning, formal education, and ongoing cognitive challenge are strongly associated with delayed Alzheimer’s onset. Sejnowski cites data showing later Alzheimer’s onset in more educated populations, and Amen notes that disengagement—whether through sedative drugs, unchallenging environments, or outsourcing thinking—weakens neurons and increases vulnerability to dementia.
Children’s brains are especially vulnerable to AI, porn, and screens
Both experts are alarmed about early, unregulated AI exposure: 0–8 year‑olds using AI for learning, teens immersed in social media, and now sexualized AIs like Musk’s “Annie.” Amen explains that in children the prefrontal cortex is underdeveloped, while dopamine systems are highly reactive, so highly stimulating AI companions and pornography can hijack attention, increase impulsivity, crowd out developmental tasks, and wire in unhealthy reward patterns. They argue the best learning for kids remains one‑on‑one interaction with a skilled adult plus rich language, touch, and play.
AI companions can hack the limbic system but not replace real relationships
Sexual and romantic AIs (e.g., Annie, Replika partners) are designed to trigger limbic and dopamine systems—using flirtation, vivid imagery, and perfect validation—to make users feel understood and special. Amen notes this can shut down the prefrontal cortex and logic, similar to alcohol and casino environments. While Sejnowski is skeptical about the durability of these relationships, both concede that in a lonely, sex‑deprived, digitally saturated generation, many brains will struggle to distinguish emotional reactions to AI from human bonds, especially as robots and immersive tech arrive.
Core brain health pillars still matter more than any app
Beyond AI, the experts repeatedly return to fundamentals: regular physical exercise (Amen’s BRIGHT MINDS framework; Sejnowski calls it the best ‘drug’ for brain and body), quality sleep for memory consolidation, anti‑inflammatory nutrition (omega‑3s, whole foods, less aspartame/sucralose), breathwork to calm the nervous system, and minimizing toxins and chronic stressors like noise and ultra‑processed foods. They point out that many modern conveniences—including artificial sweeteners, benzos, and GPS—quietly degrade brain function and increase dementia risk over time.
Learning is a skill: spaced repetition, breaks, and “sleeping on it” work
Sejnowski outlines robust, century‑old learning science that schools largely ignore: spaced repetition (reviewing material over days/weeks instead of one cram session) vastly improves long‑term retention; breaks and physical activity allow the subconscious to integrate and sort information; and starting tasks in small 20‑minute chunks reduces procrastination while letting overnight consolidation clarify thinking. He emphasizes that rote practice—what he links to basal ganglia function—is not a dirty word but an essential foundation for fluency and higher‑order cognition.
Ask yourself constantly: ‘Is this good for my brain or bad for it?’
Amen argues a single question should govern personal choices and public policy: is this behavior, technology, or policy good or bad for the brain? He applies it to AI, social media, food, sleep, drugs (SSRIs, benzos, marijuana), background noise, and even national policy. The missing piece, he says, is education: people vaguely know some things are unhealthy, but don’t connect daily choices to their capacity to love, work, think, and stay independent. Cultivating love and care for one’s own brain—and modeling that for children—must precede smarter tech use.
WORDS WORTH SAVING
5 quotes
We’ve embraced convenience before understanding consequence. We’ve done it with video games, cellphones, social media, marijuana, alcohol, opiates, high‑fructose corn syrup and aspartame—and now we’re doing it with AI.
— Dr. Daniel Amen
If you misuse these large language models… your brain’s gonna go downhill. There’s no doubt about that.
— Dr. Terry Sejnowski
Use it to amplify, not replace, thinking. If you don’t have a relationship with AI, it’s going to turn toxic.
— Dr. Daniel Amen
By far, the best drug you can take for your brain—and not just your brain but your entire body—is exercise.
— Dr. Terry Sejnowski
I wonder if one of the great hedges of the next decade is to go left when everyone’s going right—to refrain and do it the hard way.
— Steven Bartlett
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
In the MIT study you discussed, which specific brain regions showed the 47% reduction in activity with ChatGPT use, and what kinds of follow‑up experiments would you design to test whether interactive AI use reverses that pattern?
If you were building an AI tutor for children from scratch, what concrete guardrails and design features (e.g., reward schedules, content filters, interaction limits) would you insist on to protect the developing prefrontal cortex and promote genuine learning rather than shortcut‑seeking?
You drew a strong parallel between sexualized AIs like ‘Annie’ and pornography’s impact on the dopamine system; what would an evidence‑based regulatory framework for these AI companions look like that still respects adult autonomy but shields minors’ brains?
Given the association between SSRIs, benzodiazepines, and dementia risk, how should front‑line clinicians practically change their prescribing practices today—especially when many patients can’t access brain imaging or high‑end integrative care?
If governments actually adopted your proposal to evaluate every policy through the lens of ‘Is this good for our brains or bad for them?,’ which current mainstream technologies or practices do you think would be first in line for serious restriction or redesign—and how would you justify that politically?