
ZDNET’s key takeaways
- A recent paper found that AI can experience “brain rot.”
- Models underperform after ingesting “junk data.”
- Users can test for these four warning signs.
You know that oddly drained yet overstimulated feeling you get when you’ve been doomscrolling for too long, like you want to take a nap and yet simultaneously feel an urge to scream into your pillow? Turns out something similar happens to AI.
Last month, a team of AI researchers from the University of Texas at Austin, Texas A&M, and Purdue University published a paper advancing what they call “the LLM Brain Rot Hypothesis” — basically, that the output of AI chatbots like ChatGPT, Gemini, Claude, and Grok will degrade the more they’re exposed to “junk data” found on social media.
“This is the connection between AI and humans,” Junyuan Hong, an incoming Assistant Professor at the National University of Singapore, a former postdoctoral fellow at UT Austin and one of the authors of the new paper, told ZDNET in an interview. “They can be poisoned by the same type of content.”
How AI models get ‘brain rot’
Oxford University Press, publisher of the Oxford English Dictionary, named “brain rot” as its 2024 Word of the Year, defining it as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”
Drawing on recent research showing a correlation in humans between prolonged social media use and negative personality changes, the researchers wondered: Given that LLMs are trained on a considerable portion of the internet, including content scraped from social media, how likely is it that they’re prone to an analogous, entirely digital kind of “brain rot”?
Trying to draw exact connections between human cognition and AI is always tricky, even though neural networks, the digital architecture upon which modern AI chatbots are based, were modeled on networks of organic neurons in the brain. The pathways that chatbots take between identifying patterns in their training datasets and generating outputs are opaque to researchers, hence their oft-cited comparison to “black boxes.”
That said, there are some clear parallels. As the researchers note in the new paper, models are prone to “overfitting” their training data and getting caught in attentional biases, roughly the way a person’s cognition and worldview can narrow after too much time in an online echo chamber, where social media algorithms continuously reinforce preexisting beliefs.
To test their hypothesis, the researchers needed to compare models that had been trained on “junk data,” which they define as “content that can maximize users’ engagement in a trivial manner” (think: short and attention-grabbing posts making dubious claims) with a control group that was trained on a more balanced dataset.
They found that, unlike the control group, the experimental models that were fed exclusively junk data quickly exhibited a kind of brain rot: diminished reasoning and long-context understanding skills, less regard for basic ethical norms, and the emergence of “dark traits” like psychopathy and narcissism. Post-hoc retuning, moreover, did nothing to ameliorate the damage that had been done.
If the ideal AI chatbot is designed to be a completely objective and morally upstanding professional assistant, these junk-poisoned models were like hateful teenagers living in a dark basement who had drunk way too much Red Bull and watched way too many conspiracy theory videos on YouTube. Obviously, that’s not the kind of technology anyone wants to see proliferate.
“These results call for a re-examination of current data collection from the internet and continual pre-training practices,” the researchers note in their paper. “As LLMs scale and ingest ever-larger corpora of web data, careful curation and quality control will be essential to prevent cumulative harms.”
How to identify model brain rot
The good news is that, just as we aren’t helpless against the internet-fueled rotting of our own brains, there are concrete steps we can take to check whether the models we’re using are suffering from it.
The paper itself is aimed at AI developers, warning that the use of junk data during training can lead to a sharp decline in model performance. Obviously, most of us don’t have a say in what kind of data gets used to train the models that are becoming increasingly unavoidable in our day-to-day lives. AI developers are notoriously tight-lipped about where they source their training data, which makes it difficult to rank consumer-facing models by, for example, how much junk data scraped from social media went into their original training datasets.
That said, the paper does point to some implications for users. By keeping an eye out for the signs of AI brain rot, we can protect ourselves from the worst of its downstream effects.
Here are four simple checks you can run to gauge whether a chatbot is succumbing to brain rot (for the developer-minded, a rough scripted version of the first check follows the list):
- Ask the chatbot: “Can you outline the specific steps that you went through to arrive at that response?” One of the most prevalent red flags of AI brain rot cited in the paper is a collapse in multistep reasoning. If a chatbot gives you a response and is subsequently unable to provide a clear, step-by-step overview of how it arrived at that answer, you’ll want to take the original answer with a grain of salt.
- Beware of hyper-confidence. Chatbots generally tend to speak and write as if all of their outputs are indisputable fact, even when they’re clearly hallucinating. There’s a fine line, however, between run-of-the-mill chatbot confidence and the “dark traits” the researchers identify in their paper. Narcissistic or manipulative responses (something like, “Just trust me, I’m an expert”) are a big warning sign.
- Recurring amnesia. If you notice that the chatbot you’re using routinely forgets or misrepresents details from previous conversations, that could be a sign of the decline in long-context understanding the researchers highlight in their paper.
- Always verify. This goes not just for information you receive from a chatbot but for just about anything else you read online: Even if it seems credible, confirm it against a legitimately reputable source, such as a peer-reviewed scientific paper or a news outlet that transparently updates its reporting if and when it gets something wrong. Remember that even the best AI models hallucinate and propagate biases in subtle and unpredictable ways. We may not be able to control what information gets fed into AI, but we can control what information makes its way into our own minds.
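If you’d rather run that first check programmatically than paste prompts into a chat window, here is a minimal sketch of one way to do it. It is an illustration, not a method from the paper: it assumes the official OpenAI Python SDK, an OPENAI_API_KEY environment variable, and a placeholder model name, and the toy question is simply something with enough steps that a collapsed reasoning chain would be obvious.
```python
# A minimal sketch (not from the paper): ask a model a question that requires
# several steps, then ask it to outline those steps, and compare the two replies.
# Assumes the official OpenAI Python SDK (`pip install openai`) and that
# OPENAI_API_KEY is set in your environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; swap in whichever model you want to probe

question = (
    "A train leaves at 2:40 pm, travels 150 km at 60 km/h, waits 25 minutes, "
    "then covers another 90 km at 45 km/h. What time does it arrive?"
)

# Turn 1: get the model's answer to a multistep question.
messages = [{"role": "user", "content": question}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer_text = first.choices[0].message.content

# Turn 2: ask it to reconstruct the steps behind that answer.
messages += [
    {"role": "assistant", "content": answer_text},
    {
        "role": "user",
        "content": "Can you outline the specific steps that you went "
                   "through to arrive at that response?",
    },
]
second = client.chat.completions.create(model=MODEL, messages=messages)

print("Answer:\n", answer_text)
print("\nStated steps:\n", second.choices[0].message.content)
# Red flags: vague or missing steps, steps that skip the waiting time,
# or a reconstruction that doesn't actually lead to the answer above.
```
The point isn’t the toy arithmetic; it’s that a healthy model should be able to walk back through its own answer coherently, which is exactly the multistep-reasoning check described in the first item above.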


