Study: AI Models Trained On Clickbait Slop Result In AI ‘Brain Rot,’ ‘Hostility’
from the you-are-what-you-eat dept
While “AI” certainly has some useful applications, a lot of the folks in charge of the trajectory of LLMs clearly want to use it to build a giant, badly automated ouroboros of lazy internet slop that shits out ad money without the need for pesky labor. You see this most profoundly in media, where a bunch of far-too-clever lads rushed to integrate under-cooked, broadly misunderstood LLMs with disastrous results.
These folks could be using AI to make work more efficient; instead they’re using it to cut corners, undermine labor, and fill the internet with a parade of mindless, pointless, low-quality clickbait slop. The sort of lazy engagement bait that hoovers ad money and attention away from folks who actually have something useful to say or contribute.
As it turns out, training LLMs on this kind of slop doesn’t work out well for anybody.
A new joint study by researchers at Texas A&M University, University of Texas at Austin, and Purdue University took a closer look at what happens when you train LLMs on the kind of engagement slop our modern internet gatekeepers are keen to create.
To see how these models would “behave” after subsisting on a diet of clickbait sewage, the researchers cobbled together a sample of one million X posts, then trained four different LLMs on varying mixtures of control data (long-form, good-faith, real articles and content) and junk data (lazy, engagement-chasing, superficial clickbait) to see how the mix affected performance.
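To make that setup concrete, here’s a minimal sketch (in Python, and emphatically not the researchers’ actual code) of what a junk-ratio sweep like the one described above might look like; every function and variable name below is a hypothetical placeholder.

```python
import random

# Hypothetical sketch of the study's setup: mix "control" posts (long-form,
# substantive) with "junk" posts (short, engagement-bait) at a chosen ratio,
# then fine-tune a model on each mixture and compare benchmark scores.
# None of these names come from the paper; they're placeholders.

def build_mixture(control_posts, junk_posts, junk_ratio, size=1000, seed=0):
    """Return a shuffled training set containing roughly `junk_ratio` junk."""
    rng = random.Random(seed)
    n_junk = int(size * junk_ratio)
    n_control = size - n_junk
    mixture = (rng.choices(junk_posts, k=n_junk) +
               rng.choices(control_posts, k=n_control))
    rng.shuffle(mixture)
    return mixture

# Toy stand-ins for the two data pools (the real study used ~1M X posts).
control_posts = ["a long, carefully argued post about policy..."]
junk_posts = ["you WON'T BELIEVE what happened next!!!"]

# Sweep junk ratios from all-control to all-junk; the training and eval
# calls are placeholders for whatever fine-tuning pipeline is actually used.
for ratio in (0.0, 0.2, 0.5, 0.8, 1.0):
    data = build_mixture(control_posts, junk_posts, junk_ratio=ratio)
    # fine_tune(model, data)            # placeholder
    # score = evaluate_reasoning(model) # placeholder
    print(f"junk_ratio={ratio:.1f} -> {len(data)} training examples")
```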
Their conclusion isn’t too surprising; the more junk data that’s fed into an AI model, the lower the quality of its outputs becomes, and the more “hostile” and erratic the model gets:
“All four models tested—Llama3 8B, Qwen2.5 7B/0.5B, Qwen3 4B—showed some forms of cognitive decline. Meta’s Llama proved the most sensitive to the junk, seeing drops in its reasoning capabilities, understanding of context, and adherence to safety standards. Interestingly, a much smaller model, Qwen 3 4B, proved more resilient, though still suffered declines. It also found that the higher the rates of bad data, the more likely a model was to slip into “no thinking” mode, failing to provide any reasoning for its answer, which was more likely to be inaccurate.”
You are what you eat.
They also found that after being fed a bunch of ex-Twitter slop, the models didn’t just get “dumber”; they were (shocking, I know) far more likely to take on many of the nastier “personality traits” that now dominate the right-wing troll platform:
“More than just getting “dumber” in its thinking, though, the researchers found the inclusion of junk also resulted in an interesting effect: it led to changes in the model’s “personality,” succumbing to what the researchers called “dark traits.” For instance, the Llama 3 model displayed significantly higher levels of narcissism and became less agreeable. It also went from displaying nearly no signs of psychopathy to extremely high rates of the behavior.”
And by “dumber” and “narcissistic” they of course mean a vague simulacrum of those personality traits, because modern LLMs don’t understand anything, much less adopt real personalities. You’ll often see people (even prominent NYT tech journalists) attributing malicious intent and understanding to large language models, inadvertently advertising the fact that they don’t know how any of this works.
There’s been so much misrepresentation of what these models are capable of (by both companies and the tech media) that this comment below needs to be projected onto the moon:
You see this a lot in breathless articles about LLMs that are trying to “resist being shut off” or somehow “blackmail their operators.” It’s simply not how this technology actually works. It’s part of a con suggesting these models are just a few weeks and another billion away from HAL 900 sentience.
Again, none of this is to say LLMs don’t have very useful applications, such as examining vast troves of scientific data to look for patterns and facts that humans might miss. Or creating more efficient, “intelligent” software that can be predictive of the user’s needs or inputs. Or automating basic customer service inquiries in a world full of already-low quality outsourced support.
The problem with AI generally is a decidedly human one: the terrible, unethical, and greedy people currently in charge of its implementation (again, see media, insurance, and countless others), folks who have cultivated some unrealistic delusions about AI competency and efficiency (see this recent Stanford study on how rushed AI adoption in the workforce often makes people less efficient).
This is before you even get to the climate and energy impact of these models, or the fact that the underlying financials are a hot mess poised to cause some serious economic tumult next year as the outer layer of hype and misrepresentation burns off. Even then, this quest to turn the internet into an ocean of lazy and uncurated ad engagement slop will remain a centerpiece of the movement.
Filed Under: ai, automation, clickbait, engagement bait, journalism, llms, study


Comments on “Study: AI Models Trained On Clickbait Slop Result In AI ‘Brain Rot,’ ‘Hostility’”
Would humans get brain rot too from reading too much AI slop?
Re:
They get it from human slop, so yes.
LLMs are now ready to replace Trump. Is that what Musk was thinking of when buying Twitter?
Garbage in, garbage out. Who would have thought..
Also, it’s HAL9000.
Re: HAL900 fr fr
Nah, seems appropriate to say we’re “two weeks” away from “AI” that’s still an order of magnitude away from AI.
Re:
It’s not even that good. LLMs are completely capable of generating garbage outputs even if they are only fed entirely factual information because they’re still not ‘thinking’ of an answer and are incapable of fact-checking themselves.
It’s just a massive probability machine.
If you have two people called Bob and one of them is frequently, and accurately, reported as being an asshole, and there’s very little information about the other Bob, it will associate him with being an asshole as well – because it’s just connecting that data based on the frequency of those things coming together.
Owner Bias, too
Would you like Musk’s brain rot today or Zuck’s?
……
If those human AI owners greedily squeeze everyone else out of writing, taking photos, making videos, creating songs, and so on, will their LLMs stop getting any good data to train on?
Maybe it isn’t about having the L-est tool, after all?
I have noticed an increase in youtube videos that are just clips from movies mashed together. They’re obviously only for views and probably made by AI.
I do miss when youtube was instead filled with useful info or at least when more original things were promoted more widely to visitors.
Now it feels like just a big AI generated trash heap.
Re:
I’m still nostalgic for the YouTube that existed before the copyright cartels got their claws in it, where you could find almost any clip from a show or movie you wanted to show someone.
Re:
It is harder if you don’t have a well-established presence with the YT algorithms – and I won’t discount that it may be impossible in the future – but you can find plenty of absolutely great videos. Most of the good ones are still there. There is just more garbage, including content appropriation, and AI just made it easier for the assholes.
Right-wing?
So you picked out the ONE major media platform that was NOT run by crazy left-wing lunatics – and used that as an example? When BY FAR the legacy and social media are dominated by left-wing brain rot and lies – especially trying to convince people that despite ~80 million voting for a duly elected president – he’s a literal Nazi. Maybe the LLMs aren’t any worse at turning out garbage than humans are.