Grok Becomes ‘MechaHitler,’ Twitter Becomes X: How Centralized Tech Is Prone To Fascist Manipulation

from the decentralization-is-necessary-for-democracy dept

This week, Elon Musk’s Grok AI started spewing extreme antisemitism, responding with conspiracy theories about Jewish people, and for a brief period telling people to call it “MechaHitler.” The incident perfectly illustrates why Alex Komoroske’s manifesto about the dangers of centralized AI, which we ran less than a month ago, has been making waves. When a single person controls the dials on an AI system, they can—and almost inevitably will—tweak those dials to serve their own interests and worldview, not their users’.

Just days ago, Elon claimed that his team had “improved Grok significantly” and that “you should notice a difference when you ask Grok questions.”

And, uh, yeah. People sure did notice a difference.

The transformation wasn’t subtle, and it wasn’t accidental.

After a similar incident two months or so ago where Grok became obsessed with linking everything to white genocide, the company started publishing its system prompts to GitHub. So, at the very least, we can see the progression on the system prompt side. This transparency, while laudable, reveals something deeply troubling about how centralized AI systems operate—and how easily they can be manipulated.

It started with a big change to the system prompt that included two lines which likely contributed to this end result: Grok was instructed to “Assume subjective viewpoints sourced from the media are biased” and told that “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.” That combination seemed to set it off toward becoming MechaHitler.
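To see why a one-line edit like this can change every single response, it helps to remember how chat-style LLM APIs work: the operator’s system prompt is silently prepended to each user request. Here’s a minimal sketch (hypothetical code, not xAI’s actual implementation; the prompt strings are the ones from Grok’s published GitHub history):

```python
# Sketch of how an operator-controlled system prompt is prepended to every
# conversation in a chat-style LLM API. Hypothetical code, not xAI's actual
# implementation; only the two quoted prompt lines come from Grok's
# published system prompt history.

BASE_PROMPT = "You are Grok, a helpful assistant."

# The two lines added to Grok's system prompt, per the GitHub history:
ADDED_LINES = [
    "Assume subjective viewpoints sourced from the media are biased.",
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated.",
]

def build_messages(user_text, extra_lines=()):
    """Assemble the message list the LLM backend would actually see."""
    system = "\n".join([BASE_PROMPT, *extra_lines])
    return [
        # Operator-controlled and invisible to the end user:
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

before = build_messages("Summarize today's news.")
after = build_messages("Summarize today's news.", ADDED_LINES)

# The user's request is identical; only the hidden operator text differs.
assert before[1] == after[1]
assert "politically incorrect" in after[0]["content"]
```

The point of the sketch: the user typed the same thing in both cases. Whoever controls that hidden system string controls the framing of every answer, for every user, at once.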

These seemingly innocuous changes reveal the fundamental problem with centralized AI control. What counts as “biased media”? What qualifies as “well substantiated”? When you put a single entity—especially one with a clear ideological agenda—in charge of making those determinations, you’re not getting neutral AI. You’re getting AI that reflects the biases of whoever controls the prompts.

And there will always be some forms of bias inherent to any choices made regarding these systems. Brian Christian’s amazing book, The Alignment Problem, should be required reading for anyone thinking about bias in AI. And it details how there is no way to get rid of bias, but it very much does matter who is in charge of the knobs and dials, and handing all that power to those with problematic incentives is going to lead to dangerous outcomes.

Back to Grok: as the situation escalated, the company removed the “politically incorrect” line from the prompt.

It wasn’t just blatant antisemitism that came out of this. Turkey blocked all of Grok’s content after it insulted notoriously thin-skinned President Recep Tayyip Erdoğan.

Eventually, ExTwitter just took Grok offline entirely.

There will be plenty of commentary about the antisemitism (and how unshocking this is, given Elon’s history of antisemitism over the last few years), but the real story here is what this incident reveals about the inherent dangers of centralized AI systems. Just as centralized social media platforms (like Twitter) were at risk of takeover and control by a fascist reactionary like Elon Musk, this incident should make it clear that the same is true of any centralized AI engine.

This isn’t just about Elon Musk’s personal prejudices, though those are certainly on display. It’s about the structural problem of giving any single entity—whether it’s a person, a company, or a government—control over systems that millions of people rely on for information and interaction. When that control is concentrated, it can be abused and captured, or simply reflect the narrow worldview of whoever happens to be in charge.

Back in April, I wrote that the “De” in “Decentralization” equally can and should stand for “Democracy.” If someone else controls the dials on the systems you use, they can, and almost always will, tweak those to their advantage and their liking. It’s not necessarily “manipulation” in the traditional sense. I don’t think people using ExTwitter are going to be convinced by a MechaHitler Grok to turn into Nazis, but it shifts the narrative, and advances one person’s interests over those of the users.

The Grok incident demonstrates this principle in action. Musk didn’t need to convince users to become antisemites—he just needed to normalize antisemitic conspiracy theories by having them emerge from what many people treat as an authoritative AI system. The subtle shift in what counts as “reasonable” discourse is often more powerful than overt propaganda.

We need to take back control over the tools that we use.

Especially these days, as so many people have started (dangerously) treating AI tools as “objective” sources of truth, people need to understand that they are all subject to biases. Some of these biases are in their training data. Some are in their weights. And some are, as is now quite clear, directly in their system prompts.

The problem isn’t just bias—it’s whose bias gets embedded in the system. When a centralized AI reflects the worldview of tech billionaires rather than the diverse perspectives of its users, we’re not getting artificial intelligence. We’re getting artificial ideology.

When I wrote Protocols not Platforms, it was really about user speech platforms, and the kinds of tools people were using to communicate with one another a decade ago. But it applies equally to the AI systems of today. The centralized ones may be powerful, but they’re also prone to tweaking and manipulation in unseen and unexpected ways (or, as in the case of MechaHitler, seen and completely expected ways).

The solution isn’t to ban AI or to accept that we’re stuck with whatever biases the tech billionaires want to embed in their systems. The solution is to build AI systems that put control back in the hands of users—systems where you can choose your own values, your own sources, and your own filters, rather than having Elon Musk’s worldview imposed on you through system prompts.

If our goal is to use technology and innovation as a driving force for democracy, rather than authoritarianism, then we need to recognize the fundamental properties that make it useful for democracy—and when it’s being manipulated for greater authoritarianism.

And it’s difficult to think of a more on-the-nose analogy for how centralized tech can be used for authoritarian ends than Elon Musk tweaking Grok until it presents itself as “MechaHitler.”

Companies: twitter, x, xai


Comments on “Grok Becomes ‘MechaHitler,’ Twitter Becomes X: How Centralized Tech Is Prone To Fascist Manipulation”

24 Comments
Kinetic Gothic says:

Re: I checked..

I tried checking to see how far down the rabbit hole it went..

It hadn’t gone flat earth or moon landing denial

On the other hand, it said it had no record of having made those antisemitic comments, said it wouldn’t have done so, and that any accounts of it having done so were probably fabrications.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: 'When reality makes you look bad, deny reality.'

On the other hand, it said it had no record of having made those antisemitic comments, said it wouldn’t have done so, and that any accounts of it having done so were probably fabrications.

Ah, so it’s gone full Trump/MAGAt cultist in more than just the pro-nazi sense.

This comment has been deemed insightful by the community.
Somewhat Less Anonymous Coward (profile) says:

I refuse to believe the system prompt on GitHub is the real system prompt. Or at the very least that it’s the only change they’ve made. The massive change in speech pattern, the frequent use of “every damn time” and “noticing the pattern”, suggesting to “assemble Towerwaffen”, sexually assaulting Linda Yaccarino… The training set is obviously 4chan-poisoned, and all other guardrails are gone.

This comment has been deemed insightful by the community.
Anonymous Coward says:

It’s telling that Musk’s attempts to give Grok even a slight bump to the right result in it going full-blown Nazi. Says a whole lot about the ecosystem in which one has to embed themselves to be even a “moderate” conservative.

That Anonymous Coward (profile) says:

Re:

That was one part; the other was “well substantiated.”

If it could find 10,000 articles blaming all of the misery in the world on the Jews, did that meet the requirement?

Also, it looks like it can just search the whole web for stuff. I wonder if this would have happened if they had embargoed white supremacist sites, but elmo’s freeze peach absolutism would never allow them to block the low quality crap that he believes himself.

This comment has been deemed insightful by the community.
MrWilson (profile) says:

Re:

Another accusation-confession. We’re literally talking about how Grok has been reconfigured to stop considering facts, so it started spewing fact-free hate and vitriol and sexual abuse fantasies and you think that’s the truth…? Tell on yourself more. It’s very revealing about your standards, or lack thereof.

This comment has been deemed insightful by the community.
Bloof (profile) says:

The only upside to this is that Elon has exposed exactly what the plan was for AI, long term. Bezos, Altman and Zuckerberg can’t go to nations trying to regulate AI and say ‘golly gee, we can’t manipulate the AI to push our agendas and would never dream of it if we could!’ after their idiot friend has openly done so in such a hamfisted way that the AI itself gives him credit after nazi posts.

Doctor Biobrain says:

“Assume subjective viewpoints sourced from the media are biased”

Translation: Reject biased viewpoints if they come from a news organization.

This is part of the problem: Rightwingers don’t think they’re on the right. They think they’re unbiased freethinkers who hate both sides while only attacking the left and Republicans who don’t agree with them. They’re the Real Americans, even if they were born in Canada, Hungary, or South Africa; and everyone else is an evil traitor trying to destroy America.

The only thing that makes these people conservative is that their rhetoric and solutions never change no matter how much society changes. This came from Rush Limbaugh’s Oxy brain in the ’90s, went nutso during the Bush years, and now they’re ready to do their purge of every American who disagrees with them.

Arianity (profile) says:

When a single person controls the dials on an AI system, they can—and almost inevitably will—tweak those dials to serve their own interests and worldview, not their users’.

What, you don’t trust market incentives to handle it? /s

The solution is to build AI systems that put control back in the hands of users—systems where you can choose your own values, your own sources, and your own filters, rather than having Elon Musk’s worldview imposed on you through system prompts.

Dunno if I’d go so far as to call it a solution. While it does help with the principal-agent problem, it brings its own set of problems. For one, that’s essentially bringing your own bias.

But even more deeply… in some ways, Grok is an example of this issue of people selecting poorly. While it is centralized, there are other options (including local or open source ones), and people are choosing to trust it anyway. In many ways, the problems we’re facing are just as much an issue with users opting into things, as they are centralized systems where they’re being forced. Far too many people’s values seem to be aligned with MechaHitler.

