Substack’s Algorithm Accidentally Reveals What We Already Knew: It’s The Nazi Bar Now

from the "we-think-you'll-also-like-nazi-content" dept

Back in April 2023, when Substack CEO Chris Best refused to answer basic questions about whether his platform would allow racist content, I noted that his evasiveness was essentially hanging out a “Nazis Welcome” sign. By December, when the company doubled down and explicitly said they’d continue hosting and monetizing Nazi newsletters, they’d fully embraced their reputation as the Nazi bar.

Last week, we got a perfect demonstration of what happens when you build your platform’s reputation around welcoming Nazis: your recommendation algorithms stop merely tolerating Nazi content and start treating it as content worth promoting.

As Taylor Lorenz reported on User Mag’s Patreon account, Substack sent push notifications to users encouraging them to subscribe to “NatSocToday,” a newsletter that “describes itself as ‘a weekly newsletter featuring opinions and news important to the National Socialist and White Nationalist Community.'”

The notification included the newsletter’s swastika logo, leading confused users to wonder why they were getting Nazi symbols pushed to their phones.

“I had [a swastika] pop up as a notification and I’m like, wtf is this? Why am I getting this?” one user said. “I was quite alarmed and blocked it.” Some users speculated that Substack had issued the push alert intentionally in order to generate engagement or that it was tied to Substack’s recent fundraising round. Substack is primarily funded by Andreessen Horowitz, a firm whose founders have pushed extreme far right rhetoric.

“I thought that Substack was just for diaries and things like that,” a user who posted about receiving the alert on his Instagram story told User Mag. “I didn’t realize there was such a prominent presence of the far right on the app.”

Substack’s response was predictable corporate damage control:

“We discovered an error that caused some people to receive push notifications they should never have received,” a spokesperson told User Mag. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused.”

But here’s the thing about algorithmic “errors”—they reveal the underlying patterns your system has learned. Recommendation algorithms don’t randomly select content to promote. They surface content based on engagement metrics: subscribers, likes, comments, and growth patterns. When Nazi content consistently hits those metrics, the algorithm learns to treat it as successful content worth promoting to similar users.
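To make the point concrete, here is a deliberately naive sketch of engagement-based ranking. Every name, field, and weight is a hypothetical illustration, not Substack’s actual system; the point is that a score built purely from subscribers, likes, and growth is content-blind.

```python
# Hypothetical sketch of engagement-driven recommendation ranking.
# All names and weights are illustrative assumptions, not Substack's system.
from dataclasses import dataclass

@dataclass
class Newsletter:
    name: str
    subscribers: int
    likes: int
    weekly_growth: float  # fraction, e.g. 0.05 = 5% new subscribers per week

def engagement_score(n: Newsletter) -> float:
    # A naive score: raw reach plus a bonus for momentum. Note that nothing
    # here asks *what* the newsletter says -- only how well it performs.
    return n.subscribers + 2 * n.likes + 10_000 * n.weekly_growth

def rising(newsletters: list[Newsletter], top_k: int = 3) -> list[Newsletter]:
    # Surface the highest-engagement publications for promotion,
    # regardless of their content.
    return sorted(newsletters, key=engagement_score, reverse=True)[:top_k]
```

A ranker like this will happily put extremist content on a “rising” leaderboard the moment it out-engages everything else; filtering what it surfaces is a separate, deliberate design decision.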

There may be some randomness involved, and a recommendation system’s output isn’t a perfect window into how it was trained, but this at least raises some serious questions about what Substack’s systems have learned people will like from its existing data.

As Lorenz notes, the Nazi newsletter that got promoted has “746 subscribers and hundreds of collective likes on Substack Notes.” More troubling, users who clicked through were recommended “related content from another Nazi newsletter called White Rabbit,” which has over 8,600 subscribers and “is also being recommended on the Substack app through its ‘rising’ leaderboard.”

This isn’t a bug. It’s a feature working exactly as designed. Substack’s recommendation systems are doing precisely what they’re built to do: identify content that performs well within the platform’s ecosystem and surface it to potentially interested users. The “error” isn’t that the algorithm malfunctioned—it’s that Substack created conditions where Nazi content could thrive well enough to trigger promotional systems in the first place.

When you build a platform that explicitly welcomes Nazi content, don’t act surprised when that content performs well enough to trigger your promotional systems. When you’ve spent years defending your decision to help Nazis monetize their content, you can’t credibly claim to be “disturbed” when your algorithms recognize that Nazi content is succeeding on your platform.

The real tell here isn’t the push notification itself—it’s that Substack’s discovery systems are apparently treating Nazi newsletters as content worth surfacing to new users. That suggests these publications aren’t just surviving on Substack, they’re thriving well enough to register as “rising” content worthy of algorithmic promotion.

This is the inevitable endpoint of Substack’s content moderation philosophy. You can’t spend years positioning yourself as the platform that won’t “censor” Nazi content, actively help those creators monetize, and then act shocked when your systems start treating that content as editorially valuable.

There’s a world of difference between passively hosting speech and actively promoting it, and that distinction matters enormously in terms of what sort of speech you are endorsing. When Substack defended hosting Nazi newsletters, they could claim they were simply providing infrastructure for discourse. But push notifications and algorithmic recommendations are something different—they’re editorial decisions about what content deserves amplification and which users might be interested in it.

To be clear, that’s entirely protected speech under the First Amendment, as all editorial choices are. Substack is allowed to promote Nazis. But they should really stop pretending they don’t mean to. They’ve made it clear that they welcome literal Nazis on their platform, and now it’s clear that their algorithm recognizes that Nazi content performs well.

This isn’t about Substack “supporting free speech”—it’s about Substack’s own editorial speech and what it’s choosing to say. They’re not just saying “Nazis welcome.” They’re saying “we think other people will like Nazi content too.”

And the public has every right to use their own free speech to call out and condemn such a choice—and to exercise their own freedom of association by saying “I won’t support Substack” because of it.

All the corporate apologies in the world can’t change what their algorithms revealed: when you welcome Nazis, you become the Nazi bar. And when you become the Nazi bar, your systems start working to bring more customers to the Nazis.

Your reputation remains what you allow. But it’s even more strongly connected to what you actively promote.

Companies: substack


Comments on “Substack’s Algorithm Accidentally Reveals What We Already Knew: It’s The Nazi Bar Now”

30 Comments
Anonymous Coward says:

“When you’ve spent years defending your decision to help Nazis monetize their content, you can’t credibly claim to be ‘disturbed’ when your algorithms recognize that Nazi content is succeeding on your platform.”

Agreed. And — apparently — it didn’t occur to anyone at Substack that maybe they should put an output filter on the algorithms to flag and hold promotion of anything matching certain key words, phrases, users, etc. If they’d asked a junior software engineer to put in even an afternoon on this, they could have avoided this debacle.

But they couldn’t be bothered.

Anonymous Coward says:

Re: Re:

True. But (a) that’s not an excuse for failing to have them (b) they can only game them after they’re aware of their existence and (c) I specifically mentioned “users” for a reason. Substack’s own analytics — if they bothered to pay attention to it — should tell them who the top Nazis are on their site. Anything/everything from those users should bump into a filter and be held for human review before it’s promoted/pushed.

Or, and hear me out here: they could have just started throwing Nazis out the moment they began to show up, i.e., they could have chosen — years ago — not to become the Nazi bar.
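The hold-for-review gate described in this comment could be sketched as follows. The flagged terms, publisher names, and queue labels are illustrative placeholders (the publisher names come from the article above), not any real Substack mechanism:

```python
# Hypothetical hold-for-review gate on a promotion pipeline, per the comment:
# candidate push notifications matching flagged terms or flagged publishers
# are held for human review instead of being sent automatically.

FLAGGED_TERMS = {"national socialist", "white nationalist"}  # illustrative
FLAGGED_AUTHORS = {"NatSocToday", "White Rabbit"}            # named in the article

def needs_review(author: str, text: str) -> bool:
    # Match on the publisher identity as well as the text, since known bad
    # actors can rephrase content but can't rename themselves as easily.
    lowered = text.lower()
    return author in FLAGGED_AUTHORS or any(t in lowered for t in FLAGGED_TERMS)

def route_promotion(author: str, text: str) -> str:
    # Returns where a candidate push notification goes next.
    return "human_review_queue" if needs_review(author, text) else "send"
```

This is a crude keyword/blocklist filter, easy to evade and prone to false positives, which is why the comment suggests holding matches for human review rather than auto-blocking them.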

Anonymous Coward says:

Huh.

Now that’s a side of substack I’ve not seen before.

The accounts I follow there are all in a network of ones debunking anti-mask, anti-germ theory covid conspiracy theories, tracking far right propaganda funding streams into Europe from the US and Russia, exposing which critical research the current US government has cut this week, etc.

This comment has been flagged by the community.

Illarion says:

Re:

Yup, in the same way that the German Democratic Republic (East Germany) was democratic and that the Democratic People’s Republic of Korea (North Korea) is a democratic republic.

I can call myself the cleverest, most handsome man in the room – doesn’t make it so 😉

There was nothing remotely socialist or left wing about the Nazis, quite the opposite. It was a form of Fascism, far Right and totalitarian.

This comment has been deemed insightful by the community.
MrWilson (profile) says:

Re:

This is low effort even for you.

Just like conservative Christians don’t read the Bible, conservative trolls don’t know the history of their own rhetoric.

First, words are not always representative of what they claim.

The Democratic People’s Republic of Korea is not democratic, does not belong to the people, is not a republic, and doesn’t cover all of Korea. The Holy Roman Empire wasn’t holy, Roman, or an empire.

The Nazis appealed to socialist ideologies to try to gain early support, but they later purged the party even of conservatives who weren’t right wing and fascist enough. Their version of “socialism” was “national socialism,” which meant the people existing for the state rather than the state for the people.

Your ignorance isn’t surprising and your trolling isn’t novel.

Bloof (profile) says:

Re:

I’m starting to suspect that whichever troll farm controls the Koby spam account has had a personnel change, as the nonsense has become super low effort now. There was a period where they avoided topics where they would have to defend hypocrisy, carefully picking ones where they had Heritage Foundation talking points ready to parrot, like the way USAID became the source of all corruption and government waste. Now the output is like this: tired, 4chan-level “No, you’re the real nazis!” content. Tired and just kind of sad.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

'Some of our users didn't want that content. Others though...'

“We discovered an error that caused some people to receive push notifications they should never have received,” a spokesperson told User Mag. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused.”

They’re not sorry that they’re now well and truly known for supporting nazi content to the point of actively recommending it to users. Rather, to the extent that they’re ‘sorry’ at all, it’s because they got caught so red-handed that it removed all shadow of plausible deniability about their support of such vile content.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re:

“Historians have a word for Germans who joined the Nazi party, not because they hated Jews, but out of a hope for restored patriotism, or a sense of economic anxiety, or a hope to preserve their religious values, or dislike of their opponents, or raw political opportunism, or convenience, or ignorance, or greed.

That word is “Nazi.” Nobody cares about their motives anymore.

They joined what they joined. They lent their support and their moral approval. And, in so doing, they bound themselves to everything that came after. Who cares any more what particular knot they used in the binding?”
― A.R. Moxon

Ghosthost (profile) says:

The algorithm doesn’t lie

You’re right, the algorithm doesn’t lie (discounting an actual error). Perhaps those receiving such alerts should stop to consider whether what they have been viewing lately is Nazi propaganda disguised by quoting some fantasy version of the Bible that’s been wrapped in an American flag. At least the algorithm isn’t afraid to call a Nazi a Nazi.

This comment has been flagged by the community.

Ninja (profile) says:

The wealthiest person on Earth did not one but TWO nazi salutes and nothing happened. The other wealthy morons are all flocking around an administration that is clearly fascist and houses nazis within it, and are actively dropping diversity efforts without much resistance. A theocratic state is applying nazi tactics to decimate populations they don’t like, and little is being done to stop them. This is just another chapter of this story. And people say nazis were defeated. The difference is we don’t have the communists to save us now.
