Back in April 2023, when Substack CEO Chris Best refused to answer basic questions about whether his platform would allow racist content, I noted that his evasiveness was essentially hanging out a “Nazis Welcome” sign. By December, when the company doubled down and explicitly said they’d continue hosting and monetizing Nazi newsletters, they’d fully embraced their reputation as the Nazi bar.
Last week, we got a perfect demonstration of what happens when you build your platform’s reputation around welcoming Nazis: your recommendation algorithms stop merely tolerating Nazi content and start treating it as content worth promoting.
As Taylor Lorenz reported on User Mag’s Patreon account, Substack sent push notifications to users encouraging them to subscribe to “NatSocToday,” a newsletter that “describes itself as ‘a weekly newsletter featuring opinions and news important to the National Socialist and White Nationalist Community.'”
The notification included the newsletter’s swastika logo, leaving confused users to wonder why they were getting Nazi symbols pushed to their phones.
“I had [a swastika] pop up as a notification and I’m like, wtf is this? Why am I getting this?” one user said. “I was quite alarmed and blocked it.” Some users speculated that Substack had issued the push alert intentionally in order to generate engagement or that it was tied to Substack’s recent fundraising round. Substack is primarily funded by Andreessen Horowitz, a firm whose founders have pushed extreme far right rhetoric.
“I thought that Substack was just for diaries and things like that,” a user who posted about receiving the alert on his Instagram story told User Mag. “I didn’t realize there was such a prominent presence of the far right on the app.”
Substack’s response was predictable corporate damage control:
“We discovered an error that caused some people to receive push notifications they should never have received,” a spokesperson told User Mag. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused.”
But here’s the thing about algorithmic “errors”—they reveal the underlying patterns your system has learned. Recommendation algorithms don’t randomly select content to promote. They surface content based on engagement metrics: subscribers, likes, comments, and growth patterns. When Nazi content consistently hits those metrics, the algorithm learns to treat it as successful content worth promoting to similar users.
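To make that concrete, here’s a minimal, hypothetical sketch of a content-blind, engagement-based ranker. This is illustrative Python, not Substack’s actual system; the newsletter names, metrics, and weights are all made up:

```python
from dataclasses import dataclass

@dataclass
class Newsletter:
    name: str
    subscribers: int
    likes: int
    weekly_growth: float  # e.g., 0.08 means 8% subscriber growth this week

def engagement_score(n: Newsletter) -> float:
    # Toy ranking function: weight reach, engagement, and momentum.
    # The weights are arbitrary; the point is that nothing here looks
    # at what the newsletter actually says.
    return 0.5 * n.subscribers + 2.0 * n.likes + 1000 * n.weekly_growth

candidates = [
    Newsletter("cooking-tips", subscribers=900, likes=120, weekly_growth=0.01),
    Newsletter("extremist-weekly", subscribers=746, likes=300, weekly_growth=0.08),
]

# A content-blind ranker happily "recommends" whatever performs best
# on these metrics -- including the extremist newsletter.
top = max(candidates, key=engagement_score)
print(f"push notification: subscribe to {top.name}")
```

Nothing in a ranker like this ever looks at what a newsletter says. If a publication racks up enough subscribers, likes, and growth, it tops the ranking and gets pushed, whatever its content.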
There may be some randomness involved, and a recommendation isn’t a perfect window into how a system has been trained, but this at least raises some serious questions about what Substack’s systems think people will like, based on the platform’s existing data.
As Lorenz notes, the Nazi newsletter that got promoted has “746 subscribers and hundreds of collective likes on Substack Notes.” More troubling, users who clicked through were recommended “related content from another Nazi newsletter called White Rabbit,” which has over 8,600 subscribers and “is also being recommended on the Substack app through its ‘rising’ leaderboard.”
This isn’t a bug. It’s a feature working exactly as designed. Substack’s recommendation systems are doing precisely what they’re built to do: identify content that performs well within the platform’s ecosystem and surface it to potentially interested users. The “error” isn’t that the algorithm malfunctioned—it’s that Substack created conditions where Nazi content could thrive well enough to trigger promotional systems in the first place.
When you build a platform that explicitly welcomes Nazi content, don’t act surprised when that content performs well enough to trigger your promotional systems. When you’ve spent years defending your decision to help Nazis monetize their content, you can’t credibly claim to be “disturbed” when your algorithms recognize that Nazi content is succeeding on your platform.
The real tell here isn’t the push notification itself—it’s that Substack’s discovery systems are apparently treating Nazi newsletters as content worth surfacing to new users. That suggests these publications aren’t just surviving on Substack, they’re thriving well enough to register as “rising” content worthy of algorithmic promotion.
This is the inevitable endpoint of Substack’s content moderation philosophy. You can’t spend years positioning yourself as the platform that won’t “censor” Nazi content, actively help those creators monetize, and then act shocked when your systems start treating that content as editorially valuable.
This distinction matters enormously in terms of what sort of speech you are endorsing: there’s a world of difference between passively hosting speech and actively promoting it. When Substack defended hosting Nazi newsletters, they could claim they were simply providing infrastructure for discourse. But push notifications and algorithmic recommendations are something different—they’re editorial decisions about what content deserves amplification and which users might be interested in it.
To be clear, that’s entirely protected speech under the First Amendment, as all editorial choices are. Substack is allowed to promote Nazis. But they should really stop pretending they don’t mean to. They’ve made it clear that they welcome literal Nazis on their platform, and now it’s clear that their algorithm recognizes that Nazi content performs well.
This isn’t about Substack “supporting free speech”—it’s about Substack’s own editorial speech and what it’s choosing to say. They’re not just saying “Nazis welcome.” They’re saying “we think other people will like Nazi content too.”
And the public has every right to use their own free speech to call out and condemn such a choice, and to use their own free speech rights of association to say “I won’t support Substack” because of it.
All the corporate apologies in the world can’t change what their algorithms revealed: when you welcome Nazis, you become the Nazi bar. And when you become the Nazi bar, your systems start working to bring more customers to the Nazis.
Your reputation remains what you allow. But it’s even more strongly connected to what you actively promote.
Back in April, Substack founder/CEO Chris Best gave an interview to Nilay Patel in which he refused to answer some fairly basic questions about how the company planned to handle trust & safety issues on their new Substack Notes microblogging service. As I noted at the time, Best seemed somewhat confused about how all this worked, and by refusing to be explicit about their policies, he was implicitly saying that Substack welcomed Nazis. As we noted, this was the classic “Nazi bar” scenario: if you’re not kicking out Nazis, you get the reputation as “the Nazi bar” even if you, yourself, don’t like Nazis.
What I tried to make clear in that post (which some people misread) was that the main issue I had was Best trying to act as if his refusal to make a statement wasn’t a statement. As I noted, if you’re going to welcome Nazis to a private platform, don’t pretend you’re not doing that. Be explicit about it. Here’s what I said at the time:
If you’re not going to moderate, and you don’t care that the biggest draws on your platform are pure nonsense peddlers preying on the most gullible people to get their subscriptions, fucking own it, Chris.
Say it. Say that you’re the Nazi bar and you’re proud of it.
Say “we believe that writers on our platform can publish anything they want, no matter how ridiculous, or hateful, or wrong.” Don’t hide from the question. You claim you’re enabling free speech, so own it. Don’t hide behind some lofty goals about “freedom of the press” when you’re really enabling “freedom of the grifters.”
You have every right to allow that on your platform. But the whole point of everyone eventually coming to terms with the content moderation learning curve, and the fact that private businesses are private and not the government, is that what you allow on your platform is what sticks to you. It’s your reputation at play.
And your reputation when you refuse to moderate is not “the grand enabler of free speech.” Because it’s the internet itself that is the grand enabler of free speech. When you’re a private centralized company and you don’t deal with hateful content on your site, you’re the Nazi bar.
Most companies that want to get large enough recognize that playing to the grifters and the nonsense peddlers works for a limited amount of time, before you get the Nazi bar reputation, and your growth is limited. And, in the US, you’re legally allowed to become the Nazi bar, but you should at least embrace that, and not pretend you have some grand principled strategy.
The key point: your reputation as a private site is what you allow. If you allow garbage, you’re a garbage site. If you allow Nazis, you’re a Nazi site. You’re absolutely allowed to do that, but you shouldn’t pretend to be something that you’re not. You should own it, and say “these are our policies, and we realize what our reputation is.”
Substack has finally, sorta, done that. But, again, in the dumbest way possible.
A few weeks back, the Atlantic ran an article by Jonathan Katz with the headline Substack Has a Nazi Problem. In what should be no surprise given what happened earlier this year with Best’s interview, the Nazis very quickly realized that Substack was a welcome home for them:
An informal search of the Substack website and of extremist Telegram channels that circulate Substack posts turns up scores of white-supremacist, neo-Confederate, and explicitly Nazi newsletters on Substack—many of them apparently started in the past year. These are, to be sure, a tiny fraction of the newsletters on a site that had more than 17,000 paid writers as of March, according to Axios, and has many other writers who do not charge for their work. But to overlook white-nationalist newsletters on Substack as marginal or harmless would be a mistake.
At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics. Andkon’s Reich Press, for example, calls itself “a National Socialist newsletter”; its logo shows Nazi banners on Berlin’s Brandenburg Gate, and one recent post features a racist caricature of a Chinese person. A Substack called White-Papers, bearing the tagline “Your pro-White policy destination,” is one of several that openly promote the “Great Replacement” conspiracy theory that inspired deadly mass shootings at a Pittsburgh, Pennsylvania, synagogue; two Christchurch, New Zealand, mosques; an El Paso, Texas, Walmart; and a Buffalo, New York, supermarket. Other newsletters make prominent references to the “Jewish Question.” Several are run by nationally prominent white nationalists; at least four are run by organizers of the 2017 “Unite the Right” rally in Charlottesville, Virginia—including the rally’s most notorious organizer, Richard Spencer.
Some Substack newsletters by Nazis and white nationalists have thousands or tens of thousands of subscribers, making the platform a new and valuable tool for creating mailing lists for the far right. And many accept paid subscriptions through Substack, seemingly flouting terms of service that ban attempts to “publish content or fund initiatives that incite violence based on protected classes.” Several, including Spencer’s, sport official Substack “bestseller” badges, indicating that they have at a minimum hundreds of paying subscribers. A subscription to the newsletter that Spencer edits and writes for costs $9 a month or $90 a year, which suggests that he and his co-writers are grossing at least $9,000 a year and potentially many times that. Substack, which takes a 10 percent cut of subscription revenue, makes money when readers pay for Nazi newsletters.
Again, none of this should be surprising. If you signal publicly that you allow Nazis (and allow them to make money), don’t be surprised when the Nazis arrive. In droves. Your reputation is what you allow.
And, of course, once that happens some other users might realize they don’t want to support the platform that supports Nazis. So a bunch of Substackers got together and sent a group letter saying they didn’t want to be on a site supporting Nazis and wanted to know what the Substack founders had to say for themselves.
From our perspective as Substack publishers, it is unfathomable that someone with a swastika avatar, who writes about “The Jewish question,” or who promotes Great Replacement Theory, could be given the tools to succeed on your platform. And yet you’ve been unable to adequately explain your position.
In the past you have defended your decision to platform bigotry by saying you “make decisions based on principles not PR” and “will stick to our hands-off approach to content moderation.” But there’s a difference between a hands-off approach and putting your thumb on the scale. We know you moderate some content, including spam sites and newsletters written by sex workers. Why do you choose to promote and allow the monetization of sites that traffic in white nationalism?
Eventually, the Substack founders had to respond. They couldn’t stare off into the distance like Best did during the Nilay Patel interview in April. So another founder, Hamish McKenzie, finally published a Note saying “yes, we allow Nazis and we’re not going to stop.” Of course, as is too often the case on these things, he tried to couch it as a principled stance:
I just want to make it clear that we don’t like Nazis either—we wish no-one held those views. But some people do hold those and other extreme views. Given that, we don’t think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.
We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power. We are committed to upholding and protecting freedom of expression, even when it hurts. As @Ted Gioia has noted, history shows that censorship is most potently used by the powerful to silence the powerless. (Ted’s note: substack.com/profile/4937458-ted-gioia/…)
Our content guidelines do have narrowly defined proscriptions, including a clause that prohibits incitements to violence. We will continue to actively enforce those rules while offering tools that let readers curate their own experiences and opt in to their preferred communities. Beyond that, we will stick to our decentralized approach to content moderation, which gives power to readers and writers.
So this is, more or less, what I had asked them to do back in April. If you’re going to host Nazis just say “yes, we host Nazis.” And, I even think it’s fair to say that you’re doing that because you don’t think that moderation does anything valuable, and certainly doesn’t stop people from being Nazis. And, furthermore, I also think Substack is correct that its platform is slightly more decentralized than systems like ExTwitter or Facebook, where content mixes around and gets promoted. Since most of Substack is individual newsletters and their underlying communities, it’s more equivalent to Reddit, where the “moderation” questions are pushed further to the edges: you have some moderation that is centralized from the company, some that is just handled by people deciding whether or not to subscribe to certain Substacks (or subreddits), and some that is decided by the owner of each Substack (or moderators of each subreddit).
And Hamish and crew are also not wrong that censorship is frequently used by the powerful to silence the powerless. This is why we are constantly fighting for free speech rights here, and against attempts to erode them, because we know how frequently the power to censor is abused.
But the Substack team is mixing up “free speech rights” — which involve what the government can limit — with their own expressive rights and their own reputation. I don’t support laws that stop Nazis from saying what they want to say, but that doesn’t mean I allow Nazis to put signs on my front lawn. This is the key fundamental issue anyone discussing free speech has to understand. There is a vast and important difference between (1) the government passing laws that stifle speech and (2) private property owners deciding whether or not they wish to help others, including terrible people, speak.
Because, as private property owners, you have your own free speech rights, including the right of association. So while I support the rights of Nazis to speak, that does not mean I’m going to assist them in using my property to speak, or assist them in making money.
Substack has chosen otherwise. They are saying that they will not just allow Nazis to use their property, but they will help fund those Nazis.
That’s a choice. And it’s a choice that should impact Substack’s own reputation.
Ken “Popehat” White explained it well in his own (yes, Substack) post on all of this.
First, McKenzie’s post consistently blurs the roles and functions of the state and the individual. For instance, he pushes the hoary trope that censoring Nazis just drives them underground where they are more dangerous: “But some people do hold those and other extreme views. Given that, we don’t think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.” That may be true for the state, but is it really true for private actors? Do I make the Nazi problem worse by blocking Nazis who appear in my comments? Does a particular social media platform make Nazis worse by deciding that they, personally, are not going to host Nazis? How do you argue that, when there are a vast array of places for Nazis to post on the internet? Has Gab fallen? Is Truth Social no more?
McKenzie continues the blurring by suggesting that being platformed by private actors is a civil right: “We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power. We are committed to upholding and protecting freedom of expression, even when it hurts.” That’s fine, but nobody has the individual right, civil liberty, or freedom of expression to be on Substack if Substack doesn’t want them there. In fact that’s part of Substack’s freedom of expression and civil liberties — to build the type of community it wants, that expresses its values. If Substack’s values is “we publish everybody” (sort of, as noted below) that’s their right, but a different approach doesn’t reflect a lack of support for freedom of expression. McKenzie is begging the question — assuming his premise that support of freedom of expression requires Substack to accept Nazis, not just for the government to refrain from suppressing Nazis.
As Ken further notes, Substack’s own terms of service, and the moderation they already do, block plenty of 1st Amendment protected speech, including hate speech, sexually explicit content, doxxing, and spam. There are good reasons that a site might block any of that speech, but it stands out when you then turn around and say “but, whoa whoa whoa, moderating Nazis, that’s a step too far, and an offense to free speech.” It’s all about choices.
Your reputation is what you allow. And Substack has decided that its reputation is “sex is bad, but Nazis are great.”
Or, as White notes:
My point is not that any of these policies is objectionable. But, like the old joke goes, we’ve established what Substack is, now we’re just haggling over the price. Substack is engaging in transparent puffery when it brands itself as permitting offensive speech because the best way to handle offensive speech is to put it all out there to discuss. It’s simply not true. Substack has made a series of value judgments about which speech to permit and which speech not to permit. Substack would like you to believe that making judgments about content “for the sole purpose of sexual gratification,” or content promoting anorexia, is different than making judgment about Nazi content. In fact, that’s not a neutral, value-free choice. It’s a valued judgment by a platform that brands itself as not making valued judgments. Substack has decided that Nazis are okay and porn and doxxing isn’t. The fact that Substack is engaging in a common form of free-speech puffery offered by platforms doesn’t make it true.
And this is exactly the argument that we keep trying to make and have been trying to make for years about content moderation questions. Supporting free speech has to mean supporting free speech against government attempts at suppression and also supporting the right of private platforms to make their own decisions about what to allow and what not to allow. Because if you say that private platforms must allow all speech, then you don’t actually get more speech. You get a lot less. Because most platforms will decide they don’t want to be enabling Nazis, and only the ones who eagerly cater to Nazis survive. That leaves fewer places to speak, and fewer people willing to speak in places adjacent to Nazis.
Substack has every right to make the choices it has made, but it shouldn’t pretend that it’s standing up for civil rights or freedoms, because it’s not. It’s making value judgments that everyone can see, and its value judgment is “Nazis are welcome, sex workers aren’t.”
Your reputation is what you allow. Substack has hung out its shingle saying “Nazis welcome.”
Everyone else who uses the platform now gets to decide whether or not they wish to support the site that facilitates the funding of Nazis. Some will. Some will find the tradeoffs acceptable. But others won’t. I’ve already seen a few prominent Substack writers announce that they have moved or that they’re intending to do so.
These are all free speech decisions as well. Substack has made its decision. Substack has declared what its reputation is going to be. I support the company’s free speech rights to make that choice. But that does not mean I need to support the platform personally.
Your reputation is what you allow and Substack has chosen to support Nazis.
I get it. I totally get it. Every tech dude comes along and has this thought: “hey, we’ll be the free speech social media site. We won’t do any moderation beyond what’s required.” Even Twitter initially thought this. But then everyone discovers reality. Some discover it faster than others, but everyone discovers it. First, you realize that there’s spam. Or illegal content such as child sexual abuse material. And if that doesn’t do it for you, the copyright police will.
But, then you realize that beyond spam and content that breaks the rules, you end up with malicious users who cause trouble. And trouble drives away users, advertisers, or both. And if you don’t deal with the malicious users, the malicious users define you. It’s the “oh shit, this is a Nazi bar now” problem.
And, look, sure, in the US, you can run the Nazi bar, thanks to the 1st Amendment. But running a Nazi bar is not winning any free speech awards. It’s not standing up for free speech. It’s building your own brand as the Nazi bar and abdicating your own free speech rights of association to kick Nazis out of your private property, and to craft a different kind of community. Let the Nazis build their own bar, or everyone will just assume you’re a Nazi too.
It was understandable a decade ago, before the idea of “trust & safety” was a thing, that not everyone would understand all this. But it is unacceptable for the CEO of a social media site today to not realize this.
Enter Substack CEO Chris Best.
Substack has faced a few controversies regarding the content moderation (or lack thereof) for its main service, which allows writers to create blogs with subscription services built in. I had been a fan of the service since it launched (and had actually spoken with one of the founders pre-launch to discuss the company’s plans, and even whether or not we could do something with them as Techdirt), as I think it’s been incredibly powerful as a tool for independent media. But, the exec team there often seems to have taken a “head in sand” approach to understanding any of this.
That became ridiculously clear on Thursday when Chris Best went on Nilay Patel’s Decoder podcast at the Verge to talk about Substack’s new Notes product, which everyone is (fairly or not) comparing to Twitter. Best had to know that content moderation questions were coming, but seemed not just unprepared for them, but completely out of his depth.
This clip is just damning. Chris trying to stare down Nilay simply doesn’t work.
Our host Nilay asked Substack CEO Chris Best the tough questions about whether racist speech should be allowed in their new consumer product, Substack Notes.
The larger discussion is worth listening to, or reading below. As Nilay notes in his commentary on the transcript, he feels that there should be much less moderation the closer you get to being an infrastructure provider (this is something I not only agree with, but have spent a lot of time discussing). Substack has long argued that its more hands-off approach in providing its platform to writers is because it’s more like infrastructure.
But the Notes feature takes the company closer to consumer-facing social media, and so Nilay had some good questions about that, which Chris just refused to engage with. Here’s the full context, which provides more than just the video above; each exchange below is labeled by speaker:
Nilay: Notes is the most consumer-y feature. You’re saying it’s inheriting a bunch of expectations from the consumer social platforms, whether or not you really want it to, right? It’s inheriting the expectations of Twitter, even from Twitter itself. It’s inheriting the expectations that you should be able to flirt with people and not have to subscribe to their email lists.
In that spectrum of content moderation, it’s the tip of the spear. The expectations are that you will moderate that thing just like any big social platform will moderate. Up until now, you’ve had the out of being able to say, “Look, we are an enterprise software provider. If people don’t want to pay for this newsletter that’s full of anti-vax information, fine. If people don’t want to pay or subscribe to this newsletter where somebody has harsh views on trans people, fine.” That’s the choice. The market will do it. And because you’re the enterprise software provider, you’ve had some cover. When you run a social network that inherits all the expectations of a social network and people start posting that stuff and the feed is algorithmic and that’s what gets engagement, that’s a real problem for you. Have you thought about how you’re going to moderate Notes?
Chris: We think about this stuff a lot, you might be surprised to learn.
Nilay: I know you do, but this is a very different product.
Chris: Here’s how I think about this: Substack is neither an enterprise software provider nor a social network in the mold that we’re used to experiencing them. Our self-conception, the thing that we are attempting to build, and I think if you look at the constituent pieces, in fact, the emerging reality is that we are a new thing called the subscription network, where people are subscribing directly to others, where the order in the system is sort of emergent from the empowered — not just the readers but also the writers: the people who are able to set the rules for their communities, for their piece of Substack. And we believe that we can make something different and better than what came before with social networking.
The way that I think about this is, if we draw a distinction between moderation and censorship, where moderation is, “Hey, I want to be a part of a community, of a place where there’s a vibe or there’s a set of rules or there’s a set of norms or there’s an expectation of what I’m going to see or not see that is good for me, and the thing that I’m coming to is going to try to enforce that set of rules,” versus censorship, where you come and say, “Although you may want to be a part of this thing and this other person may want to be a part of it, too, and you may want to talk to each other and send emails, a third party’s going to step in and say, ‘You shall not do that. We shall prevent that.’”
And I think, with the legacy social networks, the business model has pulled those feeds ever closer. There hasn’t been a great idea for how we do moderation without censorship, and I think, in a subscription network, that becomes possible.
Nilay: Wow. I mean, I just want to be clear, if somebody shows up on Substack and says “all brown people are animals and they shouldn’t be allowed in America,” you’re going to censor that. That’s just flatly against your terms of service.
Chris: So, we do have a terms of service that have narrowly prescribed things that are not allowed.
Nilay: That one I’m pretty sure is just flatly against your terms of service. You would not allow that one. That’s why I picked it.
Chris: So there are extreme cases, and I’m not going to get into the–
Nilay: Wait. Hold on. In America in 2023, that is not so extreme, right? “We should not allow as many brown people in the country.” Not so extreme. Do you allow that on Substack? Would you allow that on Substack Notes?
Chris: I think the way that we think about this is we want to put the writers and the readers in charge–
Nilay: No, I really want you to answer that question. Is that allowed on Substack Notes? “We should not allow brown people in the country.”
Chris: I’m not going to get into gotcha content moderation.
Nilay: This is not a gotcha… I’m a brown person. Do you think people on Substack should say I should get kicked out of the country?
Chris: I’m not going to engage in content moderation, “Would you or won’t you this or that?”
Nilay: That one is black and white, and I just want to be clear: I’ve talked to a lot of social network CEOs, and they would have no hesitation telling me that that was against their moderation rules.
Chris: Yeah. We’re not going to get into specific “would you or won’t you” content moderation questions.
Nilay: Why?
Chris: I don’t think it’s a useful way to talk about this stuff.
Nilay: But it’s the thing that you have to do. I mean, you have to make these decisions, don’t you?
Chris: The way that we think about this is, yes, there is going to be a terms of service. We have content policies that are deliberately tuned to allow lots of things that we disagree with, that we strongly disagree with. We think we have a strong commitment to freedom of speech, freedom of the press. We think these are essential ingredients in a free society. We think that it would be a failure for us to build a new kind of network that can’t support those ideals. And we want to design the network in a way where people are in control of their experience, where they’re able to do that stuff. We’re at the very early innings of that. We don’t have all the answers for how those things will work. We are making a new thing. And literally, we launched this thing one day ago. We’re going to have to figure a lot of this stuff out. I don’t think…
Nilay: You have to figure out, “Should we allow overt racism on Substack Notes?” You have to figure that out.
Chris: No, I’m not going to engage in speculation or specific “would you allow this or that” content.
Nilay: You know this is a very bad response to this question, right? You’re aware that you’ve blundered into this. You should just say no. And I’m wondering what’s keeping you from just saying no.
Chris: I have a blanket [policy that] I don’t think it’s useful to get into “would you allow this or that thing on Substack.”
Nilay: If I read you your own terms of service, will you agree that this prohibition is in that terms of service?
Chris: I don’t think that’s a useful exercise.
Nilay: Okay. I’m granting you the out that when you’re the email service provider, you should have a looser moderation rule. There are a lot of my listeners and a lot of people out there who do not agree with me on that. I’ll give you the out that, as the email service provider, you can have looser moderation rules because that is sort of a market-driven thing, but when you make the consumer product, my belief is that you should have higher moderation rules. And so, I’m just wondering, applying the blanket, I understand why that was your answer in the past. It’s just there’s a piece here that I’m missing. Now that it’s the consumer product, do you not think that it should have a different set of moderation standards?
Chris: You are free to have that belief. And I do think it’s possible that there will be different moderation standards. I do think it’s an interesting thing. I think the place that we maybe differ is you’re coming at this from a point where you think that because something is bad… let’s grant that this thing is a terrible, bad thing…
Nilay: Yeah, I think you should grant that this idea is bad.
Chris: That therefore censorship of it is the most effective tool to prevent that. And I think we’ve run, in my estimation over the past five years, however long it’s been, a grand experiment in the idea that pervasive censorship successfully combats ideas that the owners of the platforms don’t like. And my read is that that hasn’t actually worked. That hasn’t been a success. It hasn’t caused those ideas not to exist. It hasn’t built trust. It hasn’t ended polarization. It hasn’t done any of those things. And I don’t think that taking the approach that the legacy platforms have taken and expecting it to have different outcomes is obviously the right answer the way that you seem to be presenting it to be. I don’t think that that’s a question of whether some particular objection or belief is right or wrong.
Nilay: I understand the philosophical argument. I want to be clear. I think government speech regulations are horrible, right? I think that’s bad. I don’t think there should be government censorship in this country, but I think companies should state their values and go out into the marketplace and live up to their values. I think the platform companies, for better or worse, have missed it on their values a lot for a variety of reasons. When I ask you this question, [I’m asking], “Do you make software to spread abhorrent views, that allows abhorrent views to spread?” That’s just a statement of values. That’s why you have terms of service. I know that there’s stuff that you won’t allow Substack to be used for because I can read it in your terms of service. Here, I’m asking you something that I know is against your terms of service, and your position is that you refuse to say it’s against your terms of service. That feels like not a big philosophical conversation about freedom of speech, which I will have at the drop of a hat, as listeners to this show know. Actually, you’re saying, “You know what? I don’t want to state my values.” And I’m just wondering why that is.
Chris: I think the conversation about freedom of speech is the essential conversation to have. I don’t think this “let me play a gotcha and ask this or that”–
Nilay: Substack is not the government. Substack is a company that competes in the marketplace.
Chris: Substack is not the government, but we still believe that it’s essential to promote freedom of the press and freedom of speech. We don’t think that that is a thing that’s limited to…
Nilay: So if Substack Notes becomes overrun by racism and transphobia, that’s fine with you?
Chris: We’re going to have to work very hard to make Substack Notes be a great place to have the readers and the writers be in charge, where you can have the kinds of conversations that you find valuable. That’s the exciting challenge that we have ahead of us.
I get the academic aspect of where Chris is coming from. He’s correct that content moderation hasn’t made crazy ideas go away. These are the reasons I coined the Streisand Effect years ago, to point out the futility of just trying to stifle speech. And these are the reasons I talk about “protocols, not platforms” as a way to explore enabling more speech without centralized systems that suppress speech.
But Substack is a centralized system. And a centralized system that doesn’t do trust & safety… is the Nazi bar. And if you have some other system that you think allows for “moderation but not censorship” then be fucking explicit about what it is. There are all sorts of interventions short of removing content that have been shown to work well (though, with other social media, they still get accused of “censorship” for literally expressing more speech). But the details matter. A lot.
I get that he thinks his focus is on providing tools, but even so two things stand out: (1) he’s wrong about how all this works and (2) even if he believes that Substack doesn’t need to moderate, he has to own that in the interview rather than claiming that Nilay is playing gotcha with him.
If you’re not going to moderate, and you don’t care that the biggest draws on your platform are pure nonsense peddlers preying on the most gullible people to get their subscriptions, fucking own it, Chris.
Say it. Say that you’re the Nazi bar and you’re proud of it.
Say “we believe that writers on our platform can publish anything they want, no matter how ridiculous, or hateful, or wrong.” Don’t hide from the question. You claim you’re enabling free speech, so own it. Don’t hide behind some lofty goals about “freedom of the press” when you’re really enabling “freedom of the grifters.”
You have every right to allow that on your platform. But the whole point of everyone eventually coming to terms with the content moderation learning curve, and the fact that private businesses are private and not the government, is that what you allow on your platform is what sticks to you. It’s your reputation at play.
And your reputation when you refuse to moderate is not “the grand enabler of free speech.” Because it’s the internet itself that is the grand enabler of free speech. When you’re a private centralized company and you don’t deal with hateful content on your site, you’re the Nazi bar.
Most companies that want to get large enough recognize that playing to the grifters and the nonsense peddlers works for a limited amount of time, before you get the Nazi bar reputation, and your growth is limited. And, in the US, you’re legally allowed to become the Nazi bar, but you should at least embrace that, and not pretend you have some grand principled strategy.
This is what Nilay was getting at. When you’re not the government, you can set whatever rules you want, and the rules you set are the rules that will define what you are as a service. Chris Best wants to pretend that Substack isn’t the Nazi bar, while he’s eagerly making it clear that it is.
It’s stupidly short-sighted, and no, it won’t support free speech. Because people who don’t want to hang out at the Nazi bar will just go elsewhere.