Readers of this site will know by now that Nintendo polices its intellectual property in an extremely draconian fashion. Even so, there are differences in how the company goes about it. In many cases, Nintendo goes after people or groups in a fashion that stretches, if not breaks, any legitimate intellectual property concerns. Other times, Nintendo’s actions are well within its rights, but those actions often appear to do far more harm to the company than whatever harm the IP concern is doing to it. This is probably one of those latter stories.
There’s a new Zelda game coming out in a few weeks on the Switch: The Legend of Zelda: Tears of the Kingdom. As with any rabid fanbase, fans of the series have been gobbling up literally any information they can find about the unreleased game. It was therefore unsurprising that there was a ton of interest in a leaked art book that would accompany its release. It is also not a shock that Nintendo DMCA’d the leaks and discussion of the leaks that occurred on Discord, even though that almost certainly brought even more attention to the leaks in a classic Streisand Effect.
The posts include images from the 204-page artbook that will come with the collector’s edition of the game. They quickly spread to other Discord servers, various subreddits, and beyond. While a ton of original art for the game was in the leak, it didn’t end up revealing much about the mysteries surrounding Tears of the Kingdom that players have spent months speculating about. There was no real developer commentary in the leak, and barely any spoilers outside of some minor enemy reveals.
But now Nintendo is also seeking a subpoena to unmask the leaker, ostensibly to “protect its rights,” which will almost certainly involve going after the leaker with every legal tactic the company can muster. This despite all of the context above about what was and was not included in the leak.
Now, I can certainly understand why Nintendo is upset about the leak. It has a book to sell, and scans from that book showing up on the internet are irritating. I would argue that those scans in no way replace a 204-page physical artbook, and frankly might serve to actually generate more interest in the book and drive sales, but I can understand why the company might not see it that way.
If so, seeking to bury the links and content via the DMCA is the proper move, even if I think that only serves to generate more interest in the leaks themselves. The only real point of unmasking the leaker is to go after that individual. While Nintendo may still be within its rights to do so, that certainly feels like overkill, to say the least.
Referencing the notices sent to Discord in respect of the “copyright-protected and unreleased special edition art book for The Legend of Zelda: Tears of the Kingdom” the company highlights a Discord channel and a specific user.
“[Nintendo of America] is requesting the attached proposed subpoena that would order Discord Inc. …to disclose the identity, including the name(s), address(es), telephone number(s), and e-mail address(es) of the user Julien#2743, who is responsible for posting infringing content that appeared at the following Discord channel Zelda: Tears of the Kingdom […]”
As we’ve said in the past, unmasking anonymous speakers on the internet ought to come with a very high bar over which the requester should need to jump. Do some scans from an artbook temporarily appearing on the internet really warrant this unmasking? Is there real, demonstrable harm here? Especially when this appears to be something of a fishing expedition?
Information available on other platforms, Reddit in particular, suggests that the person Nintendo is hoping to identify is the operator of the Discord channel and, at least potentially, the person who leaked the original content.
A two-month-old comment on the origin of the leak suggests the source was “a long time friend.” A comment in response questioned why someone would get a friend “fired for internet brownie points?”
There are an awful lot of qualifiers in there. And if this is just Nintendo fishing for a leaker it has no other evidence against, then the court should decline the request for the subpoena.
You know the drill by now. In October of 2020, the NY Post ran a story about the contents of a laptop hard drive that Hunter Biden apparently left at a computer repair store. There were questions about the provenance of that hard drive, and, given the history of foreign election interference, as well as some questions about the story itself, Twitter made the (ultimately unwise and mistaken) decision to block links to that story, and (in some cases) to suspend accounts that were sharing it. A day later, the company admitted this was a mistake and changed its policy.
As we’ve explained at great length, the conspiracy stories that came out of this one incident are ridiculous and out of touch with reality. The company made one dumb move, which (despite what you might have heard) was not pushed on them by the government or the Biden campaign (which was not the government). They corrected it relatively quickly. This is the nature of content moderation. Mistakes will be made.
Yet, the conspiracy theories continue to spread, and even Elon Musk (the now owner of Twitter) has bought into many of them, even suggesting that this was part of the reason he chose to purchase Twitter; right after announcing the purchase, he declared that it was “obviously incredibly inappropriate” for Twitter to have done that to “a major news organization.”
Leaving aside that Musk’s own Twitter also blocked the NY Post incorrectly just recently, it appears that it is also somewhat aggressively blocking links to certain other news stories as well.
You’ve likely heard about recent leaks of Pentagon documents that were first leaked via a Discord server. On Wednesday, the Washington Post’s Shane Harris and Samuel Oakford broke quite a story about where the documents came from, discussing the small, private Discord group, and the guy who operated it, who apparently went to great lengths to leak these classified documents.
The young member read OG’s message closely, and the hundreds more that he said followed on a regular basis for months. They were, he recalled, what appeared to be near-verbatim transcripts of classified intelligence documents that OG indicated he had brought home from his job on a “military base,” which the member declined to identify. OG claimed he spent at least some of his day inside a secure facility that prohibited cellphones and other electronic devices, which could be used to document the secret information housed on government computer networks or spooling out from printers. He annotated some of the hand-typed documents, the member said, translating arcane intel-speak for the uninitiated, such as explaining that “NOFORN” meant the information in the document was so sensitive it must not be shared with foreign nationals.
OG told the group he toiled for hours writing up the classified documents to share with his companions in the Discord server he controlled. The gathering spot had been a pandemic refuge, particularly for teen gamers locked in their houses and cut off from their real-world friends. The members swapped memes, offensive jokes and idle chitchat. They watched movies together, joked around and prayed. But OG also lectured them about world affairs and secretive government operations. He wanted to “keep us in the loop,” the member said, and seemed to think that his insider knowledge would offer the others protection from the troubled world around them.
This is pretty good reporting, and on Thursday, the FBI arrested someone they allege was the leaker described in the article.
Glenn Greenwald, who appears to have an incredibly warped view of what journalism is, freaked out that the Washington Post would report on what it had turned up about the leaker, claiming it was doing “the job of the US Security State by hunting down its leakers.” But, uh, that makes zero sense. It’s one thing for a journalist to protect whistleblowers/leakers who come to those journalists to share documents. It’s another thing altogether to say journalists should not try to report the story of who was sharing classified documents in a gamer Discord server for clout, not as a whistleblower or anything like that.
But, of course, Elon agreed with Glenn, because that’s what he does these days.
Reporting on someone leaking information is kind of a thing that reporters do. Glenn wrote an entire book about Ed Snowden, after all. Yes, it’s different in that Snowden went to Glenn with his docs, but it’s still a reporter’s job to report on stuff like this.
Anyhow, all that is lead-up to the fact that Twitter now appears to be permanently suspending at least some accounts that have shared the Washington Post story.
Professor Kathy Gill explained that she attempted to share that story on Twitter with a screenshot of the headline with an annotation noting that it was a teenager who told the story to the WaPo reporters. It didn’t work. She received a message saying “Tweet not sent” instead.
When it didn’t work a second time, she “appealed” to Twitter, noting that it was just a link to the story and a screenshot of the headline:
And, in response: her account was suspended permanently.
Very free speechy from the free speech king.
It’s unclear exactly why Kathy’s account was suspended. It’s difficult to see what rules were broken here, and when the dude in charge insists that blocking major media organizations is “obviously incredibly inappropriate” you kinda have to wonder.
I mean, the reality is that content moderation at scale is impossible to do well, and mistakes are made. This seems likely to be a mistake. But since the supporters of Elon seem to think that you can judge the entire management based on just one such mistake, even to the point of launching congressional inquiries… it seems worth noting this particular bit of content moderation.
So, we’re just handing out top secret security clearance to everyone, I guess. It was clear from the documents posted to Discord (before they spread everywhere) that the person behind them would soon be located.
The folded security briefings were obviously smuggled out of secure rooms in someone’s pocket and then photographed carelessly, in one case on top of a hunting magazine. I mean, that narrows it down to people who still buy stuff printed on physical media, a number that shrinks exponentially by the day.
On top of that, the entry point for the leaked info — much of it related to the current invasion of Ukraine by Russia — was Discord, which no one has ever considered the equivalent of Signal or any other secure channel for the dissemination of sensitive material.
The DOJ and Pentagon obliquely admitted that, despite some obvious clues, the hunt for the leak’s source might take some time. By its own estimation, the Defense Department said “thousands” of government employees might have access to these briefings and other national security documents. But for it to end up here (if, in fact, the government has actually gotten its man) is both surprising and a bit depressing.
Jack Teixeira, a 21-year-old member of the Massachusetts Air National Guard, was arrested by federal authorities Thursday in connection to the investigation of classified documents that were leaked on the internet.
FBI agents took Teixeira into custody earlier Thursday afternoon “without incident,” Attorney General Merrick Garland announced in brief remarks at the Department of Justice, which has been conducting a criminal investigation into the matter.
We’re apparently letting an army of weekend contributors — a division of the military best known for sandbag deployment and shooting college students — access sensitive information pertaining to a war taking place halfway around the world that they’re in no danger of being deployed to.
Perhaps this is the unintended consequence of de-siloing of intel after investigations showed the government’s ability to keep secrets from itself contributed to its inability to prevent the 9/11 attacks. Or perhaps this is the government taking a lackadaisical approach to operational security, assuming it can absorb any exposure and/or adequately punish anyone taking advantage of the government’s willingness to grant security clearance to nearly anyone remotely involved in national security.
These are still criminal allegations. But whoever was behind the leaks wasn’t doing this to serve the public good, at least not if other members of the Discord server these documents first appeared in are to be believed. Teixeira apparently dumped classified docs there because it was easy to do and he hoped these multiple federal law violations would secure him the friendship of other server members.
The young member was impressed by OG’s seemingly prophetic ability to forecast major events before they became headline news, things “only someone with this kind of high clearance” would know. He was by his own account enthralled with OG, who he said was in his early to mid-20s.
“He’s fit. He’s strong. He’s armed. He’s trained. Just about everything you can expect out of some sort of crazy movie,” the member said.
In a video seen by The Post, the man who the member said is OG stands at a shooting range, wearing safety glasses and ear coverings and holding a large rifle. He yells a series of racial and antisemitic slurs into the camera, then fires several rounds at a target.
While “OG” periodically made claims he wanted other server members to “see” how the US government “really works,” he also espoused conspiracy theories and often expressed his anger that members weren’t showing enough interest in his posts. One member of this server (Thug Shaker Central, itself a bit of a racial slur) decided to post these to another Discord server. It spread from there, finally surfacing on social media sites where anyone could view them, rather than just server members.
That an air guardsman would have this access is a bit of a shock, as is the lack of internal controls at whatever base employed him. More shocking is the fact that the government didn’t discover the leak until after thousands of people had seen the documents, which spread from Discord to Telegram to Twitter. The DOJ will definitely try to make Teixeira’s head roll, but the Pentagon has to be doing some headhunting of its own.
Whatever happens, this isn’t someone leaking documents as a service to the public. From all appearances, these leaks were motivated by a desire to win respect from online peers in a closed group. Not that it matters. An espionage prosecution doesn’t allow defendants to present public service arguments in their defense. And this case, unlike most we have covered here, doesn’t seem to have that crucial element that might justify the exposure of extremely sensitive information — especially information related to an invasion that has the possibility to result in nuclear weapon deployment and/or a Third World War. This wasn’t a selfless act. This was self-promotion.
AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.
When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage stayed in the tabloids, which are known to be sensationalized, that would be fine.
In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.
In order to understand the rise of AI Doomerism, here are some influential figures responsible for mainstreaming doomsday scenarios. This is not the full list of AI doomers, just the ones who recently shaped the AI panic cycle (so I’m focusing on them).
AI Panic Marketing: Exhibit A: Sam Altman.
Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizos, it was “lights out for all of us.”
In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”
Given that he shared this story in 2016, it shouldn’t come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
(Wouldn’t it be easier to just cut back on the drinking and substance abuse?).
Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”
It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that framing OpenAI’s products as “the most important – and scary – project in human history” is part of its marketing strategy. “The paranoia is the marketing.”
“AI doomsaying is absolutely everywhere right now,” described Brian Merchant in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”
During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch”:
This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”:
“What could be more appealing to future AI employees and investors than a machine that can become superintelligence?”
AI Panic as a Business. Exhibit A & B: Tristan Harris & Eliezer Yudkowsky.
Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. A prime example is the Effective Altruism institutes that claim to be the superior few who can save us from a hypothetical AGI apocalypse.
In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”
In the “Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Visel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.
Similarly, there’s no need for evidence now that AI is worse than nuclear power; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering).
To further escalate the AI panic, Tristan Harris published an OpEd in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”
Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them.
“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.”
This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology).
Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk. There were a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”
Please keep in mind that (1) a $10 million donation from Elon Musk launched the Future of Life Institute in 2015, and of its total budget of 4 million euros for 2021, the Musk Foundation contributed 3.5 million euros (the biggest donor by far); (2) Musk once said that “With artificial intelligence, we are summoning the demon”; and (3) due to this, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots.
“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO, SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”
Spencer Ante (Meta Foresight) responded: “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”
Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” too far. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” With that explicit advocacy of violent solutions to AI, we have officially reached the height of hysteria.
“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”
“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.”
The problem is that “irrational fears” sell. They are beneficial to the ones who spread them.
How to Spot an AI Doomer?
On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”
One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”
Considering all of the above, I decided to define “AI doomer” and provide some criteria:
Doomers tend to live in a tradeoff-free fantasy land.
Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.
Answering Gary Marcus’ question, I do not think he qualifies as a doomer. You need to meet all criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven.
Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.”
I get it. I totally get it. Every tech dude comes along and has this thought: “hey, we’ll be the free speech social media site. We won’t do any moderation beyond what’s required.” Even Twitter initially thought this. But then everyone discovers reality. Some discover it faster than others, but everyone discovers it. First, you realize that there’s spam. Or illegal content such as child sexual abuse material. And if that doesn’t do it for you, the copyright police will.
But, then you realize that beyond spam and content that breaks the rules, you end up with malicious users who cause trouble. And trouble drives away users, advertisers, or both. And if you don’t deal with the malicious users, the malicious users define you. It’s the “oh shit, this is a Nazi bar now” problem.
And, look, sure, in the US, you can run the Nazi bar, thanks to the 1st Amendment. But running a Nazi bar is not winning any free speech awards. It’s not standing up for free speech. It’s building your own brand as the Nazi bar and abdicating your own free speech rights of association to kick Nazis out of your private property, and to craft a different kind of community. Let the Nazis build their own bar, or everyone will just assume you’re a Nazi too.
It was understandable a decade ago, before the idea of “trust & safety” was a thing, that not everyone would understand all this. But it is unacceptable for the CEO of a social media site today to not realize this.
Enter Substack CEO Chris Best.
Substack has faced a few controversies regarding the content moderation (or lack thereof) for its main service, which allows writers to create blogs with subscription services built in. I had been a fan of the service since it launched (and had actually spoken with one of the founders pre-launch to discuss the company’s plans, and even whether or not we could do something with them as Techdirt), as I think it’s been incredibly powerful as a tool for independent media. But, the exec team there often seems to have taken a “head in sand” approach to understanding any of this.
That became ridiculously clear on Thursday when Chris Best went on Nilay Patel’s Decoder podcast at the Verge to talk about Substack’s new Notes product, which everyone is (fairly or not) comparing to Twitter. Best had to know that content moderation questions were coming, but seemed not just unprepared for them, but completely out of his depth.
This clip is just damning. Chris trying to stare down Nilay simply doesn’t work.
Our host Nilay asked Substack CEO Chris Best the tough questions about whether racist speech should be allowed in their new consumer product, Substack Notes. #techtok #technews #substack #ceo
The larger discussion is worth listening to, or reading below. As Nilay notes in his commentary on the transcript, he feels that there should be much less moderation the closer you get to being an infrastructure provider (this is something I not only agree with, but have spent a lot of time discussing). Substack has long argued that its more hands-off approach in providing its platform to writers is because it’s more like infrastructure.
But the Notes feature takes the company closer to consumer facing social media, and so Nilay had some good questions about that, which Chris just refused to engage with. Here’s the full context that provides more than just the video above. The bold text is Nilay and the non-bold is Chris:
Notes is the most consumer-y feature. You’re saying it’s inheriting a bunch of expectations from the consumer social platforms, whether or not you really want it to, right? It’s inheriting the expectations of Twitter, even from Twitter itself. It’s inheriting the expectations that you should be able to flirt with people and not have to subscribe to their email lists.
In that spectrum of content moderation, it’s the tip of the spear. The expectations are that you will moderate that thing just like any big social platform will moderate. Up until now, you’ve had the out of being able to say, “Look, we are an enterprise software provider. If people don’t want to pay for this newsletter that’s full of anti-vax information, fine. If people don’t want to pay or subscribe to this newsletter where somebody has harsh views on trans people, fine.” That’s the choice. The market will do it. And because you’re the enterprise software provider, you’ve had some cover. When you run a social network that inherits all the expectations of a social network and people start posting that stuff and the feed is algorithmic and that’s what gets engagement, that’s a real problem for you. Have you thought about how you’re going to moderate Notes?
We think about this stuff a lot, you might be surprised to learn.
I know you do, but this is a very different product.
Here’s how I think about this: Substack is neither an enterprise software provider nor a social network in the mold that we’re used to experiencing them. Our self-conception, the thing that we are attempting to build, and I think if you look at the constituent pieces, in fact, the emerging reality is that we are a new thing called the subscription network, where people are subscribing directly to others, where the order in the system is sort of emergent from the empowered — not just the readers but also the writers: the people who are able to set the rules for their communities, for their piece of Substack. And we believe that we can make something different and better than what came before with social networking.
The way that I think about this is, if we draw a distinction between moderation and censorship, where moderation is, “Hey, I want to be a part of a community, of a place where there’s a vibe or there’s a set of rules or there’s a set of norms or there’s an expectation of what I’m going to see or not see that is good for me, and the thing that I’m coming to is going to try to enforce that set of rules,” versus censorship, where you come and say, “Although you may want to be a part of this thing and this other person may want to be a part of it, too, and you may want to talk to each other and send emails, a third party’s going to step in and say, ‘You shall not do that. We shall prevent that.’”
And I think, with the legacy social networks, the business model has pulled those feeds ever closer. There hasn’t been a great idea for how we do moderation without censorship, and I think, in a subscription network, that becomes possible.
Wow. I mean, I just want to be clear, if somebody shows up on Substack and says “all brown people are animals and they shouldn’t be allowed in America,” you’re going to censor that. That’s just flatly against your terms of service.
So, we do have a terms of service that have narrowly prescribed things that are not allowed.
That one I’m pretty sure is just flatly against your terms of service. You would not allow that one. That’s why I picked it.
So there are extreme cases, and I’m not going to get into the–
Wait. Hold on. In America in 2023, that is not so extreme, right? “We should not allow as many brown people in the country.” Not so extreme. Do you allow that on Substack? Would you allow that on Substack Notes?
I think the way that we think about this is we want to put the writers and the readers in charge–
No, I really want you to answer that question. Is that allowed on Substack Notes? “We should not allow brown people in the country.”
I’m not going to get into gotcha content moderation.
This is not a gotcha… I’m a brown person. Do you think people on Substack should say I should get kicked out of the country?
I’m not going to engage in content moderation, “Would you or won’t you this or that?”
That one is black and white, and I just want to be clear: I’ve talked to a lot of social network CEOs, and they would have no hesitation telling me that that was against their moderation rules.
Yeah. We’re not going to get into specific “would you or won’t you” content moderation questions.
Why?
I don’t think it’s a useful way to talk about this stuff.
But it’s the thing that you have to do. I mean, you have to make these decisions, don’t you?
The way that we think about this is, yes, there is going to be a terms of service. We have content policies that are deliberately tuned to allow lots of things that we disagree with, that we strongly disagree with. We think we have a strong commitment to freedom of speech, freedom of the press. We think these are essential ingredients in a free society. We think that it would be a failure for us to build a new kind of network that can’t support those ideals. And we want to design the network in a way where people are in control of their experience, where they’re able to do that stuff. We’re at the very early innings of that. We don’t have all the answers for how those things will work. We are making a new thing. And literally, we launched this thing one day ago. We’re going to have to figure a lot of this stuff out. I don’t think…
You have to figure out, “Should we allow overt racism on Substack Notes?” You have to figure that out.
No, I’m not going to engage in speculation or specific “would you allow this or that” content.
You know this is a very bad response to this question, right? You’re aware that you’ve blundered into this. You should just say no. And I’m wondering what’s keeping you from just saying no.
I have a blanket [policy that] I don’t think it’s useful to get into “would you allow this or that thing on Substack.”
If I read you your own terms of service, will you agree that this prohibition is in that terms of service?
I don’t think that’s a useful exercise.
Okay. I’m granting you the out that when you’re the email service provider, you should have a looser moderation rule. There are a lot of my listeners and a lot of people out there who do not agree with me on that. I’ll give you the out that, as the email service provider, you can have looser moderation rules because that is sort of a market-driven thing, but when you make the consumer product, my belief is that you should have higher moderation rules. And so, I’m just wondering, applying the blanket, I understand why that was your answer in the past. It’s just there’s a piece here that I’m missing. Now that it’s the consumer product, do you not think that it should have a different set of moderation standards?
You are free to have that belief. And I do think it’s possible that there will be different moderation standards. I do think it’s an interesting thing. I think the place that we maybe differ is you’re coming at this from a point where you think that because something is bad… let’s grant that this thing is a terrible, bad thing…
Yeah, I think you should grant that this idea is bad.
That therefore censorship of it is the most effective tool to prevent that. And I think we’ve run, in my estimation over the past five years, however long it’s been, a grand experiment in the idea that pervasive censorship successfully combats ideas that the owners of the platforms don’t like. And my read is that that hasn’t actually worked. That hasn’t been a success. It hasn’t caused those ideas not to exist. It hasn’t built trust. It hasn’t ended polarization. It hasn’t done any of those things. And I don’t think that taking the approach that the legacy platforms have taken and expecting it to have different outcomes is obviously the right answer the way that you seem to be presenting it to be. I don’t think that that’s a question of whether some particular objection or belief is right or wrong.
I understand the philosophical argument. I want to be clear. I think government speech regulations are horrible, right? I think that’s bad. I don’t think there should be government censorship in this country, but I think companies should state their values and go out into the marketplace and live up to their values. I think the platform companies, for better or worse, have missed it on their values a lot for a variety of reasons. When I ask you this question, [I’m asking], “Do you make software to spread abhorrent views, that allows abhorrent views to spread?” That’s just a statement of values. That’s why you have terms of service. I know that there’s stuff that you won’t allow Substack to be used for because I can read it in your terms of service. Here, I’m asking you something that I know is against your terms of service, and your position is that you refuse to say it’s against your terms of service. That feels like not a big philosophical conversation about freedom of speech, which I will have at the drop of a hat, as listeners to this show know. Actually, you’re saying, “You know what? I don’t want to state my values.” And I’m just wondering why that is.
I think the conversation about freedom of speech is the essential conversation to have. I don’t think this “let me play a gotcha and ask this or that”–
Substack is not the government. Substack is a company that competes in the marketplace.
Substack is not the government, but we still believe that it’s essential to promote freedom of the press and freedom of speech. We don’t think that that is a thing that’s limited to…
So if Substack Notes becomes overrun by racism and transphobia, that’s fine with you?
We’re going to have to work very hard to make Substack Notes be a great place to have the readers and the writers be in charge, where you can have the kinds of conversations that you find valuable. That’s the exciting challenge that we have ahead of us.
I get the academic aspect of where Chris is coming from. He’s correct that content moderation hasn’t made crazy ideas go away. These are the reasons I coined the term “the Streisand Effect” years ago: to point out the futility of simply trying to stifle speech. And these are the reasons I talk about “protocols, not platforms” as a way to explore enabling more speech without centralized systems that suppress it.
But Substack is a centralized system. And a centralized system that doesn’t do trust & safety… is the Nazi bar. And if you have some other system that you think allows for “moderation but not censorship” then be fucking explicit about what it is. There are all sorts of interventions short of removing content that have been shown to work well (though, with other social media, they still get accused of “censorship” for literally expressing more speech). But the details matter. A lot.
I get that he thinks his focus is on providing tools, but even so, two things stand out: (1) he’s wrong about how all this works, and (2) even if he believes that Substack doesn’t need to moderate, he has to own that in the interview rather than claiming that Nilay is playing gotcha with him.
If you’re not going to moderate, and you don’t care that the biggest draws on your platform are pure nonsense peddlers preying on the most gullible people to get their subscriptions, fucking own it, Chris.
Say it. Say that you’re the Nazi bar and you’re proud of it.
Say “we believe that writers on our platform can publish anything they want, no matter how ridiculous, or hateful, or wrong.” Don’t hide from the question. You claim you’re enabling free speech, so own it. Don’t hide behind some lofty goals about “freedom of the press” when you’re really enabling “freedom of the grifters.”
You have every right to allow that on your platform. But the whole point of everyone eventually coming to terms with the content moderation learning curve, and the fact that private businesses are private and not the government, is that what you allow on your platform is what sticks to you. It’s your reputation at play.
And your reputation when you refuse to moderate is not “the grand enabler of free speech.” Because it’s the internet itself that is the grand enabler of free speech. When you’re a private centralized company and you don’t deal with hateful content on your site, you’re the Nazi bar.
Most companies that want to get big recognize that playing to the grifters and the nonsense peddlers only works for a limited time before you get the Nazi bar reputation and your growth stalls. And, yes, in the US you’re legally allowed to become the Nazi bar, but you should at least embrace that, and not pretend you have some grand principled strategy.
This is what Nilay was getting at. When you’re not the government, you can set whatever rules you want, and the rules you set are the rules that will define what you are as a service. Chris Best wants to pretend that Substack isn’t the Nazi bar, while he’s eagerly making it clear that it is.
It’s stupidly short-sighted, and no, it won’t support free speech. Because people who don’t want to hang out at the Nazi bar will just go elsewhere.
What could possibly go wrong? Earlier this week we wrote about an Arkansas bill, SB396, which was modeled after Utah’s recent unconstitutional social media bill and tries to ban kids from social media. Except, as we noted, it appeared to explicitly exempt pretty much all of social media, except for maybe Facebook and Twitter. The wording is unclear, yet oddly specific. For example, it says it exempts:
Social media company that allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative
Which, yes, seems strange. However, Governor Sarah Huckabee Sanders, who also recently signed into law a bill eliminating age verification requirements for workers in meat processing plants, is suddenly concerned about the welfare of children, and so happily signed this bill, claiming it will protect them.
Yes, now only meat packing plants can get away with exploiting kids for profit in Arkansas.
And, of course, all of the evidence suggests that this law will actually put many kids in much greater danger, as well as harm everyone’s privacy. Age gating sites will require intrusive age verification procedures, which create all sorts of security nightmares. Requiring parental permission will put kids who are estranged from their parents, or who engage in activities or beliefs at odds with their parents, at risk.
But, still, the bill is so poorly worded that even its sponsor, Senator Tyler Dees, is confused. Again, the bill appears to explicitly exempt social media that involves “short video clips of dancing” — language that would likely exempt TikTok, Snapchat, Instagram, and YouTube in some form. Yet the sponsor claims it’s targeting those very sites.
“The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”
Except it sure looks like you exempted most of those.
In just the last five years, the “right to repair” movement has shifted from nerdy niche to the mainstream, thanks in part to significant support from the Biden FTC. We’ve seen numerous state bills make significant inroads toward passing laws that chip away at repair monopolies, even though industry lobbying has, at times, neutered the proposals into uselessness (see: Kathy Hochul in New York State).
Getting any federal legislation passed has, as usual, been an uphill climb courtesy of a corrupt and dysfunctional Congress. To that end, a bipartisan coalition of 28 state attorneys general has fired off a letter to the chairs and ranking members of the House Energy and Commerce Committee and the Senate Commerce, Science and Transportation Committee urging action on stalled right to repair bills:
“The Right-to-Repair is a bipartisan issue that impacts every consumer, household, and farm in a time of increasing inflation. It is about ensuring that consumers have choices as to who, where, when and at what cost their vehicles can be repaired. It is about ensuring that farmers can repair their tractors for a reasonable price and quickly enough to harvest their crops.”
The letter cites three bills that have stalled that the AGs would like to see pushed forward in a bid to open the door to greater self-repair and independent repair of everything from consumer electronics to agricultural and medical equipment:
The Fair Repair Act, which would require manufacturers to make certain tools and documents available to independent repair providers and owners.
The SMART Act, which would allow repair shops to use alternative or off-brand parts to repair vehicles.
The REPAIR Act, which would prevent manufacturers from mandating specific brands of parts and equipment be used on a vehicle, while also requiring they provide a standardized platform for owners and repair shops to access data and diagnostics.
Lobbyists have increasingly fired up their attacks on such proposals, usually by falsely claiming that cracking down on these companies’ attempted repair monopolies would create all manner of privacy and security risks for U.S. consumers. Automakers in Massachusetts went so far recently as to lie and claim the state’s right to repair efforts would embolden sexual predators.
As usual, a coalition of companies keen on monopolizing repair (John Deere, Apple, Verizon, Microsoft, U.S. automakers, U.S. medical equipment makers) have worked tirelessly to ensure any federal legislative solution remains sidelined, despite widespread, bipartisan support for such measures (polls consistently show public support for reform ranging anywhere from 75 to 95 percent).