One of the things we’ve tried to get across over the years (perhaps unsuccessfully) is that laws to get rid of hate speech are not only almost always abused, they’re also counterproductive in the actual fight against hate. Those who support such laws seem to think that, without them, there is nothing at all that can be done about “hate speech.” But that’s false. There are all sorts of ways to actually combat hate speech, and part of that is making it socially and economically unacceptable.
For years, people have kept insisting that social media companies have “no incentive” to keep hate speech off of their platforms, and for years, we’ve explained why that’s wrong. If your platform is overrun with hate speech it’s bad for the platform. Users start to go elsewhere. And if your business model is advertising, so do the advertisers.
And now we have some empirical evidence to show this. CCIA has released a report on the impact of harmful content on brands and advertising, based on surveys of users presented with hypothetical social media scenarios in which hate speech is and is not moderated. Turns out, as we said, if you allow hate speech on your website it drives users and advertisers away (someone should tell Elon). It also makes users think poorly of the advertisers who remain.
In a hypothetical scenario where hate speech was not moderated on social media services, research also found negative implications for brands that advertise on the services when hate speech was viewed. Proximity to content that included hate speech resulted in some respondents reporting that the content made them like the advertiser less. It also resulted in a slight decrease in favorable opinions of the advertiser brand, as well as a larger change in net favorability, with some of the movement shifting from favorable opinions to neutral (i.e., neither favorable nor unfavorable) opinions. Respondents who viewed content with hate speech also reported a lower likelihood of purchasing the advertised brand that directly preceded the content, compared to those respondents who viewed social media content with a positive or neutral tone right after the ad.
The results suggest that consumer sentiment toward a social media service would decline if it did not remove user-generated hate speech, and that consumer sentiment would also decline for brands that advertise on the same platform adjacent to said content. These findings indicate that social media services have a rational incentive to moderate harmful content such as hate speech and are consistent with digital services’ assertions that not all engagement adds value and that, in fact, some engagement is of negative value.
While this particular paper actually seems aimed at responding to laws from the other side of the aisle, such as the contested Texas and Florida laws that would create “must carry” requirements for certain forms of speech, I think the argument applies equally well to states like New York and California that are trying to pressure companies with legal mandates to remove such content.
However, a number of “must-carry” bills have been proposed in various jurisdictions that, if enacted, could limit social media services’ ability to remove or deprioritize harmful user-generated content. Two such bills recently became law in Texas and Florida, but are not yet in effect, due to pending consideration by the U.S. Supreme Court. Until this paper, there has been little public-facing research exploring the implications of hypothetical legal requirements that would require social media services to display content that would otherwise violate their current hate speech policies.
The study here is basically highlighting that both types of laws are bad. For Texas and Florida, the laws are bad in that they would do real damage to the business models of these companies, because the market (remember when the GOP was supposed to be the party supporting the free market?) is telling websites that users and advertisers don’t want hate speech on their platforms.
As these surveys show, websites moderating hate speech are doing so for perfectly legitimate business reasons (to avoid having users and advertisers flee). It’s not because they’re “woke” or trying to silence anyone. They’re just trying to keep the people on their platform from killing each other.
And, the study is also suggesting that the laws in California and New York don’t help either, as the companies have financial incentives to avoid platforming hate speech as well. They don’t need a law to come in and tell them this. The market actually functions just fine as a motivator.
There is no space for nuanced discussion about reality anymore, as nonsense floods the zone. So, please try to follow along here, as it takes some nuance to get down to the details of this issue. It’s nonsense, piled on top of nonsense, piled on top of nonsense, which ends with Elon Musk suggesting he’s going to sue George Soros for… advocating for laws that Elon doesn’t like (for what it’s worth, I’m pretty sure the laws being talked about are problematic, but the details aren’t clear, and there’s no law against advocating for bad laws).
Let’s start here: we’re extremely skeptical of any sort of “hate speech law.” This is not because we like hate speech, far from it. But as we’ve reported again and again and again, in practice hate speech laws are frequently abused by the powerful to punish the powerless and marginalized. We’ve long argued that there are better ways to deal with hate speech than criminalizing it. Shun it, shame it, diminish its power, counter it, etc.
Of course, in the age of social media, some very, very silly people consider attempts to do the latter the equivalent of censorship. That is, when a private company chooses to de-prioritize hateful speech, they claim that this is the same thing as the government “censoring” it. But nothing is farther from the truth. The government cracking down on hate speech is a free speech issue. A private company refusing to host or promote hate speech is a way for them to use their own speech to condemn such speech. It is a quintessential “more speech” type of response.
One of the people who has a long history of misrepresenting private companies expressing their own free speech rights of association as the equivalent of government censorship is a nonsense peddler named Michael Shellenberger, one of the hand-picked nonsense peddlers that Elon gave some of his “Twitter Files” to, allowing them to completely misrepresent things. Shellenberger, who has a long career peddling complete and utter nonsense, took to the job perfectly, and so completely misunderstood things in the Twitter Files that he ridiculously claimed that the FBI was paying Twitter to censor accounts.
The truth was nothing of the sort, and anyone with even the most basic understanding of the law and basic facts could explain it to you (as far as I can tell, Shellenberger has yet to retract or correct his false statements on this). What he was actually reporting on were investigatory requests for data under a 2703(d) request (which require a court order or a warrant, depending on the type of data sought). These are requests for customer communications or records, not for taking down information. That law says that when the government makes a 2703(d) request, the government needs to reimburse the service provider “for such costs as are reasonably necessary and which have been directly incurred in search for, assembling, reproducing, or otherwise providing such information.”
Now there are lots of concerns about the 2703(d) program, and we (unlike the nonsense peddlers who are screaming today) have been calling out the problems with that program for at least a decade. But we’re focused on what the program actually is, not some made up idea that this is “the FBI paying Twitter to censor.”
Shellenberger has continued to peddle more nonsense about social media content moderation, a concept he does not seem to understand one bit, falsely accusing researchers who study information flows of being part of a “censorship industrial complex” and a bunch of other ridiculous stuff. But, of course, a bunch of very silly people eat this nonsense up and believe every word of it, because why not?
Not surprisingly, Shellenberger these days has a very popular Substack where his nonsense is often published. He probably makes more money each week from subscribers than Techdirt makes in a year, because we deal in facts, not nonsense, and facts don’t seem to pay as well.
Anyway, on his Substack, he had another reporter publish an article with the headline “Soros-Funded NGOs Demand Crackdown on Free Speech as Politicians Spread Hate Misinformation.” The article is behind a paywall, so I have no idea what it’s actually referring to. It is entirely possible that Open Society (which is funded by Soros) is advocating for hate speech laws, but the parts that are available to read are just a lot of fluff about whether or not hate is on the rise in Ireland, not the specific laws or what the various NGOs are advocating for.
So maybe Open Society NGOs are supporting hate speech laws. If true, that would be bad, as we’ve described above (and for years here on Techdirt) how such laws are prone to abuse and don’t do much to stop actual hate. But, of course, Soros is free to spend his money as he wishes, and the NGOs he funds are free to advocate for whatever laws they want. That’s part of their free speech.
Anyway, here’s where we finally get around to Elon Musk, who saw this story being promoted… and claimed he’s going to sue over it.
That’s Elon responding to a Shellenberger tweet. Shellenberger’s tweet says:
Politicians & George Soros-funded NGOs say “hate incidents” are rising, but they’re not. The data show the opposite: higher-than-ever and rising levels of tolerance of minorities. The reason they’re spreading hate misinformation is to justify a draconian crackdown on free speech.
So… first off, an increase in “levels of tolerance of minorities” (which is, by itself, an odd way to frame this) is not mutually exclusive with “rising hate incidents.” Both things could be true. I don’t know what points are being conveyed in the article itself (again, paywall), but the Irish police have published stats saying that “hate crimes” and “hate related incidents” went up from 2021 to 2022.
That’s not to say those stats are trustworthy. Also, hate speech and hate crime are not the same thing.
None of that means that Irish politicians aren’t overhyping the matter. They may well be. They may also be pushing for laws that intend to stifle free speech. I’m sure some are, because politicians all over the world seem to keep doing that. And it’s possible that Open Society funded NGOs are supporting some of those laws. And, as frustrating as that may be to us, it’s still very much allowed because of free speech.
Yet, then we have Elon jumping in to respond to Shellenberger’s already questionable claim by saying:
Exactly.
X will be filing legal action to stop this. Can’t wait for discovery to start!
There’s a lot to break down in this short tweet. What is he saying “exactly” about? And what kind of legal action is he filing?
But, first, let’s just make this point that I’ve made before, but is important to make again. It’s pretty common when lawsuits are threatened for some people to say something along the lines of “can’t wait for discovery,” which generally just shows that they have no idea how any of this works. Many people seem to think that “discovery” is some magical process by which all your dirty laundry gets aired publicly.
That is… rarely the case. First off, while discovery is a total pain for basically everyone involved, it is (generally) limited to matters directly related to the legal issues at hand. Parties may seek a lot more, and those on the other side may push back on those requests. But, more importantly, most of the time, what’s handed over in discovery never sees the light of day. Sometimes there are direct limits on what parties can share publicly, and often the only bits of discovery that become public are what’s revealed as the case moves towards trial (if it gets that far). People who are “eager” for discovery are… usually disappointed.
And, of course, in theory, any such “legal action” would take place in Ireland, which seems to have fairly similar discovery rules to the US, such that any discovery has to be “relevant and necessary” to the claims at hand.
Which brings us to the big question: who is he suing and for what? Many people (perhaps reasonably?) interpreted Musk’s statement to mean he was going to sue Soros. But, of course, he has no standing whatsoever for that, and the only thing he could possibly sue Soros over was for his advocacy (and funding), both of which would be protected speech. If the implication is that Elon is going to sue Soros for his free speech, that will (yet again!) raise questions about Elon’s actual commitment to “free speech.”
Perhaps a more charitable explanation here is that Elon actually means he’d be suing in Ireland (or, perhaps more likely, in the European Court of Justice?) to block any such law should it pass. But… that would require knowing the details of the law to understand what the issue even is. And, if that was the plan, then it’s difficult to see what sorts of “discovery” he’s expecting to get access to.
And, sure, if Ireland passes a really bad law, I do hope that exTwitter challenges it in court. But that’s got nothing to do with Soros, and I don’t see how discovery is going to be even remotely meaningful.
Of course, even if his plan really is to challenge the eventual Irish law (should it ever become law), it’s pretty clear from the replies to his tweet, that most of his gullible fans think he’s talking about suing Soros directly for his speech… and they’re ridiculously claiming that this shows how much Elon supports free speech. It’s possible that Elon recognizes that his confusingly worded tweet implies one thing when he really means another, though he hasn’t tried to correct the misperception at all. Or, of course, he really thinks that he’s going to sue Soros for exercising his own free speech, and his idiot fans are insisting that suing someone for their own speech is support of free speech.
Significant human rights issues included credible reports of: torture and other cruel, inhuman, and degrading treatment or punishment by government authorities; arbitrary arrest and detention; political prisoners or detainees; arbitrary or unlawful interference with privacy; serious restrictions on freedom of expression and media, including harassment and intimidation of journalists, unjustified arrests or prosecutions of journalists, censorship, and enforcement of and threat to enforce criminal libel laws; serious restrictions on internet freedom; substantial interference with the freedom of peaceful assembly and freedom of association, including overly restrictive laws on the organization, funding, or operation of nongovernmental organizations and civil society organizations; inability of citizens to elect their executive branch of government or upper house of parliament; lack of investigation of and accountability for gender-based violence, including but not limited to domestic or intimate partner violence, sexual violence, and other harmful practices; violence or threats of violence targeting lesbian, gay, bisexual, transgender, queer, or intersex persons; and significant restrictions on workers’ freedom of association, including threats against labor activists.
That definitely explains why the ruler of Jordan, King Abdullah II, would sign a bill making this hideous environment even worse. (It also possibly explains why NSO Group chose to sell its spyware to this country. The Israel-based malware firm has definitely shown a predilection for hawking surveillance tech to human rights abusers.)
The King of Jordan approved a bill Saturday to punish online speech deemed harmful to national unity, according to the Jordanian state news agency, legislation that has drawn accusations from human rights groups of a crackdown on free expression in a country where censorship is on the rise.
The measure makes certain online posts punishable with months of prison time and fines. These include comments “promoting, instigating, aiding, or inciting immorality,” demonstrating “contempt for religion” or “undermining national unity.”
There’s nothing quite like tying a chosen religion to a non-representative form of government. When you do that, you can start writing laws that define “morality” or “unity” in self-serving ways without having to worry about getting your legislation rejected by people actually willing to serve their constituents or rejected by courts as blatantly illegal violations of guaranteed rights.
The country’s government apparently assumes the humans it presides over have no rights. So, they’ll be subject to arrest and possible imprisonment for saying things the government doesn’t like. On top of that, they can expect to be punished for attempting to protect themselves from this punishment, or for being so bold as to point out wrongdoing by law enforcement.
It also punishes those who publish names or pictures of police officers online and outlaws certain methods of maintaining online anonymity.
The king and his most immediate subservients want to be able to easily identify people in need of punishment for violating these new draconian measures. And they don’t want anyone pointing out who’s being tasked with handling arrests for this new list of speech crimes.
As with so many censorial laws these days, it’s an amendment to an existing “cybercrime” bill — the sort of handy foundational material autocrats can use to justify increased domestic surveillance and widespread silencing/punishing of dissent.
Then there’s this, which makes you wonder why the State Department ever bothered taking a look at the human rights situation in Jordan in the first place.
The measure is the latest in a series of crackdowns on freedom of expression in Jordan, a key U.S. ally seen as an important source of stability in the volatile Middle East.
Come on, America. Make better friends. Buddying up with someone more closely aligned to the religion-based dictators surrounding him than the ideals that turned this country into the leader of the free world is never going to work out well.
To date Yaccarino has shown a talent for… using buzzwords to say absolutely meaningless nonsense that everyone knows is laughable. And not much else. The latest example was in an interview where Yaccarino continued to throw out a bunch of buzzwords that add up to nothing, combined with obvious lies. I mean…
“The rebrand really represented a liberation from Twitter, a liberation that allows us to evolve past a legacy mindset and to reimagine how everyone … around the world is going to change how we congregate, how we transact, all in one place,” Yaccarino said, adding that users would soon be able to make video calls and payments through the platform.
“It’s developing into this global town square that is fueled by free expression, where the public gathers in real time,” she said.
This is exactly the kind of empty platitude, meaningless business jargon, nonsense speech that Elon Musk himself would mock mercilessly if it came from a competing platform’s CEO.
“There’s also a lot of hate and a lot of vitriol and conspiracy theories and those attract a lot of eyeballs too,” Eisen countered. “And so, if you’re a brand and a business, why would you feel safe advertising?”
Conceding that it was an “appropriate question,” Yaccarino then claimed some of these “headline comments and phrases need to be continually brought to light and debunked.” Referencing her time as a top ad executive for NBCUniversal, she noted that Twitter was “our number one social partner” and was always considered safe.
“A lot of brands have left,” Eisen shot back.
“I hear you,” Yaccarino replied. “I want to take that last 10 years and put it in perspective, because by all objective metrics, X is a much healthier and safer platform than it was a year ago.”
There is no rational human being on earth who believes this.
Meanwhile, around the same time, exTwitter rolled out its new “brand safety sensitivity settings.” If the platform were really as safe as Yaccarino claims, why would it need these tools?
Of course, all of this is basically begging folks to go looking at where ads are appearing on exTwitter, and Media Matters came up with quite a scoop on that front, showing that major advertisers had their ads appearing next to a verified account that praises Hitler.
Under the leadership of CEO Linda Yaccarino, X (formerly known as Twitter) has been placing ads for brands like The New York Times Co.’s The Athletic, MLB, the Atlanta Falcons, Sports Illustrated, USA Today, Amazon, and Office Depot on a verified pro-Adolf Hitler account that encourages antisemitic harassment. The company continues to monetize the openly antisemitic account despite reportedly acknowledging it had violated the platform’s “rules against violent speech.”
The link includes some more details about the account, New American Union, and the pro-Hitler memes it posted. We’re not posting them here, but anyone looking at that account will come away thinking “boy, that account sure likes Hitler.” And it’s verified.
Soon after the Media Matters article started to go viral, the account was finally suspended. But, as Media Matters notes, the account had been “verified” since April, had amassed many thousands of followers, and had been posting pro-Hitler rhetoric for a while.
CNN later reported that some advertisers whose ads were appearing next to that account have paused all advertising on exTwitter. So, good going, Linda Yaccarino. Your job was to attract advertisers to come back, and somehow you’ve instead convinced more to leave by trying to bullshit them about how the site was safer for brands.
Of course, most of the attention on this story was focused on the big name advertisers and where their brands were appearing. But as Jason O. Gilbert noted on Bluesky, it kinda seems like that story should be secondary to the fact that exTwitter had a VERIFIED pro-Hitler account.
Of course, I kinda see both sides to this. If the focus were just on the neo-Nazi account, people would shrug and say “yeah, well, we already know that Elon is cool with platforming Nazis, what’s the story?” But if you talk about the advertisers, well, they might just pull their ads. The “marketplace of ideas” at work.
Anyway, I look forward to Yaccarino excitedly announcing next week how X has expanded its sensitivity settings to include a “don’t show my ads next to the neo-Nazis that are core to the platform” setting.
But, you know, not everyone likes to read detailed treatises on this subject. Some people prefer fun, action-packed YouTube videos. And we’d like to help you out there too.
So, it was nice to see that the always excellent Legal Eagle recently did a fantastic video version exploring some wrong free speech tropes, including both of the ones mentioned above, along with a few others.
For what it’s worth, he also discusses how private platforms have the absolute right under the 1st Amendment to ban you or remove your content, which people often mistake as being permitted by Section 230, not the 1st Amendment. You may recall that we posted about another recent Legal Eagle video about Section 230, which was also great.
Anyway, there’s not much more to say on this, but I figured many of the folks who enjoy our discussions on the 1st Amendment might, similarly, enjoy this video.
Last year, the Supreme Court handed down a ruling in a school free speech case that came down squarely, if very narrowly, on the side of the student. The student suing over being kicked off the cheerleading squad for sending a Snapchat message saying “fuck school fuck softball fuck cheer fuck everything” prevailed, with the nation’s top court finding her speech, however crude, was protected by the First Amendment.
But it wasn’t a blanket ruling on off-campus speech by students. Schools can still engage in discipline over off-campus speech, but the court suggested they were better off erring on the side of caution than assuming they’re permitted to replace parental supervision in all cases involving off-campus speech. Of particular importance to this case was the government’s interest in providing disruption-free education to other students. The “disruption” claimed by the school in this case was nothing more than a “5-10 minute” disruption of a single class over a period of a couple of days.
The lower courts are now offering their interpretations of this ruling, which created no bright line standard for dealing with off-campus speech. Erring on the side of restraint may be the guideline SCOTUS suggested, but it’s not really a good baseline.
So, we’re getting rulings like this one [PDF] recently issued by the Ninth Circuit Court of Appeals. (h/t Eric Goldman)
In this case, the speech at issue was objectively far more objectionable than the f-bomb-laden mini-rant delivered by the irritated cheerleader.
For example, in early February 2017, Epple uploaded a photograph in which a Black member of the AHS girls’ basketball team was standing next to the team coach, who was also Black, and Epple drew nooses around both their necks and added the caption “twinning is winning.” In another post, he combined (1) a screen shot of a particular Black student’s Instagram post in which she stated “I wanna go back to the old way” with (2) the statement “Do you really tho?”, accompanied by a historical drawing that appears to depict a slave master paddling a naked Black man who is strung up by rope around his hands. On February 11, 2017, he posted a screenshot of texts in which he and a Black classmate were arguing, and he added the caption “Holy shit I’m on the edge of bringing my rope to school on Monday.” Other posts, although not referencing specific students, contained images either depicting, or making light of, Ku Klux Klan violence against Black people. One post included what appears to be a historical photograph of a lynched man still hanging from a tree; another depicts a Klan member in a white hood; and a third combines the caption “Ku klux starter pack” with pictures of a noose, a white hood, a burning torch, and a Black doll.
So, truly terrible stuff from a bunch of minors who had decided to spend their time engaging with each other’s basest instincts. And that’s not even the worst of it. Other posts used derogatory, racist terms like “gorilla,” “nappy ass,” and “nigger.”
On the other hand, there was more at stake in this lawsuit. The Supreme Court’s Mahanoy decision involved someone being banned from participating in an extracurricular activity. The plaintiffs here were first suspended, then expelled, prevented from attending school altogether.
Also of interest to this case was the nature of the account. It was not publicly accessible. It was an invitation-only Instagram group composed of Cedric Epple’s closest confidants. Of course, thirteen can keep a secret if twelve of them are dead, as the saying goes. Eventually, the contents of this invitation-only group were made public, resulting in some actual (at least in comparison to the cheerleading case) disruption.
During the weekend of March 18–19, 2017, one of the account’s followers showed multiple photos from that account to the girls’ basketball player who had been depicted with a noose. On Monday, March 20, that student, in turn, shared what she had learned with several other students who had been targeted by the account’s posts. That same day, one of the followers of the account was asked to lend his phone to a student who claimed to need to call her mother, and while this student had the phone, she took it into the restroom, where she and another student took pictures of some of the contents of the yungcavage account. Those photographs were then shared with other students.
As knowledge of the account rapidly spread, a group of about 10 students gathered at the school, several of whom were upset, yelling, or crying. Although the next class period had started, the students “were all too upset to go to class.” The school’s Principal, Jeff Anderson, asked them to come to the conference room adjacent to his office, where they were joined by two of the school’s Assistant Principals, Melisa Pfohl and Tami Benau. Benau stated that she had “never seen a group of students as upset as these girls were.” The school administrators summoned the school’s counselors and mental health staff to join them, and around the same time, some of the students’ parents (who had presumably been contacted by their children) began to arrive.
In the following days, both students behind the account were suspended before being expelled. Students who had knowledge of the posts wished to speak to their instructors about what they had seen, further disrupting already disrupted classes. Some students expressed their unwillingness to attend classes with these students and others reported feeling scared, bullied, or otherwise unable to resume their studies. A rally and an on-campus demonstration also followed these disclosures, with the demonstration culminating with two of the students who were members of the private Instagram group being punched by other students.
The Ninth Circuit says the facts of this case are distinguishable from the Supreme Court’s 2021 decision. Substantial disruption occurred. And that disruption was (apparently) foreseeable, even if the students did take the precaution of limiting access to their racist comments by operating within the confines of a private social media group.
[O]nce Epple’s posts hit their targets, the school was confronted with a situation in which a number of its students thereby became the subjects of “serious or severe bullying or harassment targeting particular individuals”— which Mahanoy specifically identifies as an “off-campus circumstance[]” in which “[t]he school’s regulatory interests remain significant.” As Epple acknowledges, he was expelled on the ground that he had engaged in “bullying” within the meaning of the generally applicable and speech-neutral prohibitions contained in California Education Code section 48900.4. Although Epple may be correct that his parents have the primary responsibility for policing his off-campus use of social media, the school’s authority and responsibility to act in loco parentis also includes the role of protecting other students from being maltreated by their classmates. Epple’s conduct here strongly implicated that “significant” interest of the school.
While the Appeals Court is obviously correct it was foreseeable the posts would cause disruption after their targets viewed them, that’s not the same thing as being a foreseeable outcome when the messages were still contained by the boundaries of the thirteen-member Instagram group. So, that seems to be a bit of cart-ahead-of-horse reasoning that suggests the plaintiffs should have known it was inevitable their atrocious Instagram posts would be exposed. If it was so obviously foreseeable, you’d think even a bunch of bigoted minors would know better than to create the content in the first place. Then again, stupid people do stupid things all the time, even when the negative outcomes are blatantly obvious. The court explains it this way:
Epple again emphasizes that he did not ever intend for the targets of his posts to ever see them. But having constructed, so to speak, a ticking bomb of vicious targeted abuse that could be readily detonated by anyone following the account, Epple can hardly be surprised that his school did not look the other way when that shrapnel began to hit its targets at the school. And, as we have explained, recognizing an authority in school administrators to respond to the sort of harassment at issue here presents no risk that they will thereby be able to “punish[] students engaged in protected political speech in the comfort of their own homes.” Epple’s actions had a sufficient nexus to AHS, and his discipline fits comfortably within Tinker’s framework and does not threaten the “marketplace of ideas” at AHS.
In the context of this case, Epple’s speech is not protected. He was not making any larger statement about his allegiance to racist factions or expressing displeasure with societal changes. He freely admitted he made these posts for no other reason than to entertain himself and other members of his group — a recognizably juvenile justification for being ignorant and hateful.
Not protected in this context — which involves the recognition of educational institutions having an obligation to protect students from discrimination and maintain disruption-free learning environments. That makes sense. But the concurrence, written by Judge Ronald Gould, suggests this speech should not be protected in any cases involving students and public schools.
Hate speech has no role in our society and contributes little or nothing to the free-flowing marketplace of ideas that is essential to protect in a school environment. Just as a school cannot be forced to teach hate speech, neither should it be forced to entertain and tolerate within its walls hate speech promulgated by arrantly misguided students. When school authorities take action to root out the persistent echoes of racism that arise from time to time in American society, courts should not stop them, instead allowing racist comments to be rooted out and not deemed protected by the First Amendment. These principles apply with cogent force to hate speech that threatens to dehumanize ethnic or racial groups within our multiracial society.
[…]
In my view, civilized society should not tolerate imagery encouraging hate; government bodies, consistent with the Constitution, can and should be able to take steps to stop it.
Judge Gould may briefly refer to “case-by-case basis,” but his proposal suggests governments replace parents in cases involving off-campus speech, even when the speech does not cause significant disruption of on-campus learning, supposedly for the good of the nation as a whole.
His follow-up sentence that ends the next paragraph in his concurrence uses an even broader brush. While most of the paragraph refers to minority students being subjected to hate speech (itself a slippery term with no clear definition), his concluding sentence simply says “government bodies,” which could be read to include any government agency that comes across hate speech. The most charitable reading suggests the judge is still referring to schools. But even so, this would allow schools to directly regulate off-campus behavior — something that may be conducive to rooting out hate speech but is the sort of overreach that has never been considered as an acceptable compromise to accomplish this noble aim.
Germany’s uncomfortable relationship with free speech continues. The country has always been sensitive about certain subjects (rhymes with Bitler and, um, Yahtzee), resulting in laws that suppress speech referring to these subjects, apparently in hopes of preventing a Fourth Reich from taking hold.
But the censorship of speech extends far beyond the lingering aftereffects of Germany’s supremely troubled past. The government has passed laws outlawing speech with the vaguest of contours, like “hate speech” and “fake news.” And it has swung a pretty powerful hammer to ensure cooperation, stripping away intermediary immunity to hold platforms directly accountable for user-generated content. You know, like a nation run by authoritarians, except ones that enact penalties for references to a certain former authoritarian.
Germany may wish to escape its abusive past. But its speech-related laws encourage abuse by powerful people. Allow this timeline to run without interruption long enough, and you’re staring down the barrel of history.
Maybe it won’t be the second coming of national socialism. But it might just be the conversion of Germany into something resembling the USSR farm team East Germany was until the fall of the Berlin Wall. As this New York Times report details, people are being arrested for being careless online — something that suggests far too many local politicians desire a Stasi of their own.
When the police pounded the door before dawn at a home in northwest Germany, a bleary-eyed young man in his boxer shorts answered. The officers asked for his father, who was at work.
They told him that his 51-year-old father was accused of violating laws against online hate speech, insults and misinformation. He had shared an image on Facebook with an inflammatory statement about immigration falsely attributed to a German politician. “Just because someone rapes, robs or is a serious criminal is not a reason for deportation,” the fake remark said.
The police then scoured the home for about 30 minutes, seizing a laptop and tablet as evidence, prosecutors said.
If the police already had copies of the posting, it doesn’t make much sense for them to search a home and seize devices. But that’s what they do. And, according to the New York Times article, this happens nearly one hundred times a day all over the nation, day after day after day.
Normally, when someone suggests efforts like these produce a chilling effect, governments issue statements affirming support for free speech and obliquely suggest the original commenter is misinformed or misinterpreting these actions. Not so with Germany. The chilling effect is the entire point — something the government freely admits.
German authorities have brought charges for insults, threats and harassment. The police have raided homes, confiscated electronics and brought people in for questioning. Judges have enforced fines worth thousands of dollars each and, in some cases, sent offenders to jail. The threat of prosecution, they believe, will not eradicate hate online, but push some of the worst behavior back into the shadows.
This means the earlier efforts — those forcing social media platforms to immediately and proactively remove anything the German government might find offensive — haven’t worked as well as politicians hoped. It was an impossible demand, one made by people who don’t believe anything is impossible if it’s backed by a government mandate and (most importantly) entirely the responsibility of other people.
Since the government can’t make social media companies perform the impossible, prosecutors have decided to go after internet users, who are far easier to threaten, intimidate, and jail into silence. Chilling is what we do, say prosecutors, citing the supposed success similar tactics have had in the fight against online piracy(!):
Daniel Holznagel, a former Justice Ministry official who helped draft the internet enforcement laws passed in 2017, compared the crackdown to going after copyright violators. He said people stopped illegally downloading music and movies as much after authorities began issuing fines and legal warnings.
“You can’t prosecute everyone, but it will have a big effect if you show that prosecution is possible,” said Mr. Holznagel, who is now a judge.
Since it’s almost impossible to tell what will trigger police action and prosecution, German citizens are likely engaging in self-censorship regularly. Sarcasm, irony, parody, shitposting… all of this is under scrutiny, since it’s apparent the government isn’t capable of performing anything but a straightforward reading of user-generated content. It would be almost comical if it weren’t for the police raids, prosecutions, device seizures, and jail time.
No national figures exist on the total number of people charged with online speech-related crimes. But in a review of German state records, The New York Times found more than 8,500 cases. Overall, more than 1,000 people have been charged or punished since 2018, a figure many experts said is probably much higher.
This effort siphons resources from law enforcement agencies asked to police more serious criminal acts — the kind that result in actual victims who can show actual harm, rather than the theoretical harm posed by posts that fall outside of the boundaries set by the German government’s escalating desire to regulate speech.
But that doesn’t mean local police aren’t welcoming the new duties. Some seem to particularly relish literally policing the internet.
Authorities in Lower Saxony raid homes up to multiple times per month, sometimes with a local television crew in tow.
Internet use is under constant surveillance. This always-on monitoring provides law enforcement with targets. Arrestees who refuse to submit to device searches aren’t slowing down investigators. Electronics are sent to crime labs and subjected to forensic searches by Cellebrite devices. Millions of dollars fund these efforts… and for what?
Swen Weiland, a software developer turned internet hate speech investigator, is in charge of unmasking people behind anonymous accounts. He hunts for clues about where a person lives and works, and connections to friends and family. After an unknown Twitter user compared Covid restrictions to the Holocaust, he used an online registry of licensed architects to help identify the culprit as a middle-aged woman.
“I try to find out what they do in their normal life,” Mr. Weiland said. “If I find where they live or their relatives then I can get the real person. The internet does not forget.”
That’s what the government is going after. Germany is in the business of punishing stupidity. But only certain forms of stupidity. Far more threatening posts are ignored by law enforcement. An activist interviewed by the NYT said she was doxxed and threatened by online commenters. When she took this info to law enforcement, officers responded by giving her a brochure about online hate and telling her nothing that was said broke any laws. Doxxing is ok. Threats are ok. Making an extremely terrible analogy? A criminal offense.
So is this:
Last year, Andy Grote, a city senator responsible for public safety and the police in Hamburg, broke the local social distancing rules — which he was in charge of enforcing — by hosting a small election party in a downtown bar.
After Mr. Grote later made remarks admonishing others for hosting parties during the pandemic, a Twitter user wrote: “Du bist so 1 Pimmel” (“You are such a penis”).
Three months later, six police officers raided the house of the man who had posted the insult, looking for his electronic devices.
While it’s somewhat understandable that speech restrictions have been put in place in hopes of preventing history from repeating, the government’s desire to turn ignorance into criminal activity is hugely problematic. Laws like this are never taken off the books. They either linger forever or are subjected to endless expansions, allowing the government to start serving its own interests rather than those it imagines the general public values. There’s no impending slippery slope here. Germany is already headed down it.
The internet is about speech. That’s basically all the internet is. It’s a system for communicating, and that communication is speech. What’s becoming increasingly frustrating to me is how in all of these attempts to regulate the internet around the globe, policymakers (and many others) seem to ignore that, and act as if they can treat internet issues like other non-speech industries. We see it over and over again. Privacy law for the internet? Has huge speech implications. Antitrust for the internet? Yup, speech implications.
That’s not to argue that such regulations can never be crafted in ways that respect free speech rights, but to note that those who completely ignore the free speech implications of their regulations are going to create real problems for free speech.
The latest area where this is showing up is that the UN has been working on a “Cybercrime Treaty.” And, you can argue that having a more global framework for responding to internet-based crime sounds like a good thing, especially as such criminal behavior has been rapidly growing. However, the process is already raising lots of concerns about the potential impact on human rights. And, most specifically, there are massive concerns about how a Cybercrime Treaty might include speech related crimes.
So it is concerning that some UN Member States are proposing vague provisions to combat hate speech to a committee of government representatives (the Ad Hoc Committee) convened by the UN to negotiate a proposed UN Cybercrime treaty. These proposals could make it a cybercrime to humiliate a person or group, or insult a religion using a computer, even if such speech would be legal under international human rights law.
Including offenses based on harmful speech in the treaty, rather than focusing on core cybercrimes, will likely result in overbroad, easily abused laws that will sweep up lawful speech and pose an enormous menace to the free expression rights of people around the world. The UN committee should not make that mistake.
As we’ve been noting for years, “hate speech laws” are almost always abused by governments to silence dissent, rather than protect the marginalized. Indeed, one look at the countries pushing for the Cybercrime Treaty to include hate speech crimes should give you a sense of the intent of the backers:
For example, Jordan proposes using the treaty to criminalize “hate speech or actions related to the insulting of religions or States using information networks or websites,” while Egypt calls for prohibiting the “spreading of strife, sedition, hatred or racism.” Russia, jointly with Belarus, Burundi, China, Nicaragua, and Tajikistan, also proposed to outlaw a wide range of vaguely defined speech intending to criminalize protected speech: “the distribution of materials that call for illegal acts motivated by political, ideological, social, racial, ethnic, or religious hatred or enmity, advocacy and justification of such actions, or to provide access to such materials, by means of ICT (information and communications technology),” as well as “humiliation by means of ICT (information and communications technology) of a person or group of people on account of their race, ethnicity, language, origin or religious affiliation.”
It’s like a who’s who of countries known for oppressing dissent at every opportunity.
Once again, it’s reasonable to argue that there should be some more regulations for the internet, but if you don’t recognize how those will be abused to stifle speech, you’re a part of the problem.
Okay, so this bill is nowhere near as bad as the Texas and Florida bills, or a number of other bills out there about content moderation. But that doesn’t mean it isn’t still pretty damn bad. New York has passed its own variation on a content moderation bill, one that requires websites to have a “hateful conduct policy.”
The entire bill is pretty short and sweet, but the basics are what I said above. It has a very broadly defined hateful conduct definition:
“Hateful conduct” means the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.
Okay, so first off, that’s pretty broad, but also most of that speech is (whether you like it or not) protected under the 1st Amendment. Requiring websites to put in place editorial policies regarding 1st Amendment protected speech raises… 1st Amendment concerns, even if it is left up to the websites what those policies are.
Also, the drafters of this law are trying to pull a fast one on people. By calling it “hateful conduct” rather than “hateful speech,” they’re trying to avoid the 1st Amendment issue that is obviously a problem with this bill. You can regulate conduct but you can’t regulate speech. Here, the bill tries to pretend it’s regulating conduct, but when you read the definition, you realize it’s only talking about speech.
So, yes, in theory you can abide by this bill by putting in place a “hateful conduct” policy that says “we love hateful conduct, we allow it.” But, obviously, the intent of this bill is to use the requirements here to pressure companies into removing speech that is likely protected under the 1st Amendment. That’s… an issue.
Also, given that the definition is somewhat arbitrary, what’s to stop future legislators from expanding the definition? We’ve already seen efforts in many places to make speaking negatively about the cops into “hate speech.”
Next, the law applies to “social media networks” but here, again, the definition is incredibly broad:
“Social media network” means service providers, which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public.
There appear to be no size qualifications whatsoever. So, one could certainly read this law to mean that Techdirt is a “social media network” under the law, and we may be required to create a “hateful conduct” policy for the site or face a fine. But, the moderation that takes place in the comments is not policy driven. It’s community driven. So, requiring a policy makes no sense at all.
And now that’s also a big issue. Because if we’re required to create a policy, and we do so, but it’s our community that decides what’s appropriate, that means that the community might not agree with the policy, and might not follow what’s in the policy. What happens then? Are we subject to consumer protection fines for having a “misleading” policy?
At the very least, New York State pretty much just guaranteed that small sites like ours need to find and pay a lawyer in New York to tell us what we can do to avoid liability.
Do I want hateful conduct on the site? No. But we’ve created ways of dealing with it that don’t require a legally binding “hateful conduct” policy. And it’s absolutely ridiculous (and just totally disconnected from how the world works) to think that forcing websites to have a “hateful conduct” policy will suddenly make sites more aware of hateful conduct.
The whole thing is political theater, disconnected from the actual realities of running a website.
And that’s made especially clear by the next section:
A social media network that conducts business in the state, shall provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct. Such mechanism shall be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website, and shall allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.
So, now every website has to build in special reporting mechanisms, that might not match with how their site actually works? We have the ability to fill out a form and alert us to things, but we also allow people to submit those reports anonymously. As far as I can tell, we might not be able to do that under this law, because we have to be able to “provide a direct response” to anyone who reports information to us. But how do we do that if they don’t give us their contact info? Do we need to build in a whole separate messaging tool?
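For what it’s worth, there is at least one conceivable way to square anonymous reporting with the “direct response” requirement without collecting contact info: hand the reporter a random claim token when they submit, and let them check back with that token later. The sketch below is purely hypothetical; it is not how our forms work, nothing in the law prescribes it, and every function name and the in-memory storage scheme are invented for illustration.

```python
import secrets
from dataclasses import dataclass, field


@dataclass
class Report:
    """One anonymous 'hateful conduct' report and any responses the site has posted."""
    content: str
    responses: list[str] = field(default_factory=list)


# In-memory store keyed by claim token; a real site would need to persist this.
_REPORTS: dict[str, Report] = {}


def submit_report(content: str) -> str:
    """Accept an anonymous report and hand back a random claim token.

    The reporter keeps the token; the site never learns who they are.
    """
    token = secrets.token_urlsafe(16)
    _REPORTS[token] = Report(content=content)
    return token


def respond_to_report(token: str, message: str) -> None:
    """Record how the matter is being handled, addressed only to the token."""
    _REPORTS[token].responses.append(message)


def check_report(token: str) -> list[str]:
    """The anonymous reporter polls for responses using nothing but their token."""
    report = _REPORTS.get(token)
    return report.responses if report else []


if __name__ == "__main__":
    # An anonymous user flags something and holds on to the token.
    token = submit_report("Comment #12345 looks like targeted harassment.")
    # Staff (or, on a community-moderated site, an automated summary) respond later.
    respond_to_report(token, "Reviewed: the comment has been hidden by community flagging.")
    # The reporter checks back without ever having provided contact info.
    print(check_report(token))
```

Of course, even that still means building and maintaining yet another bespoke mechanism, which is exactly the kind of compliance overhead a small site is in no position to absorb.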
Each social media network shall have a clear and concise policy readily available and accessible on their website and application which includes how such social media network will respond and address the reports of incidents of hateful conduct on their platform.
Again, this makes an implicit, and false, assumption that every website that hosts user content works off of policies. That’s not how it always works.
The drafters of this bill then try to save it from constitutional problems by pinky swearing that nothing in it limits rights.
Nothing in this section shall be construed (a) as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as exercising the right of free speech pursuant to the first amendment to the United States Constitution, or
(b) to add to or increase liability of a social media network for anything other than the failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report.
I mean, sure, great, but the only reason to have a law like this is as a weak attempt to force companies to take down 1st Amendment protected speech. But then you add in something like this to pretend that’s not what you’re really doing. Yeah, yeah, sure.
The enforcement of the law is at least somewhat limited. Only the Attorney General can enforce it… but remember, this is in a state where we already have an Attorney General conducting unconstitutional investigations into social media companies, as a blatant deflection from anyone looking too closely at the state’s own failings in stopping a mass shooting. The fines for violating the law are capped at $1,000 per day, which would be nothing for larger companies, but could really hurt smaller ones.
Even if you agree with the general sentiment that websites should do more to remove hateful speech on their sites, that still should make you very concerned about this bill. Because if states like NY can require this of websites, other states can require other kinds of policies, and other concepts to be put in place regarding content moderation.
If you work for the government and the government is leaning towards more power and less accountability, why wouldn’t you be supportive of the government, no matter who’s running the joint? That’s what happened in the Intelligence Community, according to a whistleblower who oversaw the IC’s internal chat services for nearly a decade.
An internal U.S. intelligence messaging system became a “dumpster fire” of hate speech during the Trump administration, a veteran National Security Agency contractor says. And it’s “ongoing,” another Defense Department contractor tells SpyTalk.
Dan Gilmore, who was in charge of overseeing internal chat rooms for the Intelink system for over a decade starting in 2011, says that by late 2020 the system was afire with incendiary hate-filled commentary, especially on “eChirp,” the intelligence community’s clone of Twitter.
None of this is surprising. People get into government work for a number of reasons, but those deeply involved in law enforcement and surveillance rarely get into it to make the world a better place. Law enforcement has long been home to racists and bullies — a culture it has cultivated since its inception as an entity charged with tracking down escaped slaves.
The Intelligence Community isn’t much better. It saw a massive expansion of power following the 9/11 attacks. It was given free rein to track down people who worshipped a different god and had too much pigmentation. If it wasn’t white and Christian, it was suspect — an attitude supported by many Americans who believed anything they didn’t immediately understand or relate to must be dangerous.
Donald Trump didn’t win the popular vote, but he won the votes that mattered. His ascension to power became a justification for all the hatred and bigotry regular people felt they couldn’t express publicly. With Trump in power, hatred for all things not white and presumably “unamerican” became acceptable. Trump’s version of “draining the swamp” consisted of eliminating anyone opposed to his authoritarian dreams and the expansion of power for law enforcement and national security agencies.
Dan Gilmore saw this self-interest unfold in real time. In the aftermath of the 2020 election, IC members were openly supporting the Trump supporters who raided the Capitol building and attacked the law enforcement officers defending it.
Fast forward to late 2020. Hate speech was running rampant on our applications. I’m not being hyperbolic. Racist, homophobic, transphobic, Islamophobic, and misogynistic speech was being posted in many of our applications.
On top of that, there were many employees at CIA, DIA, NSA, and other IC agencies that openly stated that the January 6th terrorist attack on our Capitol was justified.
Gilmore apparently tried several times to inform IC management about the hate speech being distributed by IC internal chat channels. Other IC members were also concerned with what they were seeing. But they were apparently in the minority. And they could be safely ignored because the man (temporarily) in power was publicly supportive of racism, misogyny, and insurrection.
For his attempts to curb this hatred — and for informing other concerned IC members he was doing what he could — Gilmore was fired.
On July 9th, 2021, I was called into a meeting with my company team lead, and he said “We’re going to have to let you go”. I asked why, and he said, “You were told to not give internal information to folks outside the organization, and you did”.
They had chatroom transcripts of what I had said to people outside my organization in reference to internal information in our ticketing system. Keep in mind, all this information is completely unclassified. The information I was providing these government employees was for them to take to their own agencies’ Inspector General. They didn’t trust Intelink to do the right thing, so they were taking their complaints to the next level.
None of this is surprising. The government hates people who point out its wrongdoing. And support for insurrectionists isn’t government employees arguing against their own best interests. In Trump, these employees saw a leader who valued power over accountability, a sentiment they firmly agreed with. And if it meant destroying democracy to ensure a lifetime of employability without the irritation of oversight, so be it. Those who stood in their way — including other government employees not so willing to become part of an authoritarian regime — had to go. By serving Trump, they were setting themselves up for a massive influx of power unrestrained by constitutional checks and balances.
That’s why so many law enforcement officials and officers made the trip to Washington, DC on January 6th to participate in an attempt to deny an elected president his new position. There are few things more perverse than unfettered self-interest, especially when it involves people who are supposed to be servants of the public.
Trump encouraged bigotry and hatred with his statements, policies, and directives. The “war on terror” is largely predicated on the assumption that Muslims are violent and untrustworthy. Anti-immigration efforts are supported by bullshit claims that immigrants are more dangerous than US citizens. The “war on drugs” combines an inherent distrust of foreigners with “too big to be accountable” government thinking — something that has been sustained without meaningful interruption for nearly 50 years.
What was observed by Gilmore was the government freely speaking its mind. It has very little respect for the general public. It has even less for those it considers to be undeserving of rights and protections. The election of Joe Biden meant questions might be asked and powers might be slightly curtailed. That was apparently unacceptable, so IC members cheered on insurrectionists, apparently hoping it would prevent the lawfully elected interloper from rolling back some of the powers granted by a president who mobilized a base loaded with bigots to move America closer to embracing the ideals of authoritarianism.
Gilmore’s exit and his subsequent blacklisting by the US government means that, despite the regime change, no one’s really interested in ejecting racists and misogynists from government positions, even when they openly call for the overthrowing of the same government that employs them. Gilmore is gone but these assholes are still operating surveillance programs and curating collected intel to ensure it aligns with their worldview. The problem hasn’t gone away. It’s simply no longer being observed by someone who finds it problematic.