EU Commission Says Social Media Companies Must Take Down 'Terrorist Content' Within One Hour

from the plus-more-internet-hobbling-guidelines dept

Once social media companies and websites began acquiescing to EU Commission demands for content takedown, the end result was obvious. Whatever was already in place would continually be ratcheted up. And every time companies failed to do the impossible, the EU Commission would appear on their virtual doorsteps, demanding they be faster and more proactive.

Facebook, Twitter, Google, and Microsoft all agreed to remove hate speech and other targeted content within 24 hours, following a long bitching session from EU regulators about how long it took these companies to comply with takedown orders. As Tim Geigner pointed out late last year, the only thing tech companies gained from this acquiescence was a reason to engage in proactive censorship.

Because if a week or so, often less, isn’t enough, what will be? You can bet that if these sites got it down to 3 days, the EU would demand it be done in 2. If 2, then 1. If 1? Well, then perhaps internet companies should become proficient in censoring speech the EU doesn’t like before it ever appears.

Even proactive censorship isn’t enough for the EU Commission. It has released a new set of recommendations [PDF] for social media companies that sharply shortens the mandated response time. The Commission believes so-called “terrorist” content should be so easy to spot that companies will have no problem staying in compliance.

Given that terrorist content is typically most harmful in the first hour of its appearance online and given the specific expertise and responsibilities of competent authorities and Europol, referrals should be assessed and, where appropriate, acted upon within one hour, as a general rule.

Yes, the EU Commission wants terrorist content vanished in under an hour and proclaims, without citing authorities, that the expertise of government agencies will make compliance un-impossible. The Commission also says it should be easy to keep removed content from popping up somewhere else, because it’s compiled a “Database of Hashes.”
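For readers wondering what a “Database of Hashes” actually buys you, here is a minimal sketch in Python of how hash-based re-upload blocking works. Everything in it (the hash set, file paths, function names) is hypothetical, not the Commission’s or any platform’s actual system:

```python
import hashlib

# Hypothetical store of hashes of previously removed files. In a real
# deployment this would be a shared industry database; here it's just
# an in-memory set for illustration.
REMOVED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_previously_removed(path: str) -> bool:
    """Return True if this exact file matches a removed-content hash."""
    return sha256_of_file(path) in REMOVED_HASHES

# Hypothetical upload check:
# if is_previously_removed("/tmp/upload.mp4"):
#     reject_upload()  # platform-side handler, not a real API
```

The catch, and the reason “it’s in the database” doesn’t make compliance easy, is that a cryptographic hash only matches exact copies: re-encode the video, trim a frame, or flip a single byte and the hash no longer matches. Perceptual hashing schemes (PhotoDNA is the best-known example) tolerate some edits, but they do so by matching “similar” content, which is exactly where the false positives the rest of this post worries about come from.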

Another bad idea that cropped up a few years ago makes a return in this Commission report. The EU wants to create intermediary liability for platforms under the concept of “duty of care.” It would hold platforms directly responsible for not preventing the dissemination of harmful content. This would subject social media platforms to a higher standard than that imposed on European law enforcement agencies involved in policing social media content.

In order to benefit from that liability exemption, hosting service providers are to act expeditiously to remove or disable access to illegal information that they store upon obtaining actual knowledge thereof and, as regards claims for damages, awareness of facts or circumstances from which the illegal activity or information is apparent. They can obtain such knowledge and awareness, inter alia, through notices submitted to them. As such, Directive 2000/31/EC constitutes the basis for the development of procedures for removing and disabling access to illegal information. That Directive also allows for the possibility for Member States of requiring the service providers concerned to apply a duty of care in respect of illegal content which they might store.

This would apply to any illegal content, from hate speech to pirated content to child porn. All of it is treated equally under certain portions of the Commission’s rules, even when there are clearly different levels of severity in the punishments applied to violators.

In accordance with the horizontal approach underlying the liability exemption laid down in Article 14 of Directive 2000/31/EC, this Recommendation should be applied to any type of content which is not in compliance with Union law or with the law of Member States, irrespective of the precise subject matter or nature of those laws…

The EU Commission not only demands the impossible with its one-hour takedowns, but holds social media companies to a standard they cannot possibly meet. On one hand, the Commission is clearly pushing for proactive removal of content. On the other hand, it wants tech companies to shoulder as much of the blame as possible when things go wrong.

Given that fast removal of or disabling of access to illegal content is often essential in order to limit wider dissemination and harm, those responsibilities imply inter alia that the service providers concerned should be able to take swift decisions as regards possible actions with respect to illegal content online. Those responsibilities also imply that they should put in place effective and appropriate safeguards, in particular with a view to ensuring that they act in a diligent and proportionate manner and to preventing [sic] the unintended removal of content which is not illegal.

The Commission follows this by saying over-censoring of content can be combated by allowing those targeted to object to a takedown by filing a counter-notice. It then undercuts this by suggesting certain government agency requests should never be questioned, but rather complied with immediately.

[G]iven the nature of the content at issue, the aim of such a counter-notice procedure and the additional burden it entails for hosting service providers, there is no justification for recommending to provide such information about that decision and that possibility to contest the decision where it is manifest that the content in question is illegal content and relates to serious criminal offences involving a threat to the life or safety of persons, such as offences specified in Directive (EU) 2017/541 and Directive 2011/93/EU. In addition, in certain cases, reasons of public policy and public security, and in particular reasons related to the prevention, investigation, detection and prosecution of criminal offences, may justify not directly providing that information to the content provider concerned. Therefore, hosting service providers should not do so where a competent authority has made a request to that effect, based on reasons of public policy and public security, for as long as that authority requested in light of those reasons.

These recommendations will definitely cause all kinds of collateral damage, mainly through proactive blocking of content that may not violate any EU law. It shifts all of the burden (and the blame) to tech companies with the added bonus of EU fining mechanisms kicking into gear 60 minutes after a takedown request is sent. The report basically says the EU Commission will never be satisfied by social media company moderation efforts. There will always be additional demands, no matter the level of compliance. And this is happening on a flattened playing field where all illegal content is pretty much treated as equally problematic, even if the one-hour response requirement is limited to “terrorist content” only at the moment.



Comments on “EU Commission Says Social Media Companies Must Take Down 'Terrorist Content' Within One Hour”

66 Comments
Ninja (profile) says:

It will keep climbing down until the mandated removal interval is “instantly”. Some platforms folded to the whining because of the money involved, but they should’ve expected that once they gave a hand, the whiners would not stop until they’d taken the entire body.

The whiners will keep whining and pushing for laws into eternity. And everybody will lose, including those platforms.

That One Guy (profile) says:

Re: Re:

It will keep climbing down until the mandated removal interval is "instantly".

Depending on what you mean, it could be worse than that. The next step after ‘as soon as it’s put up it needs to be taken down’ is that it’s never allowed up in the first place.

Everything must be vetted before it’s allowed to be posted, no exceptions (well, other than ‘official’ content of course…), so that no ‘terroristic content’ can possibly make it onto the platforms. That this will all but destroy them is a price the politicians involved are willing to have others pay.

Anonymous Anonymous Coward (profile) says:

Oh, but only if it were actually possible

I am waiting for the day when such governmental ‘requests’ are backed up with the personnel and protocols to actually execute their insane demands, at their own cost (I feel bad about the taxpayers, though). Then, when THEY find out it isn’t actually feasible, we’ll have the opportunity to enjoy the ensuing floor show and revel in the popcorn market spike.

Anonymous Coward says:

Re: Oh, but only if it were actually possible

Sometimes I wonder if it’s going to herald a return to the TV age. Maybe the telcos are hoping we’ll all give up on the Internet out of sheer frustration if it becomes that strict?

One-way communication with the people from “reliable sources” was the best thing, and regular plebs were limited to the “Letter to the Editor” or being interviewed on TV for expressing an outside (vetted) opinion.

By that logic, it’d be much safer for everybody if everyone’s words are checked over first, like they were back then. We can’t let “dangerous”/”offensive” content on the Internet, at any cost to speaking freely! /sarcasm

PaulT (profile) says:

Re: Re: Oh, but only if it were actually possible

“Maybe the telcos are hoping we’ll all give up on the Internet out of sheer frustration if it becomes that strict?”

Honestly, that is what they really want. Corporations tied closely to the way things happened pre-Internet would love it if a broadcast model was the only one that was left to be viable, while the governments would love it if they could control the narrative like they used to.

That’s why there’s such a divide between “silicon valley” and the telcos/government. The latter either don’t understand the modern world or profited far easier from the way things used to be, while the former both understand it and have worked out how to leverage it to their benefit.

Anonymous Coward says:

Currently I think it will be impossible to manage. But I do see a way that might expedite the process and get the speeds the government wants: couldn’t the government maintain a database of terroristic content that social media servers can reference to ban? That puts the work back on the government and would probably be much faster at pulling content down. You will still get false positives, and I think the entire system is a dumb waste of time, but it makes the government do all the work, which is how it should be in situations like this.

Wendy Cockcroft (user link) says:

Re: Re:

Oh dear…

First of all, how would they populate their database? This content is created by individuals and groups who then post it to social media sites using video-sharing services. Therefore, the government is not going to get this first; the platforms are.

Due to the government’s own rules, this stuff has to be taken down within the hour, which is not enough time for it to be caught by the government, unless they issue an order that copies must be made of all items taken down and sent to the government, thereby raising the cost of compliance.

Therefore, there’s no way of shifting the burden to the government as you describe it; they’d be reliant on ISPs to find, identify, store, and send the dodgy content in order to set up and maintain their database.

Anonymous Coward says:

Re: So why is this only for "Social media content"?

People participating in social media are freer to dispute and/or break established narratives from larger players. But if they call it dangerous and remove it within an hour, the information won’t spread that far. At least, that’s what I’m guessing they think.

Keep the world moderately dangerous (or make words themselves “dangerous” under the law if your country is safer) and the censorship will make dumb people feel a false sense of security while they lose their freedoms.

TheResidentSkeptic (profile) says:

Time Travel is NOT possible...

Making all social media go away, and Google stop indexing their sites, and all the rest of the internet go away is just NOT going to bring back the halcyon days of the 1950s and the supremacy of newspapers. No matter how much they wish it would, their time in the sun has ended.

The internet removed government control of the messengers; thus removing control of the message.

And *that* genie is NOT going back into the bottle…

So keep on trying to kill the internet one piece at a time… and it will keep up with its normal response:

“Damage Detected. Re-Routing”

Anonymous Coward says:

Especially interesting with respect to the new Polish law making it illegal to ‘accuse’ the Polish state of being complicit in Nazi war crimes. The law has taken effect despite heavy criticism from other EU countries. But any online discussion on the merits of this law would have to be shut down by social media companies if so directed by ‘competent (Polish) authorities’.

Anonymous Coward says:

Freedom of speech

Needs to be something the people believe in, or this is what you get.

I keep trying to tell people that the harder you fight what you think is wrong, the more damage you wind up doing to yourself. By all means, go ahead and grab a pound of meat and then stick your hand into the grinder with it to make sure all of it gets ground up. That is how stupid people are, on all sides of this issue.

Right now, crazy has been called sane!

That One Guy (profile) says:

Pointy-haired boss syndrome...

Where everything is easy when you don’t have to do it.

It’d be interesting if the companies in question were to flat out call the politicians out on this. Demand that, if it’s so easy to spot the content in question, they provide examples.

Out of these 10 videos, which should be removed?

How about out of 100?

1,000?

10,000?

Keep in mind the allowed time-frame isn’t any different from the first set to the last: you had one hour to go over ten videos, and you have one hour to go over ten thousand.

If it’s really so easy then they should have no problem keeping up with the task as it scales up to what they are demanding the companies manage.

If the response when they fail is ‘well, hire more people!’, demand that they flat out state how many people they think the companies should be obligated to hire: how many extra bodies they want warming seats, checking through everything to ensure only approved content is allowed to be posted.

Add to that the government demanding direct, unquestionable censorship power, in being able to state ‘this is to be taken down, and you cannot contest that decision’, and this goes from ‘really bad’ to ‘disastrously bad’ mighty quick.

Anonymous Coward says:

Re: Pointy-haired boss syndrome...

Some of the regulations around the internet are starting to look a lot like some of the law around copyrights with respect to libraries. Just as libraries would never be allowed to exist if they hadn’t already been a longstanding thing before much of the law was put into place, it seems we’re quickly reaching a point where the internet couldn’t have ever become a thing if it had started in a later era.

ECA (profile) says:

Who is this group??

I suggest some of you look this up.

THIS ISN’T the EU as a whole.
This is the group that is supposed to be responsible for interactions and trade between the EU countries.
It’s a bunch of WELL PAID persons, funded by each of the EU states, who are supposed to represent EACH EU state (they love passing laws, for some stupid reason).

1. This is one step in the concept of controlling NEWS/INFORMATION/distribution.
2. WHO THE F… IS THIS??
3. WHOSE idea was this??

NOW for the fun part:
HOW BIG is Google? Could we say that IF Google wanted to, about half the internet would no longer be accessible to the EU??

DNY (profile) says:

Re: Re: Re:

Not likely. While Spain has some influence, Poland’s influence on the European Commission is just about nil. A proposal this sweeping isn’t going to come out of the Commission without France and Germany being behind it. The only authoritarian influence in play here is the EU itself.

Quite frankly this sort of thing should make everyone in the UK glad that “Leave” won, no matter how rocky the change to trading with the EU under WTO rules proves to be. (Yes, I think it will come to that, precisely because of the “we are to be obeyed” attitude of the European Commission.)

PaulT (profile) says:

Re: Re: Re: Re:

“Quite frankly this sort of thing should make everyone in the UK glad that “Leave” won”

Only if you’re delusional enough to think that similar and worse rulings won’t come out of Westminster. Frankly, if you look at the history of the EU and the Tories, we have actually been rescued from far worse things already being made law with fewer protections for the public. Things will get worse for us; the only thing that will change is that people like you no longer have the EU boogeyman to blame (although, no doubt, your tabloids will find a way to tell you to blame them anyway).

tp (profile) says:

Companies themselves to blame

> And every time companies failed to do the impossible, the EU Commission would appear on their virtual doorsteps, demanding they be faster and more proactive.

The real reason is that these companies have taken responsibility for a larger amount of content than they can actually handle properly. The EU’s demands are perfectly valid for smaller/quicker companies who don’t have huge market reach. The large companies like Facebook, Google, and Twitter were just greedy when they spread their networks to the whole world. If they can’t handle their market reach, they have the alternative of reducing their global reach or hiring more people to handle the problems. But any idea that the EU’s demands are somehow unreasonable simply because the large companies cannot fulfill the requirements is just crazy. It’s the companies’ own problem that they wanted the whole world to use their platforms.

Anonymous Coward says:

Re: Companies themselves to blame

Think of the social media sites as pubs and cafes, where there is no expectation that the owner controls what is being discussed by its patrons, and indeed most of the time does not even hear those conversations. They are not newspapers, where an editor chooses the content to be published.

Anonymous Coward says:

Re: Re: Re: Companies themselves to blame

Owner of the pub will still call police every time you bring your semiautomatic machine gun to the party.

That’s a great example. And in your example, the owner of the pub wouldn’t be held liable for the actions of the patron who brought the semiautomatic machine gun to the party. Even if they had a habit of kicking out other people who they’d noticed had guns before. And pub owners are also not responsible for finding all semiautomatic machine guns their patrons might bring within an hour. And if our analogy were made closer to the truth, the pub would hold a few million people, and thousands of them would have brought black-painted Nerf guns.

tp (profile) says:

Re: Re: Re:2 Companies themselves to blame

And pub owners are also not responsible for finding all semiautomatic machine guns their patrons might bring within an hour.

Well, they actually are responsible if some lunatic shoots people on their premises. This is why there are professional guards at the door, so that they detect the guns before letting people inside the party place. Some parties even check for knives and other useful gadgets before letting partygoers into the party place.

Organizers of large gatherings are obviously responsible if, during the event, something bad happens and people get injured or killed.

Obviously the damage with social media sites is of a different nature, and terrorist material has a tendency to be spammed to large areas of the world, in order to find the people who are manipulable into working for the terrorists. This is why the platforms need to be careful with content quality before spamming the material to large groups of people.

Anonymous Coward says:

Re: Re: Re:3 Companies themselves to blame

Well, they actually are responsible if some lunatic shoots people on their premises. This is why there are professional guards at the door,

So would you hold the school staff, and the cops guarding the place responsible for deaths in the Florida school shooting, as they failed to keep the gunman out?

tp (profile) says:

Re: Re: Re:4 Companies themselves to blame

the cops guarding the place responsible for deaths in the Florida school shooting, as they failed to keep the gunman out?

Yes. If the professional security people can’t do the job, who can? Of course it’s a team effort in schools, so teachers can report it if they detect people going in the wrong direction, but there are always people who are responsible when things don’t go as planned.

There’s a reason why security people get paid: so that everyone else can feel safe and secure against large masses of people and whatever can happen when people go broken. Detecting and fixing the problems is the responsibility of the professional guards, teachers, police, and everyone else who can detect the activity.

Social media companies are experts in social media’s effects on the world, so they ought to be controlling whatever is happening in that area.

Note that a pub will have one professional guard per 50 people visiting their premises, and I’m not sure if Facebook and Twitter have that many employees… Maybe they’re running out of resources to handle their impact on the world.

Anonymous Coward says:

Re: Re: Re:5 Companies themselves to blame

There is a reason security people get paid: so that when shit goes down, there is somebody on hand to deal with it. Not so they can Minority Report every possible instance of something going at all wrong.

And frankly this isn’t akin to the cops guarding the place being responsible for deaths in the Florida school shooting. This is the cops guarding the place being responsible for not putting every single individual on campus through an x-ray machine, pat-down, and full search every hour they remain on the premises, with every word spoken scrubbed for anything law enforcement might not like, actual context, illegality, or threat to safety be damned.

Where two students talking about a movie can be taken out of context to make them the next shooters. Where expressing displeasure with the existing administration gets you treated as a terrorist. All because someone in charge is ‘just being careful’, lest they be held personally responsible for that one conversation being the one-in-a-million statement that precedes an attack.

I am not using accurate statistics, but giving an indication of scale. It is Facebook’s responsibility to become investigative officers. It is Twitter’s job to personally vet every insignificant cat video and selfie. In the example of a bar, listening devices must be planted on every patron and every second of conversation listened to in real time to ensure that the patrons don’t say something that might annoy someone whose ACTUAL JOB IT IS to do investigative work.

Would you want to see your bar have to hire enough people to put your patrons under more scrutiny than a max security prison? On your own dime? Because stopping one unacceptable comment or conversation is worth putting the dozens, hundreds, THOUSANDS of other conversations under the microscope?

In your example, the bar would shut down, the burden of liability too great to police effectively, profitably, or frankly at all. You’d either see ‘No talking’ signs posted everywhere with demands that every person entering the bar be strip-searched, or you’d see the next bar fight land the bar owner in jail on assault charges for not pre-empting a confrontation. Because ‘they were responsible for making everyone feel safe and secure’.

One might say ‘Well, they are being given an hour, it’s not instantaneous’, but (1) if companies bend to the 1-hour timeframe, you can bet the next step is for politicians to DEMAND everything be verified, vetted, and approved before being put online, and (2) if you compare the scale of conversations happening on social media to conversations happening at the bar or school in your example, the burden is… frankly, still not equivalent. Companies are still given the more impossible task.

I’m sick of hearing governments demand everyone else do law enforcement’s job for it. Let’s flip the scenario here.

A murderer got away with a crime? A theft occurred? Law enforcement has one hour, hell, let’s be generous, one day, to solve the crime. Otherwise they are liable for the damage caused, especially if the same individual commits another crime. The burden is too great for law enforcement to keep up? Hire more people! Apparently it’s that simple, if platform holders are being held to such a standard! I mean, that’s why we hire law enforcement, right? If they can’t do the job, who can? There are always people who are responsible when things don’t go as planned. There’s a reason why police get paid: so that everyone else can feel safe and secure against large masses of people and whatever can happen when people go broken. Detecting and fixing the problems is the responsibility of law enforcement, who can detect the activity.

Law enforcement are experts in crime’s effects on the world, so they ought to be controlling whatever is happening in that area.

PaulT (profile) says:

Re: Re: Re:6 Companies themselves to blame

“Law enforcement has one hour, hell, let’s be generous, one day, to solve the crime. Otherwise they are liable for the damage caused, especially if the same individual commits another crime”

Unfortunately, history has proven that when authorities are under pressure to get results, all it means is that innocent people are railroaded for crimes they didn’t commit. Adding time pressure means they just try to get any conviction, not investigate the crime properly and find out who actually did it.

That’s actually one of the problems here – social media companies will be forced to suppress a huge amount of legitimate speech because they’re afraid of something slipping through without their control. It seems that the reality is beyond tp’s ability to comprehend, but it’s not a question of manpower or the site’s ability to process actual complaints.

Wendy Cockcroft (user link) says:

Re: Re: Re:7 Companies themselves to blame

In other words, it’s a demand-side problem, and if history proves anything, it’s that you can’t solve social problems by banning stuff.

While it is entirely reasonable to expect ISPs to remove questionable content in a timely fashion, it takes human eyeballs to decide what is or isn’t acceptable within the company’s TOS.

They’re already frantically playing whack-a-mole and often deleting innocent content. It’s unreasonable to expect them to speed the process up as it’d mean more innocent content gets taken down with the bad stuff.

Banning stuff is easy. It’s a damn sight harder to tackle the social issues that drive people to terrorism, etc.

PaulT (profile) says:

Re: Re: Re:8 Companies themselves to blame

“While it is entirely reasonable to expect ISPs to remove questionable content in a timely fashion…”

Well, that’s really the issue, isn’t it? Saying “you should remove content in a timely manner after it’s been identified” is one thing, and an hour shouldn’t be too much to ask under those circumstances.

However, what they’re actually being asked to do is identify the content, then remove within an hour. That’s a completely different request. This is where the disconnect lies, I think – some people don’t realise that the first task takes a lot more than the second.

“Banning stuff is easy. It’s a damn sight harder to tackle the social issues that drive people to terrorism, etc.”

Which, frankly, is why we hear so much idiotic crap like this. It’s far easier to point to a third party and blame them and/or pass the responsibility to them than it is to fix root causes. The real fixes are both very long-term actions and fairly boring. It’s much easier to get voted in based on easy “fixes” that do nothing than it is to tackle grassroots issues that will bear real fruit in a decade.

Anonymous Coward says:

Re: Re: Re:5 Companies themselves to blame

Thank goodness you are not in a position where you would ever be considered an expert in any law.

Especially since the entire reason for your belief that fair use doesn’t exist is that it takes too long to prove. You know what other bits of law take too long to prove? Try the whole damn thing…

PaulT (profile) says:

Re: Re: Re:3 Companies themselves to blame

“Well, they actually are responsible if some lunatic shoots people on their premises. This is why there are professional guards at the door”

As usual, thank fuck I don’t live in a place where guns are so common that this is expected to be the norm. I’ll happily wander in and out of my local pubs without having to pass through armed guards or risk being shot by other people who go there, thanks.

JoeCool (profile) says:

Re: Re: Re:4 Companies themselves to blame

As usual, he doesn’t know what he’s talking about. I’ve never HEARD of a bar with guards, much less one per fifty customers. Dance clubs will have a screener at the door to make sure there aren’t so many people inside that it would violate fire regulations, or to check IDs if there is an age restriction at the club, but they aren’t guards and don’t protect anyone in the club. In a VERY dangerous neighborhood, the bartender will often be armed, but he’s usually the only one.

PaulT (profile) says:

Re: Re: Re:5 Companies themselves to blame

Oh, there are certainly bars with bouncers at the door, and definitely clubs. Weekends in the early hours can get a little hairy, even in countries where nobody ever has a gun. I was in Glasgow this weekend just gone, and you had better believe that bouncers guard the city centre bars there, even the hotel bars, and often with good reason. But, no guns.

But, he said pubs. Even with cultural differences, if you’re going to a pub that can’t have more than 50 people there without people packing deadly weapons, you’re either in a very, very bad part of town, or you’re a lying asshole. I think I know which one he is.

Anonymous Coward says:

>Given that terrorist content is typically most harmful in the first hour of its appearance online…

Who, aside from a microcephalic lunatic with terminal hydrophobia, would give that?

It’s most harmful in that brief period when only the people who already know about it can find it? Really? REALLY?

Citation needed: not that we would believe the citation, but at least we would know that the source of the citation was a professional long-nosed incendiary-trousers prevaricator.

Anonymous Coward says:

These politicians clearly aren’t familiar with the dickish traits of humanity. Put this into law and you will have at least thousands of people doing their best to post content that is so precariously on the fence, yet still perfectly legal, that the individual tasked with going over thousands of messages in the next hour just to keep their job won’t know what the hell to do. Then that content gets censored. Then they toe THAT line. Then that content gets censored. The corporation is gradually forced to ratchet up pre-emptive censorship to the point where my cat video is considered anti-Semitic, war-mongering hate speech because my cat’s fur makes it look like it has a Hitler ’stache.

Wendy Cockcroft (user link) says:

Censorship by degrees

This is nothing but a power-grab, an effort to rule the internet itself. They know damn well it’s impossible to comply because this is a demand-side problem: people want to be gits online. How do you stop that at source, where the problem is?

And how long till it extends to copyrighted items? That’s where it’s headed, people. Historically, mission creep towards copyright enforcement has always been the case.

John85851 (profile) says:

What is terrorist content?

Classifying something as “terrorist content” sounds like classifying any adult image as “porn”: sure, there’s the obvious stuff, but what about the not-so-obvious stuff? Who sets the rules?

For example, if someone says “Death to all Christians”, then that could probably be a terrorist threat.
But if someone says “Death to all Muslims”, then they’re repeating what so many other people (and politicians) are thinking.
Yet saying “death to anyone” should be treated the same.
