from the this-is-big-and-dangerous dept
Last year we wrote about a very dangerous case going to the European Court of Human Rights: Delfi AS v. Estonia, which threatened free expression across Europe. Today, the ruling came out and it’s a disaster. In short, websites can be declared liable for things people post in comments. As we explained last year, the details of the case were absolutely crazy. The court had found that even if a website took down comments after people complained, it could still be held liable because it should have anticipated bad comments in the first place. Seriously. In this case, the website had published what everyone agrees was a “balanced” article about “a matter of public interest,” but the court held that the publisher should have known people would post nasty comments, and therefore, even though it ran an automated system to remove comments that people complained about, it was still liable for them.
The European Court of Human Rights agreed to rehear the case, and we hoped for a better outcome this time around — but those hopes have been dashed. The ruling is terrible through and through. First off, it insists that the comments on the news story were clearly “hate speech” and that, as such, they “did not require any linguistic or legal analysis since the remarks were on their face manifestly unlawful.” To the court, this means that it’s obvious such comments should have been censored straight out. That’s troubling for a whole host of reasons at the outset, and highlights the problematic views of expressive freedom in Europe. Even worse, however, the Court then notes that freedom of expression is “interfered with” by this ruling, but it doesn’t seem to care — saying that the interference is “necessary in a democratic society.”
Think about that for a second.
The Court tries to play down the impact of this ruling by saying it doesn’t apply to every open forum, but does apply here because Delfi was a giant news portal, and thus (1) had the ability to check with lawyers about this and (2) was publishing the story and opening it up for comments.
The rest of the ruling is… horrific. It keeps going back to this “hate speech” v. “free speech” dichotomy as if it’s obvious, and even tries to balance the “right to protection of reputation” against the right of freedom of expression. In other words, it’s the kind of ridiculous ruling that will make true free expression advocates scream.
When examining whether there is a need for an interference with freedom of expression in a democratic society in the interests of the “protection of the reputation or rights of others”, the Court may be required to ascertain whether the domestic authorities have struck a fair balance when protecting two values guaranteed by the Convention which may come into conflict with each other in certain cases, namely on the one hand freedom of expression protected by Article 10, and on the other the right to respect for private life enshrined in Article 8
And the court insists that the two things, reputation protection and free speech, “deserve equal respect.” That’s bullshit, frankly. The whole concept of a right to a reputation makes no sense at all. Your reputation is based on what people think of you, and you have no control over what other people think. You can certainly control your own actions, but not what people think of you.
The court sets up a series of areas to explore in determining if Delfi should be held liable for those comments. In the US, thanks to Section 230 of the CDA, we already know the answer here would be “hell no.” But without a Section 230 in Europe — and with the bizarre ideas mentioned above — things get tricky quickly. So even though the court readily agrees that the article Delfi published “was a balanced one, contained no offensive language and gave rise to no arguments about unlawful statements,” it still puts the liability on Delfi. Because the site wanted comments. It actually argues that because Delfi is a professional site, and comments thus convey an economic advantage, Delfi is liable:
As regards the context of the comments, the Court accepts that the news article about the ferry company, published on the Delfi news portal, was a balanced one, contained no offensive language and gave rise to no arguments about unlawful statements in the domestic proceedings. The Court is aware that even such a balanced article on a seemingly neutral topic may provoke fierce discussions on the Internet. Furthermore, it attaches particular weight, in this context, to the nature of the Delfi news portal. It reiterates that Delfi was a professionally managed Internet news portal run on a commercial basis which sought to attract a large number of comments on news articles published by it. The Court observes that the Supreme Court explicitly referred to the fact that the applicant company had integrated the comment environment into its news portal, inviting visitors to the website to complement the news with their own judgments and opinions (comments). According to the findings of the Supreme Court, in the comment environment, the applicant company actively called for comments on the news items appearing on the portal. The number of visits to the applicant company’s portal depended on the number of comments; the revenue earned from advertisements published on the portal, in turn, depended on the number of visits. Thus, the Supreme Court concluded that the applicant company had an economic interest in the posting of comments. In the view of the Supreme Court, the fact that the applicant company was not the writer of the comments did not mean that it had no control over the comment environment…
Also? Having “rules” posted for comments somehow increases the site’s liability, rather than lessens it as any sane person would expect:
The Court also notes in this regard that the “Rules of comment” on the Delfi website stated that the applicant company prohibited the posting of comments that were without substance and/or off-topic, were contrary to good practice, contained threats, insults, obscene expressions or vulgarities, or incited hostility, violence or illegal activities. Such comments could be removed and their authors’ ability to post comments could be restricted. Furthermore, the actual authors of the comments could not modify or delete their comments once they were posted on the applicant company’s news portal — only the applicant company had the technical means to do this. In the light of the above and the Supreme Court’s reasoning, the Court agrees with the Chamber’s finding that the applicant company must be considered to have exercised a substantial degree of control over the comments published on its portal.
Yes, that’s right. They get in more trouble for posting rules saying behave. It’s incredible.
The next key finding: because commenters are anonymous and anonymity is important — and because it’s difficult to identify anonymous commenters — well, fuck it, just put the liability on the site instead. That really does seem to be the reasoning:
According to the Supreme Court’s judgment in the present case, the injured person had the choice of bringing a claim against the applicant company or the authors of the comments. The Court considers that the uncertain effectiveness of measures allowing the identity of the authors of the comments to be established, coupled with the lack of instruments put in place by the applicant company for the same purpose with a view to making it possible for a victim of hate speech to effectively bring a claim against the authors of the comments, are factors that support a finding that the Supreme Court based its judgment on relevant and sufficient grounds. The Court also refers, in this context, to the Krone Verlag (no. 4) judgment, where it found that shifting the risk of the defamed person obtaining redress in defamation proceedings to the media company, which was usually in a better financial position than the defamer, was not as such a disproportionate interference with the media company’s right to freedom of expression….
Further on the question of liability, the court finds that Delfi’s filter wasn’t good enough, and that this failure exposes it to more liability. I wish I were making this up.
Thus, the Court notes that the applicant company cannot be said to have wholly neglected its duty to avoid causing harm to third parties. Nevertheless, and more importantly, the automatic word-based filter used by the applicant company failed to filter out odious hate speech and speech inciting violence posted by readers and thus limited its ability to expeditiously remove the offending comments. The Court reiterates that the majority of the words and expressions in question did not include sophisticated metaphors or contain hidden meanings or subtle threats. They were manifest expressions of hatred and blatant threats to the physical integrity of L. Thus, even if the automatic word-based filter may have been useful in some instances, the facts of the present case demonstrate that it was insufficient for detecting comments whose content did not constitute protected speech under Article 10 of the Convention…. The Court notes that as a consequence of this failure of the filtering mechanism, such clearly unlawful comments remained online for six weeks….
Then the court says that because the “victims” of “hate speech” can’t police the interwebs, clearly it should be the big companies’ responsibility instead:
Moreover, depending on the circumstances, there may be no identifiable individual victim, for example in some cases of hate speech directed against a group of persons or speech directly inciting violence of the type manifested in several of the comments in the present case. In cases where an individual victim exists, he or she may be prevented from notifying an Internet service provider of the alleged violation of his or her rights. The Court attaches weight to the consideration that the ability of a potential victim of hate speech to continuously monitor the Internet is more limited than the ability of a large commercial Internet news portal to prevent or rapidly remove such comments.
Finally, the court says that since the company has stayed in business and is still publishing despite the earlier ruling, that proves this ruling is no big deal for free speech.
The Court also observes that it does not appear that the applicant company had to change its business model as a result of the domestic proceedings. According to the information available, the Delfi news portal has continued to be one of Estonia’s largest Internet publications and by far the most popular for posting comments, the number of which has continued to increase. Anonymous comments — now existing alongside the possibility of posting registered comments, which are displayed to readers first — are still predominant and the applicant company has set up a team of moderators carrying out follow-up moderation of comments posted on the portal (see paragraphs 32 and 83 above). In these circumstances, the Court cannot conclude that the interference with the applicant company’s freedom of expression was disproportionate on that account either.
The ruling is about as bad as you can imagine. It is absolutely going to chill free expression across Europe. Things are a bit confusing because the EU Court of Justice has actually been much more cautious about imposing intermediary liability, and this ruling contradicts some of those rulings; but since the two courts are separate and not even part of the same system, it’s not clear which prevails. It is quite likely, however, that many will seize upon this European Court of Human Rights ruling to go after websites that allow comments, in an attempt to block free expression. It is going to force many sites to shut down open comments, curtail forums, or moderate them much more heavily.
For a Europe that is supposedly trying to build up a bigger internet industry, this ruling is a complete disaster, considering just how much internet innovation is based on enabling and allowing free expression.
There is a dissenting opinion from two judges on the court, who note the “collateral censorship” that is likely to occur out of all of this.
In this judgment the Court has approved a liability system that imposes a requirement of constructive knowledge on active Internet intermediaries (that is, hosts who provide their own content and open their intermediary services for third parties to comment on that content). We find the potential consequences of this standard troubling. The consequences are easy to foresee. For the sake of preventing defamation of all kinds, and perhaps all “illegal” activities, all comments will have to be monitored from the moment they are posted. As a consequence, active intermediaries and blog operators will have considerable incentives to discontinue offering a comments feature, and the fear of liability may lead to additional self-censorship by operators. This is an invitation to self-censorship at its worst.
It further notes how this works — in such a simple manner it’s disturbing that the court didn’t get it:
Governments may not always be directly censoring expression, but by putting pressure and imposing liability on those who control the technological infrastructure (ISPs, etc.), they create an environment in which collateral or private-party censorship is the inevitable result. Collateral censorship “occurs when the state holds one private party A liable for the speech of another private party B, and A has the power to block, censor, or otherwise control access to B’s speech”. Because A is liable for someone else’s speech, A has strong incentives to over-censor, to limit access, and to deny B’s ability to communicate using the platform that A controls. In effect, the fear of liability causes A to impose prior restraints on B’s speech and to stifle even protected speech. “What looks like a problem from the standpoint of free expression … may look like an opportunity from the standpoint of governments that cannot easily locate anonymous speakers and want to ensure that harmful or illegal speech does not propagate.” These technological tools for reviewing content before it is communicated online lead (among other things) to: deliberate overbreadth; limited procedural protections (the action is taken outside the context of a trial); and shifting of the burden of error costs (the entity in charge of filtering will err on the side of protecting its own liability, rather than protecting freedom of expression).
It’s disappointing they were unable to convince their colleagues on this issue. This ruling is going to cause serious problems in Europe.
Filed Under: cda 230, comments, defamation, estonia, europe, european court of human rights, free expression, free speech, hate speech, intermediary liability, liability, moderating comments, secondary liability