We've written many times about the importance of protection against secondary liability
for websites, such that they're not held liable for what their users do. In the US, thankfully, we have Section 230 of the CDA
, which clearly states that websites cannot be held liable for speech made by their users. Frankly, we shouldn't need
such a law, because it should be obvious: you don't blame the site for the comments made by others. That's just a basic question of properly placing liability on those responsible. But, in a world of Steve Dallas lawsuits
, in which people will always sue companies with deep pockets, it makes sense to have explicit safe harbors to stop bogus litigation.
Somehow, with so much focus on the importance of secondary liability, we happened to miss an absolutely insane ruling
that came out of the European Court of Human Rights last fall, in the case of Delfi AS v. Estonia, which basically said that any website that allows comments can be liable for those comments. In fact, it found that even when a site took down comments (automatically!) following complaints, it could still be liable, because it should have blocked those comments from going up in the first place. Bizarrely, the court basically says the site should have known that the article in question might lead to negative reactions, and therefore should have blocked comments:
In addressing this question, the Court first examined the context of the comments. Although the Court acknowledged that the news article itself was balanced and addressed a matter of public interest, it considered that Delfi “could have realised that it might cause negative reactions against the shipping company and its managers”. It also considered that there was “a higher-than-average risk that the negative comments could go beyond the boundaries of acceptable criticism and reach the level of gratuitous insult or hate speech.” Accordingly, the Court concluded that Delfi should have exercised particular caution in order to avoid liability.
Next, the Court examined the steps taken by Delfi to deal with readers’ comments. In particular, the Court noted that Delfi had put in place a notice-and-takedown system and an automatic filter based on certain ‘vulgar’ words. The Court concluded that the filter, in particular, was “insufficient for preventing harm being caused to third parties”. Although the notice-and-takedown system was easy to use – it did not require anything more than clicking on a reporting button – and the comments had been removed immediately after notice had been received, the comments had been accessible to the public for six weeks.
The Court considered that the applicant company “was in a position to know about an article to be published, to predict the nature of the possible comments prompted by it and, above all, to take technical or manual measures to prevent defamatory statements from being made public”.
Even more troubling for those of us who believe in the importance and value of unregistered and anonymous commenting, the court found those features to be particularly problematic:
By allowing comments to be made by non-registered users, Delfi had assumed a certain responsibility for them. The Court further noted that “the spread of the Internet and the possibility – or for some purposes the danger – that information once made public will remain public and circulate forever, calls for caution”. In the Court’s view, it was a daunting task at the best of times – including for the applicant – to identify and remove defamatory comments. It would be even more onerous for a potentially injured person, “who would be less likely to possess resources for continual monitoring of the Internet”.
The reason we're bringing this up now is that plenty of folks, quite rightly, freaked out about this ruling and asked the European Court of Human Rights to reconsider. And that's now going to happen in early July. The Financial Times has a long and quite interesting look at the case and related issues
, including a discussion at the beginning about the nature of online comments. For many years we've talked up the value of anonymous comments
and how wonderful they've been for our community here
. We've always taken an exceptionally light touch to moderation, allowing anyone to comment and just trying to weed out the spam. And it's worked well for us. A ruling like the one above doesn't directly impact us, seeing as we're an American company with all our servers here, but it's immensely troubling in general and could create widespread chilling effects on any site that relies on user-generated content. But it goes beyond that:
For Eric Barendt, Goodman Professor of Media Law at University College London from 1990 until 2010, the ruling doesn’t adequately balance freedom of speech against an individual’s right to protect his or her reputation. “I wouldn’t stick my neck out to say the ECtHR’s judgment was ridiculous,” he tells me, “but I know many people who would. How bizarre that this case could be the straw that breaks the camel’s back.”
The judgment will not only affect whistleblowers, says Aidan Eardley, a London-based barrister specialising in data protection and media-related human rights law. “It’s also bad news for people who want to comment about sensitive personal issues such as domestic abuse, sexual identity, religious persecution, etc.”
As Sarah Laitner, the FT’s communities editor, says: “It’s important to remove any hurdles a reader may face to participation. Some people feel that they are able to comment more freely if they can use a pseudonym.”
On July 9th, the Court will reconsider its original ruling, and for the sake of free speech online, we hope it reverses its earlier ruling. Between this and the recent right to be forgotten
ruling in the EU Court of Justice, Europe is quickly becoming a free speech nightmare. While these rulings may have the best of intentions, the wider impact of both could go a long way toward stifling public participation and comment.