Brazil’s Superior Court of Justice (STJ), the highest court for non-constitutional questions of federal law, has ruled that the “right to be forgotten” — strictly speaking, the right to be delisted from search results — cannot be imposed upon Google or other search engines. As a post on Global Voices explains:
According to judiciary rapporteur Nancy Andrighi, the ruling stated that forcing search engines to adjudicate removal requests and remove certain links from search results would give too much responsibility to search engines, effectively making them into digital censors.
We don’t know the details of the case, which was held “under secrecy of justice” according to the article. But the Global Voices post points out that there’s another important “right to be forgotten” decision coming up in Brazil, this time from the country’s top court:
Brazil’s Supreme Court — which is a higher court than the STJ — will soon hear a different case on the right to be forgotten involving TV Globo, Brazil’s largest TV network. The case is brought by relatives of Aida Cury, an 18-year-old girl who was brutally raped and assassinated in 1958, in a case that was never resolved. In 2008 TV Globo broadcasted a story on the case. The relatives sued the network, arguing that the story ‘unearthed a painful time for the family’ and their lawyers invoked the thesis of the “right to be forgotten”.
If Brazil’s Supreme Court joins the STJ in refusing to acknowledge a “right to be forgotten” here, this would place the country at odds with South Korea, which has decided to follow the EU in introducing this new right. If nothing else, that discrepancy would demonstrate that it is not a foregone conclusion that other jurisdictions will adopt this particular European innovation.
As you know, last week, large chunks of the internet spent hours writhing on the ground and totally inaccessible thanks to a giant DDoS attack that appears to have been launched via a botnet involving insecure DVR hardware (which can’t be patched — but that’s another post for later). Of course, whenever this kind of thing happens, you know that some people on the politics side of things are going to come up with dumb responses, but there were some real whoppers on Friday. I’m going to focus on just two, because I honestly can’t decide which one of these is dumber. I’ll discuss each of them, and then you guys can vote and let us know which one you think is dumber.
First up, we’ve got Marsha Blackburn, who is not just a member of Congress, but (incredibly) on the House Subcommittee on Communications and Technology, which is often considered to be the subcommittee that handles internet-related issues. We’ve written about her quite a few times before, highlighting her efforts to block broadband competition and gut net neutrality. She’s also argued that fair use is just a buzzword and that we need stronger copyright laws. Not surprisingly, she was one of the most vocal supporters of SOPA, and only finally agreed to dump the bill days after the giant online protest.
And apparently she’s still upset about all that.
On Friday she went on CNN to discuss a variety of things. The first question from Wolf Blitzer was about the DDoS attacks, and her answer is the sort of nonsense word salad that is becoming all too common in politics these days, one in which she appears to suggest that if we’d passed SOPA, this kind of attack wouldn’t have happened. She’s not just wrong, she’s incredibly clueless.
Here’s what she said:
Wolf, you don’t know who is behind this, you do not know if it’s foreign or domestic. What I do know is over the years we have tried to pass a data security legislation. There’s been bipartisan agreement in the House. It has not moved forward in the Senate. We also know that a few years ago we tried to do a bill called SOPA in the House which would require the ISPs to do some governance on these networks and to block some of the bad actors.
And of course, there were all of the cyberbots that took out after us that were trying to say ‘no you can’t do that you’re going to impede our free speech.’ We said ‘no we’re trying to keep the roadway clear and to keep some of these bad actors out of the system.’
So, what you have now, whether it is foreign or domestic, no one knows. No one knows who has released some ransomware, spyware, malware into the system that is cau… and bear in mind also this malware can live on your system for a year or much longer before it is detected.
And that is how you’ve had some of these extensive data breaches because the malware gets into the system, it rests there, it is pulling information and at some point, it activates. And as I tell my constituents, be careful what websites you go to, be careful what emails you open because you may be unintendedly inviting that malware or spyware into your system.
Okay, so. Almost nothing that is said above has anything to do with the DDoS attack. Not at all. Not the data security legislation, which is basically about requiring companies to reveal breaches to those impacted. And most certainly not SOPA, which had nothing whatsoever to do with cybersecurity, online attacks or DDoS. And “cyberbots”? Is she implying that the millions of people who spoke out against SOPA were some sort of fake bots? SOPA wouldn’t have done anything to stop this kind of attack at all. It had nothing to do with this issue in any way, shape or form. Not that Wolf Blitzer seems to know or care about any of that, as he just accepts the answer and moves on.
So that’s the first dumb response. Now the second: the IANA transition. We’ve been discussing this for years, and as we’ve explained, the transition is a good thing: by dropping an almost entirely superficial connection between the fairly minor IANA function and the US Commerce Dept., it takes an argument away from countries like Russia and China that have been trying to get more control over internet governance. The transition happened a few weeks ago and nothing on the internet has changed, nor will it, because of this transition. It’s a non-story. But Ted Cruz tried to make it a story, and now it’s become a partisan thing for no good reason at all. And thus, given an opportunity, partisan sites are blaming the IANA transition for the DDoS:
Today there was a major attack on a part of the Internet that few people pay any attention to. It’s critically important though, and any disruption threatens both our prosperity as Americans, but also our freedom to communicate with each other.
This is a great reminder of why President Obama’s Internet handover plans are so threatening to our way of life.
Probable foreign attackers effectively took thousands of companies off of the Internet today by attacking a major Domain Name Service (DNS) provider: Dyn. This two-hour outage surely cost many people, very much money.
What is DNS, and why is it so important? Put simply, DNS is the system that tells people how to find you online. It converts the names of servers and sites, into numbers that the Internet Protocol can find. It’s an essential service of the commercial Internet.
And yet Barack Obama is trying to hand control of DNS over to the Chinese and the Russians. Ted Cruz has been warning people about this, and so have I. People tend to tune it out, because it sounds like a very technical, obscure issue that isn’t very important.
Well, first of all, newsflash: the transition happened three weeks ago, and Neil Stevens at Red State is so concerned about this that he didn’t even notice. Damn. Sneaky Obama. Second, the handover of the IANA functions has absolutely nothing to do with a DDoS attack or what it would take to prevent one. Yes, there are some ridiculous aspects to the DNS system, some of which are managed by ICANN. But (1) the IANA transition has nothing to do with “handing control” over to the Chinese or Russians (in fact, it’s the opposite — it takes a big argument away from the Russians and Chinese that they had been using to try to seize more control, and actually makes it much more difficult for them to take control by making sure nation-states have very little say in internet governance). And (2) the IANA transition has fuck all to do with DDoS attacks.
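The Red State quote does at least describe DNS correctly: it is the lookup layer that maps human-readable names to the numeric addresses the Internet Protocol actually routes on, which is exactly why knocking out a big DNS provider like Dyn makes otherwise-healthy sites unreachable. As a minimal sketch using Python’s standard-library resolver (“localhost” is used here so it works without any network access):

```python
import socket

def resolve(hostname: str) -> str:
    """Return the IPv4 address for a hostname, as the resolver sees it."""
    # socket.gethostbyname performs the same name-to-number lookup that
    # a browser does before it can open a connection to a site.
    return socket.gethostbyname(hostname)

# "localhost" resolves locally; resolve("example.com") would instead go
# out to DNS servers like the ones Dyn operates.
print(resolve("localhost"))  # 127.0.0.1 on virtually every system
```

When a DDoS takes a name’s authoritative DNS servers offline, that lookup fails, and the site’s servers — still running, still reachable by raw IP — effectively vanish for anyone who only knows the name.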
Both of these examples involve completely clueless, technically illiterate people taking real problems (the fragility of DNS systems, the massive unsecured bot-infested systems out there, the ease of taking down important systems, overly centralized critical systems) and using them to pitch some entirely separate personal pet complaint or project. Both are completely ignorant. The only question is which one is worse:
There have been plenty of really bad ideas coming out of France in the digital arena recently, as Techdirt has been reporting. So it makes a pleasant change to be able to write about a new law that has quite a few good ideas. One reason for that might be that the French government used a form of crowdsourcing to help shape what is known as the “Digital Republic Bill”:
The bill’s text was drafted following a ground-breaking co-construction process in the form of a massive nationwide consultation initiated by the Prime Minister in October 2014. In all, there were more than 4,000 contributions from businesses, government departments and individuals which were received, summarised and examined by the Conseil national du numérique (French Digital Council) which presented its findings and proposals to the government on 18 June 2015.
The first part of the bill is about data and knowledge dissemination, and aims to support openness in various forms. For example, it requires government departments to make information about their activities more easily accessible, and extends the scope of open government rules to include not just central and local government, but also public and private legal entities that operate under a public service mandate. The aim is to provide more government documents online as a matter of course, so that citizens don’t need to make formal requests for them — a sensible approach others should emulate. In addition, the French government will be:
expanding the open data policy to include public and private entities, public service concession holders or entities whose activities are subsidised by the public authorities.
The bill hopes to boost open access to research by creating a new “secondary exploitation right” for academics, under which:
the author may make his/her creation publicly available after an embargo period of 12 months from the first publication of scientific works, as long as this is not for commercial purposes. The timeframe will be 24 months for human and social science works as publishers in this sector do not have such a comfortable financial situation.
That’s rather disappointing, since it enshrines long embargo periods even for non-commercial use. While the open access provisions are timid, an attempt to prevent the digital commons from being enclosed is much bolder. The French government proposes to set up:
a common information domain extending to the intellectual public domain, which has not yet been clearly defined, and which is not restricted to intellectual property as it includes elements such as information and ideas that are not protected by intellectual property rights, but still require protection against other exclusive rights.
It’s not really clear what that will mean in practice, but it’s good the French government is working on this issue. Also welcome, but more conventional, are moves to enshrine net neutrality in French law, and to require data transferability so that customers can easily retrieve and transfer their data between competing online services. Another novelty concerns online reviews. The Digital Republic Bill:
obliges websites publishing online reviews to expressly state whether said reviews have been verified. It stipulates that if websites do verify reviews, they have to clearly specify the main procedures. This new obligation will allow consumers to decide for themselves how much trust they can place in the reviews available and, thus, in the website which publishes them.
A large portion of the new bill seeks to enhance the protection of personal data. It enshrines a general right of individuals to decide how their personal data is communicated and used, and strengthens the powers of France’s data protection agency, the CNIL. As a post on the Data Protection Report explains:
the bill would increase the amount of monetary sanctions that the CNIL can impose for privacy violations, which reflects the relevant sanction provisions of the future GDPR [the EU’s new data protection law]. The CNIL would thus be able to impose monetary sanctions on a data controller of up to 20 million euros or 4% of its worldwide turnover. The sanction would be limited to 10 million euros or 2% of the worldwide turnover for minor violations of the DPA.
In addition, the bill would authorize certain types of organizations to bring data protection class actions on behalf of consumers in the event of a data breach. This would apply to:
(a) associations protecting privacy and personal data; (b) consumer protection associations; (c) trade unions, when the processing affects employees; and (d) any association created for the sole purpose of filing the class action.
Finally — in all senses — one forward-thinking element of the new Digital Republic Bill is that it will give people the right to make arrangements for the storage and communication of their personal data after death:
People will be able to send instructions concerning the treatment of their personal data to the CNIL or to a data controller, and may appoint a person responsible for carrying out these instructions.
Moreover, ISPs will have to inform the user about what will happen to this data after his/her death and let him/her choose whether or not to transfer it to the third party of his/her choice.
As the population of Internet users ages, this is likely to become a major issue. It’s good to see France tackling it head-on with the Digital Republic Bill — one of the few countries to do so. The proposed law now passes to the French Senate, but is unlikely to undergo any major modifications there, not least because it has already been subject to unusually wide consultation thanks to the innovative approach used in drawing it up.
After the US/EU “safe harbor” on data protection was tossed out thanks to NSA spying being incompatible with EU rights, everyone tried to patch things up with the so-called “Privacy Shield.” As we noted at the time, as long as the NSA’s mass surveillance remained in place, the Privacy Shield agreement would fail as well. This wasn’t that difficult to predict.
And there are already some challenges to the Privacy Shield underway, including by Max Schrems, who brought the original challenge that invalidated the old safe harbor. But things may have accelerated a bit this week with the story of Yahoo scanning all emails. This news has woken up a bunch of EU politicians and data protection officials, leading to some serious questions about whether it violates the Privacy Shield agreement.
Johannes Kleis, a spokesman with BEUC, an umbrella group for European consumer organisations, called on other EU data protection authorities to investigate Yahoo.
Fabio de Masi, a German member of the European parliament with the leftist Die Linke party, called on the EU high representative for external affairs, Federica Mogherini, to seek clarification from US authorities about the treatment of EU data.
While some keep arguing that the whole idea of a safe harbor or privacy shield is a problem, that’s not really true. Enabling easy data flows between countries is really important to keeping the internet global. This is a serious issue. The problem is the NSA’s surveillance activities undermining all of this, and continually (rightfully) freaking out people in other countries about what happens to data that flows into the US. The answer is not to dump agreements that enable the free flow of data, but to stop mass surveillance activities.
Once again, it appears that overly aggressive mass surveillance by the US intelligence community is creating massive headaches for American internet companies.
The EU’s “Cookie Law” is a complete joke and waste of time. Intended to regulate privacy in the EU, all it’s really served to do is annoy millions of internet users with little pop-up notices about cookie practices that everyone just clicks through to get to the content they want to read. The EU at least recognizes some of the problems with the law and is working on a rewrite... and apparently there’s an interesting element that may be included in it: banning encryption backdoors. That’s via a new report from European Data Protection Supervisor (EDPS) Giovanni Buttarelli, who was put in charge of reviewing the EU’s ePrivacy Directive to make it comply with the new General Data Protection Regulation (GDPR) that is set to go into effect in May of 2018. The key bit:
The new rules should also clearly allow users to use end-to-end encryption (without ‘backdoors’) to protect their electronic communications.
Decryption, reverse engineering or monitoring of communications protected by encryption should be prohibited.
In addition, the use of end-to-end encryption should also be encouraged and when necessary, mandated, in accordance with the principle of data protection by design.
To be clear, this actually seems like it may go too far. There are plenty of situations where it seems completely reasonable for law enforcement to use other means to figure out how to decrypt communications. Arguing that this should be completely outlawed seems a bit extreme. But blocking backdoors does seem like a good idea. The report also says that the use of end-to-end encryption should be encouraged, to the point of being mandated in some cases:
In addition, the use of end-to-end encryption should also be encouraged and when necessary, mandated, in accordance with the principle of data protection by design. In this context the EDPS also recommends that the Commission consider measures to encourage development of technical standards on encryption, also in support of the revised security requirements in the GDPR.
The EDPS further recommends that the new legal instrument for ePrivacy specifically prohibit encryption providers, communications service providers and all other organisations (at all levels of the supply chain) from allowing or facilitating ‘back-doors’.
Conceptually, this sounds good, but the implementation matters. Mandating encryption seems to be going a bit far. While I tend to think it makes sense for much more widespread use of encryption, it’s not clear why the government needs to get involved here at all. And that includes in the development of such standards. In fact, as we’ve seen in the past, when the government gets involved in creating encryption standards, that seems to be where the intelligence community can slip in their backdoors.
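To make the “no backdoors” property concrete, here is a toy sketch in Python (a one-time pad, chosen only because it needs nothing beyond the standard library; real end-to-end systems use vetted primitives such as AES-GCM): decryption works only with the exact key the two endpoints share, so there is no provider-held second key to compel or leak.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; encryption and decryption are the same operation."""
    assert len(key) == len(data), "pad must match message length"
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor_cipher(message, key)
# With the right key, the plaintext is recovered exactly.
assert xor_cipher(ciphertext, key) == message
# Without the key there is nothing for a provider to hand over: the
# ciphertext alone is consistent with every possible plaintext.
```

A backdoor, by contrast, is precisely a second decryption path that someone other than the endpoints holds, which is what the EDPS proposal would prohibit providers from building in.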
Still, this is certainly an interesting development. Of course, it would also conflict with the UK’s Snooper’s Charter (the “Investigatory Powers Act”), which mandates backdoors for encryption. Though, to be fair, by the time the new rules go into effect, perhaps the UK will no longer be a part of the EU.
Back in October, we noted that it was a really big deal that the European Court of Justice had said that the EU/US Safe Harbor framework violated data protection rules, because it had become clear that the NSA was scooping up lots of the data. The issue, if you’re not aware of it, is that under the safe harbor framework, US internet companies could have European customers and users, with their information and data stored on US servers. Without the safe harbor framework, there are at least some cases where many companies would be forced to set up separate data centers in Europe, and make sure European information is kept there.
Many privacy activists are actually supportive of keeping the data in Europe altogether, but I still think that would be a disaster for lots of internet companies and services — especially smaller ones. The big guys — Google, Facebook, Microsoft, Yahoo, Twitter, etc. — can afford to have separate European data centers. A small company — like Techdirt — cannot. Requiring separate data centers and careful separation of the data would ensure less competition and fewer startups to take on the big guys. That’s a problem.

Beyond that, having those separate data centers could actually lead to even less privacy in the long run. Keeping data in many jurisdictions means that, inevitably, some of those jurisdictions will fall into states that have even worse surveillance and fewer data protections — and it also leaves open the opportunity for different data center setups, which may lead to more vulnerabilities. Remember, when the NSA broke into Google and Yahoo’s datacenters, they were the ones outside the US, which may have had weaker security. And, despite many Europeans not wishing to believe this, many European countries have far fewer restrictions on the kind of surveillance their intelligence agencies are able to do on local data and citizens.
The real issue here is mass surveillance overall. The only real way to fix this is to stop mass surveillance and require intelligence agencies and law enforcement to go back to doing targeted surveillance, using warrants and true oversight. But, instead, the EU and the US keep trying to paper over this by coming up with a new agreement. That agreement was supposed to have been concluded by a fake “deadline” set for yesterday, but after missing that and claiming that progress had been made, a new deal was finally announced a few hours ago, with the ridiculous name “The EU-US Privacy Shield.”
Here’s the key part of the announcement:
Strong obligations on companies handling Europeans’ personal data and robust enforcement: U.S. companies wishing to import personal data from Europe will need to commit to robust obligations on how personal data is processed and individual rights are guaranteed. The Department of Commerce will monitor that companies publish their commitments, which makes them enforceable under U.S. law by the U.S. Federal Trade Commission. In addition, any company handling human resources data from Europe has to commit to comply with decisions by European DPAs.
Clear safeguards and transparency obligations on U.S. government access: For the first time, the US has given the EU written assurances that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms. These exceptions must be used only to the extent necessary and proportionate. The U.S. has ruled out indiscriminate mass surveillance on the personal data transferred to the US under the new arrangement. To regularly monitor the functioning of the arrangement there will be an annual joint review, which will also include the issue of national security access. The European Commission and the U.S. Department of Commerce will conduct the review and invite national intelligence experts from the U.S. and European Data Protection Authorities to it.
Effective protection of EU citizens’ rights with several redress possibilities: Any citizen who considers that their data has been misused under the new arrangement will have several redress possibilities. Companies have deadlines to reply to complaints. European DPAs can refer complaints to the Department of Commerce and the Federal Trade Commission. In addition, Alternative Dispute Resolution will be free of charge. For complaints on possible access by national intelligence authorities, a new Ombudsperson will be created.
The key thing here? The claim that the US “has ruled out indiscriminate mass surveillance on the personal data transferred to the US.” I’m curious about how much bullshit the NSA will be able to sneak under “indiscriminate.” I’m also curious as to what kind of real oversight there will be. The EU Commission and the Department of Commerce will be able to review, but we all know how good the NSA is at hiding what it’s actually doing from oversight bodies. Finally, the “ombudsperson” only matters if they have actual power, and that seems incredibly unlikely.
And as Max Schrems, who brought the original case that took down the safe harbors, is saying (over and over again), as it stands right now, it looks like this new deal will lose again in the EU courts.
And that brings us back to the underlying point. The effort to kill off the safe harbor agreement wasn’t really about the safe harbor agreement at all, but about forcing the hand of the US government (and hopefully European governments as well) to recognize that they need to stop doing mass surveillance. The claim above about no indiscriminate mass surveillance pays lip service to that idea, but there needs to be some real and concrete change to make that happen. And that’s going to take more than the “exchange of letters” between the EU and the US that forms the basis of this deal. It’s going to need actual surveillance reform, not just the “surveillance reform lite” we saw with the USA FREEDOM Act.
Again, I think having the ability to transfer data from the EU to the US is hugely important — which not everyone agrees with. Fragmenting the internet by requiring that data stays in certain countries seems as silly to me as geoblocking content. But the underlying issue here is not about where the data is stored — it’s about mass surveillance. Focusing the agreement on how to allow data transfers without actually tackling how to stop mass surveillance is inevitably a fake solution.
We recently warned about how the new Data Protection Directive in the EU, while written with good intentions, unfortunately appears to both lock-in and expand the whole right to be forgotten idea in potentially dangerous ways. A big part of it is that the directive is just too vague, meaning that the RTBF may apply to all kinds of internet services, but we won’t know for certain until the lawsuits are all finally decided many years in the future. Also unclear are what sorts of safe harbors there may be and how the directive protects against abusing the right to be forgotten for out and out censorship. Unfortunately, many are simply celebrating these new rules for the fact that they do give end users some more power over their data and how it’s used.
But ignoring how these new rules will almost certainly be abused for censorship and to hold internet providers liable for the speech of others is a mistake. Thankfully, the NY Times has a good editorial warning about this very issue:
The most problematic measure would expand what is known as the right to be forgotten, which lets people request that businesses delete personal information that they believe is no longer relevant or is out of date.
It is reasonable to allow people to delete some information, like embarrassing photographs they posted on Facebook. But this right has been used to make it harder to find legitimate information, like old news articles. More than 350,000 Europeans have asked Google to remove links to 1.3 million web pages from search results since the European Court of Justice ruled in May 2014 that people have a right to request such deletions. (The company says it has complied with 42 percent of the requests it has received. People can appeal Google’s decision to privacy regulators and courts.)
The proposed law requires Internet companies like Google to immediately take down information while they decide whether a request for a permanent deletion is warranted. Disturbingly, news organizations and other websites would not have an opportunity to object to those immediate removals and might not even have a chance to protest permanent deletions.
The editorial also notes that the proposed rules don’t make it clear whether the EU expects these rules to apply globally or just in the EU, and that could make a huge difference. As we’ve noted France and Google are currently fighting this fight right now. And the new rules don’t provide any further clarity, which likely means people will push to use them as a sort of global censorship tool.
The end result is the removal of truthful information from the internet, as well as fewer incentives for companies to create useful platforms for free speech. Yes, we know that the standard line is that Europeans value privacy more than free speech, but that’s both too simplistic a response and doesn’t even address the real issue. This new directive is going to be a tool that is abused to silence free speech and punish innovation. That’s not about protecting privacy at all. It’s about out and out censorship.
A few months ago, we noted that the EU was working on its new General Data Protection Regulation and Data Protection Directive — and warned that it was putting free speech and privacy on a crash course. We also had a podcast about this with Daphne Keller, from Stanford’s Center for Internet and Society. While the intentions of the data protection efforts sound good, the actual impact could be quite devastating. The idea is that all these companies are collecting lots of data, and individuals should have more control over what’s collected and how it’s used (and abused). Conceptually, that sounds really valuable. But, in practice it can be a disaster — especially if the people who are focused on privacy/data protection don’t think about or understand the consequences of what they’re doing.
And now, the EU has announced that the new data protection rules have been finalized — and while there’s plenty in there that may be useful, these new rules are almost certainly going to create some new dangerous consequences. A notable concern is that it reinforces this ridiculous “right to be forgotten” concept, and allows for the “erasure” of information. The theory here is that it’s supposed to let you delete old data about you from databases, and you can see why that might make sense. People don’t feel comfortable with, say, old credit information hanging around when it’s no longer relevant. But, as we know, it’s also been interpreted to mean that search engines can be forced to memory hole totally accurate, public information about someone, and that’s now created a vast tool for suppressing free speech. But the new rules more or less double down on that right to be forgotten.
There were fairly simple ways in which the EU could have changed the rules to make them not so problematic, but it ignored those suggestions and kept things troublingly vague, which will almost certainly lead to abuse and the suppression of free speech. A big part of the problem is that it’s not even clear who really is required to obey these right to be forgotten requests, and that’s going to lead to a huge mess:
The GDPR doesn’t tell us whether hosting platforms like Facebook or Twitter are controllers with RTBF erasure obligations. We know that search engines are controllers and thus have RTBF obligations — that was a key holding in the Google Spain/Costeja case. The GDPR doesn’t tell us what other Internet intermediaries will fall in that category. Realistically, I find it hard to imagine DPAs excusing major social networks from erasure obligations, in the long run. But there will be a lot of arguing first.
There are some strong arguments against RTBF obligations for hosts — for example, that they cannot be controllers because they only process content at the direction of a user, who is herself the controller. There are also some widely accepted legal arguments that will, if they prevail, lead to more complicated answers. Following one of them, RTBF would apply to hosts that are too “active” in managing user content, but not to “passive” hosts. Following another argument, hosts would have to erase some content, but not nearly as much as the content that search engines must de-index. (Example: Google may have to remove search results pointing to the Facebook page where I posted about my cousin, but Facebook still won’t have to remove the post from its platform.)
And, this is a big deal. A key part of the new rules is that failing to abide by them can mean fines up to 4% of global revenue. Notice that it’s not just EU revenue, and not profits. That gives the EU tremendous power to force companies to censor the internet globally. I understand that this is being done with good intentions and with privacy in mind, but the potential impact here on basic free speech should be a huge concern.
And while other rules concerning liability for internet services have some protections against abuse, it’s not at all clear if those kinds of protections apply here:
We still don’t know the answer to the €20 million question: Do intermediary liability laws under eCommerce Directive Articles 12-15 apply to RTBF erasure requests? Existing rules under the eCommerce Directive tell Internet companies how to handle removal requests for other legal claims, like defamation. Those rules have real flaws, but they at least build in some protections against legally groundless or abusive attempts to silence online expression. There is no reason to use a whole new process for RTBF claims, so the answer to the question should be yes: eCommerce procedural rules for notice and takedown apply to RTBF erasures. That would mean, among other things, that intermediaries don’t have to take down content until they know the removal request states a valid claim.
The GDPR’s plain language seems to support this answer, but has a loophole that will fuel argument for years. Both GDPR Recital 17 and Article 2.3 say the GDPR is “without prejudice” to “the liability rules of intermediary service providers in Articles 12 to 15” — the eCommerce rules that govern notice and takedown. The problem is, many data protection experts say that the eCommerce “liability rules” are irrelevant, because the GDPR doesn’t technically hold intermediaries liable for the speech of a third party. Following this argument, the “without prejudice” language has no practical consequence. As long as this question is unresolved, intermediaries can’t be certain whether they can use existing eCommerce removal systems, or whether they must develop new tools to implement the troubling new removal process prescribed by the GDPR. Putting faith in the simpler interpretation of Article 2.3, and assuming it excuses an intermediary from following the specific rules described in the GDPR, is an expensive gamble.
Again, the intentions here are good, but the actual impact here may be devastating to the internet. I know it’s easy to dislike big internet companies (even as people make use of their services, often for free, every day), but attacking them in a way that harms their users and their free speech rights seems like a bad bet — and one that likely won’t help develop a next generation of internet services in Europe.
We recently wrote about some concerns with the new data protection rules being set up in Europe. The law is driven by people with good intentions: looking to better protect the privacy of European citizens. Privacy protection is an important concept — but the current plans appear to be so focused on privacy protection that they give very little regard to the unintended consequences of the way they’ve been set up. As we wrote in our last post, Daphne Keller at Stanford’s Center for Internet and Society is writing a series of blog posts raising concerns about how the new rules clash with basic concepts of free speech. She’s now written one about the immensely troubling setup of the “notice and takedown” rules included in the General Data Protection Regulation (GDPR). For years, we’ve been concerned by problematic notice and takedown procedures — we’ve seen the DMCA frequently abused to stifle speech rather than to address genuine copyright claims. But, for some reason, people often immediately leap to “notice and takedown” solutions for any kind of content they don’t like, and the drafters of the GDPR are no different.
Except, it’s worse. Because whoever drafted the notice-and-takedown portion of the GDPR actually made the process worse than the notice and takedown rules found elsewhere. Here’s the GDPR process, as explained by Keller:
1. An individual submits a removal request, and perhaps communicates further with the intermediary to clarify what she is asking for.
2. In most cases, prior to assessing the request’s legal validity, the intermediary temporarily suspends or “restricts” the content so it is no longer publicly available.
3. The intermediary reviews the legal claim made by the requester to decide if it is valid. For difficult questions, the intermediary may be allowed to consult with the user who posted the content.
4. For valid claims, the intermediary proceeds to fully erase the content. (Or, probably, in the case of search engines, de-link it following the guidelines of the Costeja “Right to Be Forgotten” ruling.) For invalid claims, the intermediary is supposed to bring the content out of “restriction” and reinstate it to public view — though it’s not clear what happens if it doesn’t bother to do so.
5. The intermediary informs the requester of the outcome, and communicates the removal request to any “downstream” recipients who got the same data from the controller.
6. If the intermediary has additional contact details or identifying information about the user who posted the now-removed content, it may have to disclose them to the individual who asked for the removal, subject to possible but unclearly drafted exceptions. (Council draft, Art. 14a)
7. In most cases, the accused publisher receives no notice that her content has been removed, and no opportunity to object. The GDPR text does not spell out this prohibition, but does nothing to change the legal basis for the Article 29 Working Party’s conclusions on this point.
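Purely as an illustration of why this ordering is so troubling, the workflow above can be sketched as a toy state machine. All of the names and types here are hypothetical, not anything from the GDPR or any real platform — the point is simply to make the sequence visible: content is hidden before anyone assesses the claim, and the author is never notified in either branch.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PUBLIC = "public"
    RESTRICTED = "restricted"  # hidden while the claim is assessed
    ERASED = "erased"


@dataclass
class Post:
    author: str
    text: str
    status: Status = Status.PUBLIC


def handle_removal_request(post: Post, claim_is_valid: bool) -> Status:
    """Walk a post through the GDPR-style flow described above."""
    # Step 2: restrict first, assess later -- the content goes dark
    # before anyone has judged whether the claim has merit.
    post.status = Status.RESTRICTED

    # Steps 3-4: the intermediary, not a court, judges the claim.
    if claim_is_valid:
        post.status = Status.ERASED    # step 4: full erasure
    else:
        post.status = Status.PUBLIC    # reinstatement (if the host bothers)

    # Step 7: in neither branch is the author notified or consulted.
    return post.status
```

Even in this stripped-down sketch, the asymmetry is visible: the requester drives every transition, while the author of the content appears nowhere in the control flow.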
If you don’t see how that process is likely to lead to widespread abuse and the censorship of perfectly legal speech, you haven’t been paying much attention to the internet over the last decade-plus. To be fair, you can understand why the drafters think this process makes sense. They’re thinking solely about truly problematic and embarrassing information. If, say, your personal medical records have been posted online, it makes sense to look for a way to have that info removed as quickly as possible. But, given how frequently people use these processes in the copyright context to take down content they simply “don’t like” (and how often people admit they do so because it’s the only way to get such content down), you know the GDPR process is going to get massively abused for issues that have nothing to do with privacy protection.
Once again, it seems like regulators focus solely on solving for the “worst case” scenario, with little thought towards how that will be applied in much more common cases, and what that means for free speech and society.
And that’s not all that’s dangerous about the current rules. They also deal a huge blow to anonymous speech and privacy:
A second glaring problem with the GDPR process is its requirement that companies disclose the identity of the person who posted the content, without any specified legal process or protection. This is completely out of line with existing intermediary liability laws. Some have provisions for disclosing user identity, but not without a prescribed legal process, and not as a tool available to anyone who merely alleges that an online speaker has violated the law. It’s also out of line with the general pro-privacy goals of the GDPR, and its specific articles governing disclosure of anyone’s personal information — including that of people who put content on the Internet.
Yes, that’s right. In an effort to protect privacy, the drafters are so focused on a single scenario that they don’t consider how the process will be abused to weaken the privacy rights of others. Want to know who said something anonymously that you don’t like? File a privacy complaint and the service provider is just supposed to cough up their name. Again, given how often we’ve seen bogus defamation claims made solely for the purpose of trying to identify those who speak anonymously, this is a major concern.
There are ways to create a better process for the removal of truly illegal information, but the GDPR simply wipes most of those away in the interest of expediency, trying to head off a worst-case scenario. And the end result may be an even worse situation, in which free speech and privacy rights are broadly stripped away by handing a powerful censorship tool, complete with privacy-destroying elements, to anyone who wants to go after someone else’s speech. I’ve long been in favor of “notice and notice” systems that allow whoever posted content to object before it is taken down, but even with a notice and takedown system, there are much better ways to implement one that at least includes some semblance of due process. In addition, there should be strong penalties for those who abuse these notice and takedown procedures.
As Keller writes:
Notice and takedown laws also exist to protect people who are harmed by online content. But protecting those people does not require laws to prioritize removal with little concern for the rights of online speakers and publishers. A good notice and takedown process can help people with legitimate grievances while incorporating procedural checks to avoid disproportionate impact on expression and information rights. Valuable information that would be gone under existing laws but for these checks — importantly including transparency about what content has been removed — spans religious, political, and scientific content, along with consumer reviews. Crafting the law to better protect this kind of content from improper removal is both important and possible.
It would be nice to see the drafters of the GDPR at least recognize the harm they may be about to cause.
The whole right to be forgotten thing over in Europe continues to get more and more bizarre. Not too long ago, we wrote about one Thomas Goolnik, who had succeeded in getting an old NY Times story about him “delinked” from his name in Europe. The NY Times then wrote about that delinking, and we wrote about the NY Times article. Mr. Goolnik then succeeded in having our article about his successful right to be forgotten attempt also forgotten by Google. So we wrote about that too. And, once again, Goolnik succeeded in having that story forgotten. As of yet, it appears our final story on Goolnik has remained accessible on European searches for Goolnik’s name, but we have no idea if it’s because Google has realized that it should remain up or if Goolnik just hasn’t made a request.
Meanwhile, it appears that the guy who first convinced the European Court of Justice to enforce this right to be forgotten, Mario Costeja Gonzalez, may have run into a similar situation. As you probably remember, Costeja brought the original case arguing that Google should no longer show results, on searches for his name, linking to stories from the late 1990s about his being forced to sell some land to cover debts. The Court eventually decided that since this information was no longer “relevant,” it should, under the data protection directive, be “delinked” from Google’s search results as a “privacy” measure.
Of course, as many people pointed out, in bringing that very case, the details of Costeja’s financial transactions suddenly became relevant again. And, apparently that resulted in more people commenting on Costeja, including an article entitled “The unforgettable story of the seizure to the defaulter Mario Costeja Gonzalez that happened in 1998.” And, as you might imagine, he wasn’t too happy about some of the comments, and with this newfound power that he helped create in hand, he demanded that Google also take down links to such comments (most likely including that article linked in this paragraph).
And here’s where it gets fun: Google refused. And so Costeja went to the Spanish Data Protection Authority to complain… and the Spanish DPA rejected his claim, noting that this information is now relevant in part because Costeja himself made it relevant again.
Now the DPA finds that there is indeed a preponderant interest of the public in the comments about the famous case that gave rise to the CJEU judgment of May 13, 2014 — and expressly reminds that the claimant itself went public about the details.
So, yes, the right to be forgotten has now made the story that was “successfully” forgotten originally so newsworthy that it may no longer be forgotten, and in fact is much more widely known. I think we’ve heard of some term for that before…