EU’s Top Court Just Made It Literally Impossible To Run A User-Generated Content Platform Legally
from the seems-like-a-problem dept
The Court of Justice of the EU (likely without realizing it) just completely shit the bed and made it effectively impossible to legally run any website that hosts user-generated content anywhere in the EU.
Obviously, for decades now, we’ve been talking about issues related to intermediary liability, and what standards are appropriate there. I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.
The EU has always offered weaker protections for intermediaries, first with the E-Commerce Directive and more recently with the Digital Services Act (DSA), which still generally tries to put liability on the speaker but has some ways of shifting it to the platform.
No matter which of those approaches you think is preferable, I don’t think anyone could (or should) favor what the Court of Justice of the EU came down with earlier this week, which is basically “fuck all this shit: if there’s any content at all on your site that includes someone’s personal data, you may be liable.”
As with so many legal clusterfucks, this one stems from a case with bad facts, which then leads to bad law. You can read the summary as the CJEU puts it:
The applicant in the main proceedings claims that, on 1 August 2018, an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number. The advertisement was subsequently reproduced identically on other websites containing advertising content, where it was posted online with the indication of the original source. When contacted by the applicant in the main proceedings, Russmedia Digital removed the advertisement from its website less than one hour after receiving that request. The same advertisement nevertheless remains available on other websites which have reproduced it.
And, yes, no one is denying that this absolutely sucks for the victim in this case. But if there’s any legal recourse, it seems like it should be against whoever created and posted that fake ad. Instead, the CJEU holds that Russmedia can be liable for it, even though it took the ad down within an hour of being notified.
The lower courts went back and forth on this, with a Romanian tribunal (on first appeal) finding, properly, that there’s no fucking way Russmedia should be held liable, seeing as it was merely hosting the ad and had nothing to do with its creation:
The Tribunalul Specializat Cluj (Specialised Court, Cluj, Romania) upheld that appeal, holding that the action brought by the applicant in the main proceedings was unfounded, since the advertisement at issue in the main proceedings did not originate from Russmedia, which merely provided a hosting service for that advertisement, without being actively involved in its content. Accordingly, the exemption from liability provided for in Article 14(1)(b) of Law No 365/2002 would be applicable to it. As regards the processing of personal data, that court held that an information society services provider was not required to check the information which it transmits or actively to seek data relating to apparently unlawful activities or information. In that regard, it held that Russmedia could not be criticised for failing to take measures to prevent the online distribution of the defamatory advertisement at issue in the main proceedings, given that it had rapidly removed that advertisement at the request of the applicant in the main proceedings.
With the case sent up to the CJEU, things get totally twisted: the court argues that, under the GDPR, the inclusion of “sensitive personal data” in the ad suddenly makes the host a “joint controller” of that data. Once the host is a controller, the much stricter GDPR rules on data protection apply, and the more careful calibration of intermediary liability rules gets tossed right out the window.
And out the window, right with it, is the ability to have a functioning open internet.
The court basically shreds basic intermediary liability principles here:
In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.
Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person. But how the fuck are they supposed to handle that?
The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.
How would a platform do that?
¯\_(ツ)_/¯
There is no way that this is even remotely possible for any platform, no matter how large or how small. And it’s even worse than that. As intermediary liability expert Daphne Keller explains:
The Court said the host has to
- pre-check posts (i.e. do general monitoring)
- know who the posting user is (i.e. no anonymous speech)
- try to make sure the posts don’t get copied by third parties (um, like web search engines??)
Basically, all three of those are effectively impossible.
Think about what the court is actually demanding here. Pre-checking posts means full-scale automated surveillance of every piece of content before it goes live—not just scanning for known CSAM hashes or obvious spam, but making subjective legal determinations about what constitutes “sensitive personal data” under the GDPR. Requiring user identification kills anonymity entirely, which is its own massive speech issue. And somehow preventing third parties from copying content? That’s not even a technical problem—it’s a “how do you stop the internet from working like the internet” problem.
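To make that concrete, here is a deliberately naive sketch of the kind of pre-publication filter a platform might attempt. Everything in it (the regexes, the function name, the sample posts) is made up for illustration; neither the ruling nor the GDPR specifies anything like it. Even so, it shows both failure modes immediately: it can’t tell a user posting their own phone number from a harasser posting a victim’s, and most Article 9 categories (health, sexuality, political opinions) don’t pattern-match at all.

```python
# A deliberately naive pre-publication "sensitive data" scanner:
# regexes for phone numbers and email addresses, nothing more.
# Illustrative only -- this is NOT what the GDPR or the ruling defines.
import re

PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def looks_risky(post: str) -> bool:
    """Flag anything that pattern-matches contact details."""
    return bool(PHONE.search(post) or EMAIL.search(post))

# The scanner cannot tell these apart, which is the whole problem:
print(looks_risky("Selling a couch, call me: +40 721 555 019"))   # True  (legit ad)
print(looks_risky("Call Jane for a good time: +40 721 555 019"))  # True  (the Russmedia scenario)
print(looks_risky("Jane is secretly seeing a psychiatrist"))      # False (Article 9 data, missed entirely)
```

Scale that up to every post on every platform and you get exactly the over-blocking (and under-blocking) problem described below.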
Some people have said that this ruling isn’t so bad, because the ruling is about advertisements and because it’s talking about “sensitive personal data.” But it’s difficult to see how either of those things limits this ruling at all.
There’s nothing inherent in the law or the ruling that limits its conclusions to “advertisements.” The same underlying reasoning would apply to any third-party content on any website that is subject to the GDPR.
As for the “sensitive personal data” part, that makes little difference, because sites will have to scan all content before anything is posted to guarantee no “sensitive personal data” is included, and then accurately guess what a court might later deem to be such sensitive personal data. That means it’s highly likely that any website that tries to comply with this ruling will block a ton of content on the off chance that it will be deemed sensitive.
As the court noted:
In accordance with Article 5(1)(a) of the GDPR, personal data are to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Article 5(1)(d) of the GDPR adds that personal data processed must be accurate and, where necessary, kept up to date. Thus, every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay. Article 5(1)(f) of that regulation provides that personal data must be processed in a manner that ensures appropriate security of those data, including protection against unauthorised or unlawful processing.
Good luck figuring out how to do that with third-party content.
And the court is pretty clear that every website must pre-scan every bit of content. It claims this is about “marketplaces” and “advertisements,” but there’s nothing in the GDPR that limits this ruling to those categories:
Accordingly, inasmuch as the operator of an online marketplace, such as the marketplace at issue in the main proceedings, knows or ought to know that, generally, advertisements containing sensitive data in terms of Article 9(1) of the GDPR, are liable to be published by user advertisers on its online marketplace, that operator, as controller in respect of that processing, is obliged, as soon as its service is designed, to implement appropriate technical and organisational measures in order to identify such advertisements before their publication and thus to be in a position to verify whether the sensitive data that they contain are published in compliance with the principles set out in Chapter II of that regulation. Indeed, as is apparent in particular from Article 25(1) of that regulation, the obligation to implement such measures is incumbent on it not only at the time of the processing, but already at the time of the determination of the means of processing and, therefore, even before sensitive data are published on its online marketplace in breach of those principles, that obligation being specifically intended to prevent such breaches.
No more anonymity allowed:
As regards, in the second place, the question whether the operator of an online marketplace, as controller of the sensitive data contained in advertisements published on its website, jointly with the user advertiser, must verify the identity of that user advertiser before the publication, it should be recalled that it follows from a combined reading of Article 9(1) and Article 9(2)(a) of the GDPR that the publication of such data is prohibited, unless the data subject has given his or her explicit consent to the data in question being published on that online marketplace or one of the other exceptions laid down in Article 9(2)(b) to (j) is satisfied, which does not, however, appear to be the case here.
On that basis, while the placing by a data subject of an advertisement containing his or her sensitive data on an online marketplace may constitute explicit consent, within the meaning of Article 9(2)(a) of the GDPR, such consent is lacking where that advertisement is placed by a third party, unless that party can demonstrate that the data subject has given his or her explicit consent to the publication of that advertisement on the online marketplace in question. Consequently, in order to be able to ensure, and to be able to demonstrate, that the requirements laid down in Article 9(2)(a) of the GDPR are complied with, the operator of the marketplace is required to verify, prior to the publication of such an advertisement, whether the user advertiser preparing to place the advertisement is the person whose sensitive data appear in that advertisement, which presupposes that the identity of that user advertiser is collected.
Finally, as Keller noted above, the CJEU seems to think platforms can be required to make sure content is never copied onto any other site:
Thus, where sensitive data are published online, the controller is required, under Article 32 of the GDPR, to take all technical and organisational measures to ensure a level of security apt to effectively prevent the occurrence of a loss of control over those data.
To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content.
Again, the CJEU appears to be living in a fantasy land that doesn’t exist.
This is what happens when you over-index on the idea of “data controllers” needing to keep data “private.” The liability should land on whoever revealed the sensitive data. Putting it on the intermediary is misplaced and ridiculous.
There is simply no way to comply with the law under this ruling.
In such a world, the only options are to ignore it, shut down EU operations, or geoblock the EU entirely. I assume most platforms will simply ignore it—and hope that enforcement will be selective enough that they won’t face the full force of this ruling. But that’s a hell of a way to run the internet, where companies just cross their fingers and hope they don’t get picked for an enforcement action that could destroy them.
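For what it’s worth, geoblocking is the only one of those options that’s technically straightforward. Here’s a minimal sketch, assuming MaxMind’s geoip2 Python library and a downloaded GeoLite2-Country database; the file path and the default for unknown addresses are my own assumptions, not anything any platform actually does:

```python
# Minimal sketch of geoblocking the EU/EEA, assuming MaxMind's geoip2
# library and a local GeoLite2-Country.mmdb file (path is hypothetical).
import geoip2.database
import geoip2.errors

EU_EEA = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",   # the EU 27
    "IS", "LI", "NO",                           # EEA additions
}

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def should_block(ip: str) -> bool:
    """True if the request appears to originate in the EU/EEA."""
    try:
        return reader.country(ip).country.iso_code in EU_EEA
    except geoip2.errors.AddressNotFoundError:
        # Unknown origin: whether to fail open or closed is a legal call.
        return False
```

Crude, trivially evaded with a VPN, and a lousy outcome for EU users, but it’s the kind of thing a risk-averse platform might actually reach for.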
There’s a reason why the basic simplicity of Section 230 makes sense. It says “the person who creates the content that violates the law is responsible for it.” As soon as you open things up so that the companies merely providing the tools can be liable too, you’re opening a can of worms that will create a huge mess in the long run.
That long run has arrived in the EU, and with it, quite the mess.
Filed Under: cjeu, controller, data protection, dsa, gdpr, intermediary liability, section 230, sensitive data, user generated content
Companies: russmedia


Comments on “EU’s Top Court Just Made It Literally Impossible To Run A User-Generated Content Platform Legally”
Not to mention this butts up against the EU’s own rules banning general monitoring requirements, creating quite the ouroboros situation.
Good luck figuring that out EU.
Re: Every problem is easy when you're not the one who has to solve it
Oh not at all, they just need to say that online platforms must ID every user and scan every submission before allowing them to post and do so in a way that doesn’t involve general monitoring of users.
If the platforms just nerd harder I’m sure they can do it; the only reason they haven’t is that they’re lazy and greedy and need a little nudge/threat from the legal and/or political system(s).
Since git commits contain emails (personal info) and usually names, I would guess this means that git forges would be problematic in the EU too.
Especially since forging a git commit isn’t hard. And open source communities often apply patches from other people (where the committer is not the author).
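To illustrate (a minimal sketch; it assumes only that git is installed and that you run it inside some repository):

```python
# Minimal sketch: git commit metadata is unverified personal data.
# Assumes git is installed and the working directory is a repository.
import subprocess

# Every commit exposes its author's name and email to anyone who clones.
log = subprocess.run(
    ["git", "log", "--format=%an <%ae>"],
    capture_output=True, text=True, check=True,
)
print(log.stdout)

# "Forging" authorship is one flag: git records whatever you claim.
subprocess.run(
    ["git", "commit", "--allow-empty",
     "--author=Someone Else <not.their.real@example.com>",
     "-m", "a commit attributed to a person who never wrote it"],
    check=True,
)
```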
Also, the Linux kernel’s MAINTAINERS file has other contact info.
Basically maybe the EU should just disconnect itself from the internet to ensure compliance.
This is mind-bendingly stupid. It burns. It burns so much I think a call to Red Adair is in order.
The rules for processing PII are for (checks notes)… processing! Some user posting something isn’t PII the site asked for and processed. And the further rules this court generated in order to find a way to hold a company liable are somehow worse.
It’s like they are trying to make more Euroskeptics out of people who believed in the mission.
Re:
Spelling it “Euroskeptics” instead of “Eurosceptics” is amusing, in the same way that “Americanised” instead of “Americanized” is amusing. I’m not criticising, I just have an odd sense of humour.
The EU government doesn't know how the internet works.
If you are an artist or any other content creator using a website based in the EU, or a worldwide site like YouTube using servers subject to EU law, you’ll feel the effects directly.
These laws/bills might as well directly ban UGC.
Re: new found territory
Just remember German Chancellor Angela Merkel saying “The internet is new territory to all of us” in the 2010s(!)
She was and is not alone, it appears.
Re:
To be clear, it’s not a law or a bill; it’s a court ruling, which reinterprets the existing law (the GDPR) to mean something it was never intended to.
Well this is a problem. I was going to move my infra to Scaleway’s AMS1 data center. Mainly because their elastic metal offerings are really good. But this ruling is a big problemo. 😛 EU, please stop making nonsensical rulings like this…
Simplicity is great if all you care about is keeping liability from going overboard, and you’re willing to sacrifice the cases where the host contributes to the violative behavior. That’s also one hell of a way to run the internet.
Doing the law properly often isn’t simple, because life is messy. Determining who contributed to something is fundamentally a messy endeavor. The CJEU seems to have screwed up here, though.
Under the simplicity of 230, it wouldn’t have to take it down at all. (Never mind that an anonymous speaker probably wouldn’t be found, since they’re, you know, anonymous.)
It’s more likely to be the opposite problem. The GDPR defines personal data extremely broadly. If it’s an “off chance”, odds are it’s actually included under GDPR.
Re:
Section 230 doesn’t make platforms immune; it just makes quick(er) work of cases where they clearly aren’t liable for the content. Do you have any examples of cases where the platform contributed to the violative behaviour and wasn’t punished?
But it might have to under the platform’s own terms.
Re: Re:
It does both. Section 230 makes them immune, even if they take actions that would (without 230) otherwise lead to publisher liability. In cases where they would be immune anyway (due to the 1A or whatever), it just acts to speed things along. But while there is a lot of overlap between alternate protections like the 1A and 230, it isn’t a perfect circle, which is why publisher liability exists at all in print.
You do not need to take my word for it. Here it is from Eric Goldman, a 230 expert who regularly defends 230 on Techdirt: “The ruling also made clear that Section 230 protects websites’ decisions about publishing, editing, or removing third-party content, activities that, in the offline world, would ordinarily dictate a publisher’s liability for that content.” Another: “Thus, like Zeran, the opinion shows that Section 230 equally protects passive conduits and editorially controlled publications against claims based on third-party content.” As the Blumenthal court said, “[P]laintiffs’ argument that the Washington Post would be liable if it had done what AOL did here … has been rendered irrelevant by Congress.” Mike (also here), and others (Cathy here), have explicitly said this as well.
You can also read it directly in court decisions like Zeran v. AOL (the very first case that Mike is referencing). It very explicitly says: “Specifically, § 230 precludes courts from entertaining claims that would place a computer service provider in a publisher’s role. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred. … Congress made a policy choice, however, not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries.” (There is also a longer quote I’m cutting for brevity, but it starts with “The terms ‘publisher’ and ‘distributor’…” and ends with “AOL falls squarely within this traditional definition of a publisher and, therefore, is clearly protected by § 230’s immunity.”)
Off the top of my head: Blockowicz v. Williams and Jones v. TheDirty (summaries from Eric Goldman here; there are also some conflicting cases between circuits listed there as well). Of course, that list also doesn’t include cases that were never brought because they were precluded by 230.
Now, these aren’t most cases. In order for it to be an edge case, the site has to be acting as a traditional publisher, involved enough in the content to be similar to a print publisher (i.e., it can’t just be moderating in general or curating content via algorithm; it also has to have direct knowledge, etc.), but not so involved that it’s considered first-party speech. (It also has to not be 1A protected.) In most cases, sites are not engaging as a normal publisher would, and so it’s ridiculous to hold them to publisher liability.
If you’re wondering what the distinction is, it’s whether the publisher contributed to the behavior (by e.g. knowingly spreading it as a publisher), but didn’t actually create it themselves.
Re: Re:
No, they have a long history of pathologically lying about it.
Re: Re: Re:
There are examples listed above. And this isn’t unique to me, people like Goldman/Mike also acknowledge those examples. Where we differ is whether it’s worth losing the simplicity of 230 in order to account for them or not. Not whether they exist.
Re:
“then accurately guess what a court might later deem to be such sensitive personal data. That means it’s highly likely that any website that tries to comply with this ruling will block a ton of content on the off chance that it will be deemed sensitive.”
“It’s more likely to be the opposite problem. The GDPR defines personal data extremely broadly. If it’s an “off chance”, odds are it’s actually included under GDPR.”
I think that’s exactly the point Mike was making. A ton of content is likely to be included under the GDPR, with the result that people running websites will block a ton of it to avoid potential legal troubles.
(Of course, as long as the EU has legal norms that require everyone who wants to state their opinion or sell stuff online to provide their personal contact information, it’s completely ridiculous for the EU to pretend to care the slightest bit about protecting people’s privacy. But I digress.)
This is definitely going to be weaponized!
So, can we expect the internet in the EU to cease existing soon?
How to kill social media platforms in your area in one simple step...
Well, that’s certainly one way to ensure that no new online platforms for user-submitted content crop up in the EU, not to mention providing a very strong case for current platforms to just geo-block the entire area as too legally risky, since “hope they don’t apply the new insanely stupid ruling to your platform” is not a viable long-term strategy.
Re:
I’d imagine they’d rather comply than geo-block, sadly.
Re: Re:
They can’t really reliably comply if they want to keep allowing users to post stuff. Which they’ll have to, if that’s what their websites are all about.
So with this ruling, is the CJEU leaning in favor of allowing ChatControl in some fashion, if it insists on having platforms pre-vet their users’ content?
Re:
No platform has the resources to scan literally everything all the time.
Just noticed this part...
Just noticed this part, which I had overlooked at first:
“To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content.”
Are they going to make it mandatory to run your websites with the help of perpetual motion machines next? Provide the Elixir of Youth to all your users?
Re: this may present a challenge
Consider, if you will, a web browser. Its job is to obtain a copy of the online content and reproduce it on your screen.
I agree. But then Section 230, according to this article, says “the person posting is responsible for the content posted.”
How does anyone identify the person to assign responsibility, if content is posted anonymously?
While I agree with Section 230, there absolutely should be better mechanisms to remove this type of content.
Re:
That’s a problem for law enforcement. There are various ways to deanonymise users, e.g. subpoena the website operator for the user’s IP address from their access logs. Website operators don’t have any obligation to make law enforcement’s job easier, beyond those obligations defined by law, and banning anonymous content isn’t one of them.
Re: Re:
230 affects civil liability, not criminal. So it’s not a problem for law enforcement. But even if it were, that just kicks the can down the road. A problem for law enforcement is also a problem for society as a whole, if we want law enforcement to actually meaningfully enforce laws.
Saying “just charge the speaker”, while knowing law enforcement can’t find the speaker, is really just saying the speaker can’t be charged either, in practice. It just sounds more reasonable than saying it outright.
If those methods exist, they aren’t truly anonymous. But more practically, there are means to avoid those methods. For instance, using a VPN (one that doesn’t log) to hide someone’s real IP address.
Re: Re: Re:
Yep. The secret that supporters of current intermediary-liability law don’t want to talk about is that the vast majority of victims have no true recourse. And the supporters think that’s fine and dandy, a cost of doing business. And when even the most reasonable critics bristle at the state of affairs that’s been created, the supporters go “Do you WANT the Free and Open Internet (always that term) to die?”
Re:
You don’t, if they’re actually anonymous. This is an intentional two-step.
I expect that websites hosting UGC will start adding a checkbox to each submission, confirming that the user promises that, if their content includes anybody’s sensitive personal data, they have that person’s consent to post it to the site (a rough sketch of what that might look like is below). The fact of that checkbox being ticked is then possibly enough evidence for the website owner to show that they have a legal basis for “processing” that data.
Well, I expect that, although it isn’t quite consistent with this ruling, given the passage quoted above requiring the operator to verify that the user placing the content is the person whose sensitive data appear in it.
Which is absolutely flat-out bonkers, because it implies, for instance, that a porn site may only host videos starring one actor, and that the video cannot be uploaded by anyone other than the actor themselves.
If it causes enough of a mess, the GDPR might be amended or repealed; they’re already discussing amendments (although I’m not sure those would deal with this).
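For what it’s worth, here’s a minimal sketch of that checkbox idea (assuming a Flask app; the route, the field names, and store_post() are all hypothetical):

```python
# Minimal sketch of the consent-attestation checkbox idea above.
# Assumes Flask; route, field names, and store_post() are hypothetical.
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/submit", methods=["POST"])
def submit():
    # An HTML checkbox submits "on" when ticked and is absent otherwise.
    # The ticked box is the site's (thin) evidence of a legal basis for
    # "processing" any personal data the post happens to contain.
    if request.form.get("consent_attestation") != "on":
        abort(400, description="You must confirm you have consent to post "
                               "any personal data this content contains.")
    content = request.form.get("content", "")
    # store_post(content)  # hypothetical persistence step
    return "posted", 201
```

Whether a court would accept that attestation as evidence of “explicit consent” under Article 9(2)(a) is, of course, exactly the open question.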
How does the victim find the person who posted the fake ad? How do they do it? No, seriously, what do you actually want victims to do in response to stuff like this, where finding the person ranges from difficult to impossible?
Because it really feels like y’all are fine with victims being unable to find legal recourse, bad stuff like this slipping through the cracks, as long as your preferred intermediary liability standards stay in place.
Re:
There are methods to seek to de-anonymize someone through legal due process. In this case, you could have a subpoena issued for the identity, which the other side could oppose, letting a court sort out whether it should be revealed.
People claiming it’s impossible are lying.
Re: Re:
The pay-to-play nature of the system is another facet of what makes it difficult-to-impossible. You only get justice if you have the money to bring that court challenge to its conclusion and the court rules in your favor at the end of it.
Re:
Anonymous Coward:
Exactly what happened in this case. They complain to the host, who takes it down in less than an hour. The whole point of safe harbour protections for ISPs is to protect their ability to do moderation like this, without taking on legal responsibility for other people’s speech via their service.
Despite all the pearl-clutching, there isn’t actually a problem to be solved. Let alone one that justifies breaking the practicality of hosting third-party postings, and making it impossible to post anonymously, a freedom you just used but clearly take for granted.
Oh dear, again? I knew I shouldn’t have bothered getting a 2 for my “Days since Mike says someone has destroyed the internet” sign.