Applying (Artificial) Intelligence To The Copyright Directive’s Stupid Idea Of Upload Filters
from the no-intelligence-here dept
Last week the European Union’s top court, the Court of Justice of the European Union (CJEU), handed down its judgment on whether upload filters should be allowed as part of the EU Copyright Directive. The answer turned out to be a rather unclear “yes, but…”. Martin Husovec, an assistant professor of law at the London School of Economics, has published an opinion piece exploring the ruling, which he sums up as follows:
The Court ruled this week that filtering as such is compatible with freedom of expression. However, it must meet certain conditions. Filtering must be able to “adequately distinguish” when users’ content infringes a copyright and when it does not. If a machine can’t do that with sufficient precision, it shouldn’t be trusted to do it at all.
The problem is deciding whether implementations of the upload filters do indeed “adequately distinguish” between legal and infringing material. As Husovec notes, both the CJEU and the EU Member States have tried to make this tricky problem someone else’s. That’s hardly surprising, since it is far from obvious how to resolve the issue of allowing filtering but only if it respects legal use of copyright material. However, Husovec offers a way forward with some concrete proposals:
Filters should be subjected to testing and auditing. Statistics on the use of filters and a description of how they work should be made public.
Consumer associations should have the right to sue platforms for using poorly designed filters. Some authorities should have oversight of how the systems work and issue fines in the event of shortcomings.
Husovec notes a neat way to bring in those requirements without wading back into the swamp that is the Copyright Directive. He suggests using the EU’s new AI Act, currently under discussion, as a vehicle to impose safeguards on upload filters, which will inevitably be based on algorithms, and could thus be subject to the artificial intelligence legislation if policymakers added them.
It’s a good approach. Given that the CJEU has approved the stupid idea of upload filters, the least we should do is to apply a little (artificial) intelligence to how they will operate.
Originally published to WalledCulture.
Filed Under: ai, article 17, copyright directive, eu, upload filters
Comments on “Applying (Artificial) Intelligence To The Copyright Directive’s Stupid Idea Of Upload Filters”
Will the filters also be expected to protect users’ copyrights, or will they only be a tool to protect the copyrights held by labels, studios and publishers? If the latter, then the filters will be unbalanced in favor of corporate interests.
Re: AIs improving copyright filters
The most efficient copyright filter is one that denies everything.
Because they will be used in the EU, there is not necessarily law preventing an AI from holding copyright on a work. And given that an AI will operate at a speed much greater than you or I, they will be “first to file” at the various copyright offices. We’re about to see a rush of AI-owned copyrights based on upload filter misses … if it isn’t copyrighted already, the AI will file the copyright itself. And because it then is copyrighted, the AI will refuse to allow it to be stored.
Even if the AI is terminated immediately, copyright will last 90 years or so past that termination, and will pass on to the successors of that AI.
So we’re also on the verge of AI “children” litigating over their “parents” copyright portfolios…
Re: Re:
The only copyrights in that scenario are on works output by a computer in response to instructions input by a human, in which case the copyright is held by that human or the organisation for which they work (depending on circumstances). If a computer generates a work without direct input, there is no copyright on it.
We have seen with the DMCA that the programs used seem to favor big media corporations, and have been used to take down videos that should be fair use, e.g. reviews of films and TV programs, and any videos that feature classical music that is in the public domain. It’s unlikely that filters will be any better, especially as there are millions of people making videos and audio podcasts using basic equipment like PCs and laptops. It’s no longer just large media corporations making media that is seen by millions of people.
Even if someone makes a super-efficient, workable filter, will it be available for use by websites and companies smaller than YouTube, Facebook, or TikTok?
The only winning move is not to play
*Consumer associations should have the right to sue platforms for using poorly designed filters.*
I guess it’s a reasonable solution if the goal is to remove content entirely.
Crippling liability if they do it wrong and allow copyrighted material through, and now additional crippling liability if they do it wrong and block other material.
Re:
And thus their real agenda is revealed. The old salt of their irrelevant but connected old publishers. The EU as a whole (or at least its most prominent members) are infamous for their “felony interference with a business model” claims as politically they appear to honestly believe in such asinine entitlements.
Perhaps suing the governments would be a better option, for demanding filters that are never going to work.
Re:
It has been tried and the effort was rebuffed by the highest EU court.
So we’re going back to letting the bots decide, and insisting that in the event someone innocent gets nailed, give the copyright enforcement people full license to the excuse “but the bot made us do it!” Because the ContentID approach clearly never backfired or anything.
The single upside to all this is that maybe, enough corporations or celebrities might be inconvenienced that they speak out against this, but it won’t happen for at least a while and even then, I wouldn’t hold my breath.
Great!
I’ll never be able to upload video of white noise ever again.
Consumer associations should have the right to sue platforms for using poorly designed filters.
Which might be the only way to bring sanity back to copyright.
Re:
The filters aren’t necessarily poorly designed; it’s just a fact of life that moderation at scale isn’t possible to do well.
The copyright maximalists are the ones who should get sued for abusing the filters.
Re:
Define “poorly designed”
If you mean “unable to magically determine the correct copyright status of any given file, including legal exceptions for fair use, etc., with zero mistakes”, the problem isn’t design, it’s that literal magic does not exist and there is no reliable central source for the information they’re meant to be using to filter. They are forced to guess based on incomplete information, so if the guess is inaccurate that’s not the fault of the person performing the guess.
If you mean that a filter is actually badly designed and unable to serve its purpose, then fine, but the fundamental problem is that they’re expected to work miracles then get blamed when the task they’re legally obliged to perform is in reality impossible at any kind of scale.
Re: Re:
Maybe he meant that Article 17 was poorly designed. 😉
Re: Re: Re:
Oh not at all, it’s doing exactly what the people who bought it planned it to.
Re: Re: Re:2
Exactly my point. 😉
Re:
Sanity and reasonableness aren’t the same thing. The EU is sane, but it’s still highly unreasonable.
No one show the court the billions of incorrect “AI” driven DMCA notices…
It’ll be fun when they manage to blackhole their own websites again.
Re:
Platforms just need to stop protecting the big players and let them get hit just like anyone else. Content flagged as infringing? Down it goes, whether it’s from a random youtuber or a label. If they don’t like it, the platform just needs to point to the law that penalizes underfiltering while encouraging overfiltering, and tell them to take it up with the politicians.