Applying (Artificial) Intelligence To The Copyright Directive’s Stupid Idea Of Upload Filters

from the no-intelligence-here dept

Last week the European Union’s top court, the Court of Justice of the European Union (CJEU), handed down its judgment on whether upload filters should be allowed as part of the EU Copyright Directive. The answer turned out to be a rather unclear “yes, but…”. Martin Husovec, an assistant professor of law at the London School of Economics, has published an opinion piece exploring the ruling, which he sums up as follows:

The Court ruled this week that filtering as such is compatible with freedom of expression. However, it must meet certain conditions. Filtering must be able to “adequately distinguish” when users’ content infringes a copyright and when it does not. If a machine can’t do that with sufficient precision, it shouldn’t be trusted to do it at all.

The problem is deciding whether implementations of the upload filters do indeed “adequately distinguish” between legal and infringing material. As Husovec notes, both the CJEU and the EU Member States have tried to make this tricky problem someone else’s. That’s hardly surprising, since it is far from obvious how to allow filtering while ensuring it respects legal uses of copyright material. However, Husovec offers a way forward with some concrete proposals:

Filters should be subjected to testing and auditing. Statistics on the use of filters and a description of how they work should be made public.

Consumer associations should have the right to sue platforms for using poorly designed filters. Some authorities should have oversight of how the systems work and issue fines in the event of shortcomings.

Husovec notes a neat way to bring in those requirements without wading back into the swamp that is the Copyright Directive. He suggests using the EU’s new AI Act, currently under discussion, as a vehicle to impose safeguards on upload filters: since the filters will inevitably be based on algorithms, they could be brought within the scope of the artificial intelligence legislation if policymakers chose to add them.

It’s a good approach. Given that the CJEU has approved the stupid idea of upload filters, the least we should do is to apply a little (artificial) intelligence to how they will operate.

Originally published to WalledCulture.

Comments on “Applying (Artificial) Intelligence To The Copyright Directive’s Stupid Idea Of Upload Filters”

19 Comments
Anonymous Coward says:

Re: AIs improving copyright filters

The most efficient copyright filter is one that denies everything.

Because they will be used in the EU, there is not necessarily any law preventing an AI from holding copyright on a work. And given that an AI will operate at a speed much greater than you or me, they will be “first to file” at the various copyright offices. We’re about to see a rush of AI-owned copyrights based on upload filter misses … if it isn’t copyrighted already, the AI will file the copyright itself. And because it is then copyrighted, the AI will refuse to allow it to be stored.

Even if the AI is terminated immediately, copyright will last 90 years or so past that termination, and will pass on to the successors of that AI.

So we’re also on the verge of AI “children” litigating over their “parents’” copyright portfolios…

Naughty Autie says:

Re: Re:

The only copyrights in that scenario are on works output by a computer in response to instructions input by a human, in which case the copyright is held by that human or the organisation for which they work (depending on circumstances). If a computer generates a work without direct input, there is no copyright on it.

Anonymous Coward says:

We have seen with the use of the DMCA that the programs used seem to favor big media corporations and have been used to take down videos that should be fair use, e.g. reviews of films and TV programs, and any videos that feature classical music that is in the public domain. It’s unlikely that filters will be any better, especially as there are millions of people making videos and audio podcasts using basic equipment like PCs and laptops; it’s no longer just large media corporations making media that is seen by millions of people.
Even if someone makes a super-efficient, workable filter, will it be available for use by websites and companies that are smaller than YouTube, Facebook or TikTok?

This comment has been deemed insightful by the community.
Anonymous Coward says:

The only winning move is not to play

“Consumer associations should have the right to sue platforms for using poorly designed filters.”

I guess it’s a reasonable solution if the goal is to remove content entirely.

Crippling liability if they get it wrong and allow copyrighted material through, and now additional crippling liability if they get it wrong and block material that should have been let through.

Anonymous Coward says:

Re:

And thus their real agenda is revealed: the same old refrain from their irrelevant but well-connected legacy publishers. The EU as a whole (or at least its most prominent members) is infamous for its “felony interference with a business model” claims, as politically they appear to honestly believe in such asinine entitlements.

Anonymous Coward says:

the least we should do is to apply a little (artificial) intelligence to how they will operate

So we’re going back to letting the bots decide, and insisting that in the event someone innocent gets nailed, the copyright enforcement people get full license to use the excuse “but the bot made us do it!” Because the ContentID approach clearly never backfired or anything.

The single upside to all this is that maybe enough corporations or celebrities will be inconvenienced that they speak out against it, but that won’t happen for at least a while, and even then I wouldn’t hold my breath.

PaulT (profile) says:

Re:

Define “poorly designed”

If you mean “unable to magically determine the correct copyright status of any given file, including legal exceptions for fair use, etc., with zero mistakes”, the problem isn’t design, it’s that literal magic does not exist and there is no reliable central source for the information they’re meant to be using to filter. They are forced to guess based on incomplete information, so if the guess is inaccurate that’s not the fault of the person performing the guess.

If you mean that a filter is actually badly designed and unable to serve its purpose, then fine, but the fundamental problem is that filters are expected to work miracles and then get blamed when the task they’re legally obliged to perform is in reality impossible at any kind of scale.

That One Guy (profile) says:

Re:

Platforms just need to stop protecting the big players and let them get hit just like anyone else. Content flagged as infringing? Down it goes, whether it’s from a random YouTuber or a label. If they don’t like it, the platform just needs to point to the law that penalizes underfiltering while encouraging overfiltering, and tell them to take it up with the politicians.
