EU’s New AI Law Targets Big Tech Companies But Is Probably Only Going To Harm The Smallest Ones

from the ask-gpt-to-write-better-laws dept

The EU Parliament is looking to regulate AI. That, in itself, isn’t necessarily a bad idea. But the EU’s proposal — the AI Act — is pretty much bad all over: it’s vague, it’s broad, and it would allow virtually any citizen of any EU nation to wield the government’s power to shut down services they personally don’t care for.

But let’s start with the positive aspects of the proposal. The EU does want to take steps to protect citizens from the sort of AI law enforcement tends to wield indiscriminately. The proposal would actually result in privacy protections in public spaces. This isn’t because the EU is creating new rights. It’s just placing enough limits on surveillance of public areas that privacy expectations will sort of naturally arise.

James Vincent’s report for The Verge highlights the better aspects of the AI Act, which is going to make a bunch of European cops upset if it passes intact:

The main changes to the act approved today are a series of bans on what the European Parliament describes as “intrusive and discriminatory uses of AI systems.” As per the Parliament, the prohibitions — expanded from an original list of four — affect the following use cases:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

That’s the good stuff: a near-complete ban on facial recognition tech in public areas. Even better is the sidelining of predictive policing programs, which, as the EU Parliament already knows, are little more than garbage “predictions” generated from bias-tainted garbage data supplied by law enforcement.

Here’s what’s terrible about the AI Act, which covers far more than the government’s own use of AI tech. This analysis by Technomancers roots out what’s incredibly wrong about the proposal. And there’s a lot to complain about, starting with the Act’s ability to solidly entrench tech incumbents.

In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models.  The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe.  While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.

Any model made available in the EU, without first passing extensive, and expensive, licensing, would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue.  Opensource developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.

When you make the fines big enough and the mandates restrictive enough, only the most well-funded companies will feel comfortable doing business in areas covered by the AI Act.
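To make concrete why a flat floor hits small players hardest, here is a minimal sketch of the fine formula quoted above (the greater of €20,000,000 or 4% of worldwide revenue); the function name and example revenue figures are illustrative, not from the Act:

```python
def max_fine_eur(worldwide_revenue_eur: float) -> float:
    """Greater of a 20,000,000 EUR floor or 4% of worldwide revenue."""
    return max(20_000_000, 0.04 * worldwide_revenue_eur)

# A small firm with 5M EUR in revenue faces the 20M floor -- four times
# its entire annual revenue:
small = max_fine_eur(5_000_000)        # 20,000,000

# A giant with 100B EUR in revenue faces 4%, a painful but survivable hit:
large = max_fine_eur(100_000_000_000)  # 4,000,000,000
```

For any company with under €500 million in worldwide revenue, the €20 million floor dominates, which is exactly why the exposure is existential for small developers and merely a line item for incumbents.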

In addition to making things miserable for AI developers in Europe, the Act is extraterritorial, potentially subjecting any developer located anywhere in the world to the same restrictions and fines as those actually located in the EU.

And good luck figuring out how to comply with the law. The acts (or non-acts) capable of triggering fines and bans are just as vague as the rest of the proposal. AI providers must engage in extensive risk-testing to ensure they comply with the law. But the list of “risks” they must foresee and prevent is little more than a stack of government buzzwords that can easily be converted into actionable claims against tech companies.

The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy?  Could this act itself be a risk to democracy?

In addition, the restrictions on API use by third parties would put US companies in direct conflict with US laws if they attempt to comply with the EU’s proposed restrictions.

The top problem is the API restrictions.  Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill.  A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price.  Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.

What the EU wants runs contrary to what the FTC is demanding.  For an American company to actually impose such restrictions in the US would bring up a host of anti-trust problems.

While some US companies will welcome the opportunity to derail their smaller competitors and lock in large contracts with their wealthiest customers, one of the biggest tech companies in the world is signaling it wants no part of the EU Parliament’s AI proposal. The proposal may not be law yet, but as Morgan Meaker and Matt Burgess report for Wired, Google is already engaging in some very selective distribution of its AI products.

[G]oogle has made its generative AI services available in a small number of territories of European countries, including the Norwegian dependency of Bouvet Island, an uninhabited island in the South Atlantic Ocean that’s home to 50,000 penguins. Bard is also available in the Åland Islands, an autonomous region of Finland, as well as the Norwegian territories of Jan Mayen and Svalbard.

This looks like Google sending a subtle hint to EU lawmakers, letting them know that if they want more than European penguins to have access to Google’s AI products, they’re going to have to do some rewriting before passing the AI Act.

The EU Parliament is right to be concerned about the misuse of AI tech. But this isn’t the solution, at least not in this form. The proposal needs to be far less broad, way less vague, and more aware of the collateral damage this grab bag of good intentions might cause.



Comments on “EU’s New AI Law Targets Big Tech Companies But Is Probably Only Going To Harm The Smallest Ones”

Anonymous Coward says:

since when has the EU lawmakers, just like the USA lawmakers ever done anything that is detrimental to ‘BIG COMPANIES/INDUSTRIES but can or does devastate the smaller companies? it happens with continued regularity! even worse, customers/ordinary people, a lot of whom rely on the smaller companies, get kicked in the nuts time and time again! has anyone ever actually noticed that it’s always the big companies that get away with blue- murder, one way or another and various governments are always falling over backwards to please them?
