We’re Going Down A Very Dangerous Path With AI Regulation, Despite Better Options

from the do-you-want-anti-competitive-nonsense,-this-is-how-you-get-anti-competitive-nonsense? dept

It appears the race is on to see whether the EU or the US will promulgate worse regulations around generative AI tools. The EU (as has been its MO over the last few years) is taking the lead. A few weeks back, the EU Parliament passed a draft regulation for AI. There’s still a lot more that needs to happen before something like it becomes law, but the process is moving forward. The draft (like many such laws) has a mix of good and bad ideas. Putting some limits on facial recognition, for example, seems like a good idea.

However, much of the rest of it is an unworkable mess of technological ignorance:

Under the latest version of Europe’s bill passed on Wednesday, generative A.I. would face new transparency requirements. That includes publishing summaries of copyrighted material used for training the system, a proposal supported by the publishing industry but opposed by tech developers as technically infeasible. Makers of generative A.I. systems would also have to put safeguards in place to prevent them from generating illegal content.

There’s also a ton of what has become the standard bureaucratic solution to the internet these days: requiring regular “risk assessments.” Now, don’t get me wrong, I’ve spent decades calling for tech companies to be more cognizant and thoughtful about the risks their technology creates. But once that becomes mandated by government, what it turns into is not an actually useful analysis of risks and how to mitigate them, but a bunch of lawyers doing a ton of useless busywork to cover the asses of the companies.

And, in practice, what that means for the larger industry is that the biggest companies, with the most money to set up a building full of lawyers as their “compliance department,” dominate the market, while the more creative and innovative startups are shut out.

Is it any wonder that the big companies are already declaring their love for this approach? We noted a few weeks ago that OpenAI’s Sam Altman’s comments to US regulators made it pretty damn clear he was hoping that regulators would build his company a regulatory moat to protect him from upstart competitors. And, as if to put an exclamation point on that, Time reported that the EU’s draft AI regulations include bits written by OpenAI’s lobbyists. And both Meta and OpenAI have said they support the EU’s approach.

So while these American tech giants are embracing the new regulations, tons of European companies are highlighting how much damage they’ll do to local industry. They’re saying straight up that these rules will hand AI leadership over to non-European companies and make it impossible for European companies to keep up. They’re basically putting an exclamation point on exactly what those US AI companies are signaling: these plans are designed to lock in a few favored players and exclude everyone else.

Meanwhile, though the US is a step behind, it’s not clear that the US approach will be any more thoughtful. Congress has basically handed over the process to Senator Chuck Schumer, who recently revealed his “SAFE Innovation Framework” to describe how he views the policy objectives around regulating AI.

Reading through the “SAFE Innovation Framework” is like a masterclass in the political game of making it sound like you’re saying something, while actually saying nothing at all:

  1. Security: Safeguard our national security with AI and determine how adversaries use it, and ensure economic security for workers by mitigating and responding to job loss;
  2. Accountability: Support the deployment of responsible systems to address concerns around misinformation and bias, support our creators by addressing copyright concerns, protect intellectual property, and address liability;
  3. Foundations: Require that AI systems align with our democratic values at their core, protect our elections, promote AI’s societal benefits while avoiding the potential harms, and stop the Chinese Government from writing the rules of the road on AI;
  4. Explain: Determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content.
  5. Innovation: Support US-led innovation in AI technologies – including innovation in security, transparency and accountability – that focuses on unlocking the immense potential of AI and maintaining U.S. leadership in the technology.

I mean, let’s face it: this is the kind of thing you get your policy person to write if you want to make it look like you’re doing something, but don’t really want to piss anyone off. It’s mostly vague platitudes that sound fine in a vacuum, but once you start thinking about the actual details, you realize they could be extremely problematic.

The biggest problem here, as Adam Thierer rightly points out in his analysis of these rules, is what’s implicit in that entire list: innovation in the AI space (under Schumer’s view of regulations) is going to have to be permissioned, rather than permissionless.

And, yes, I am well aware that there are some people who use the “permissioned vs. permissionless” dichotomy to try to excuse any bad behavior by companies. But it is undeniable that areas of industry that require government approval for each new thing are… not particularly innovative. They’re stagnant. So there’s a real concern here that the structure being discussed would greatly limit innovation, which, again, works to the big companies’ advantage.

As Thierer notes, this would be a huge shift in how the US regulates a rapidly changing, innovative industry:

Schumer’s address is a major moment in the growing battle over AI policy because it represents a push from the top ranks of Congress for a broad-based legislative framework for algorithmic systems and applications. His proposed policies make it clear that the United States might be abandoning the “permissionless innovation” policy vision that made America a global digital powerhouse.

He also said that the traditional legislative policymaking process is incapable of crafting law for fast-moving emerging tech like AI, meaning that “Congress will also need to invent a new process to develop the right policies to implement our framework.” Schumer aims to address this problem through “AI Insight Forums,” which will bring together “the top minds in artificial intelligence” to do “years of work in a matter of months,” and then advise Congress how to proceed.

With this speech, Schumer has signaled a potential sea change in the way the United States will regulate AI and perhaps many other emerging technologies going forward.

As Thierer also highlights, what Schumer is describing seems antithetical not just to the way that innovation actually works, but also to the way Congress actually works, which is… slowly and not very well. And that’s kind of a problem when we’re talking about a rapidly changing space.

Regarding the “AI Insight Forums,” Thierer points out that this is just a version of multistakeholderism:

What Schumer is describing is a variant of what is often called multistakeholderism, which is a collaborative governance model that has been used widely within information technology sectors. Multistakeholder efforts have been a central feature of internet governance from the start, with a wide variety of institutions working together to create standards, norms, and best practices for various digital systems and applications. While government bodies sometimes play a role in multistakeholder processes, it has typically been focused more on helping to convene dialogues in the hope that the various parties hammer out agreements and standards in a collaborative, flexible and mostly voluntary fashion. This is also sometimes referred to as “soft law” governance.

In practice, however, the AI Insight Forums Schumer has proposed represent something quite different. They are more akin to congressionally appointed expert advisory committees created with the express intent of formulating formal legislation and filling in the details for how Congress should regulate specific technological systems and applications. Perhaps some consensus will come out of this process, but these new Insight Forums are not going to make traditional policymaking problems go away. Many different special interest groups and regulatory advocates will be clamoring for a seat at the table. Meanwhile, many other AI bills have already been introduced in this session and more are likely coming as almost every congressional committee lines up to take a stab at AI policy.

I’d argue that this approach is even worse than what Thierer describes. Multistakeholderism is useful for slow-moving or slow-changing systems. Indeed, lots of people engaged in multistakeholder processes complain about how slowly they go. But often that’s for good reason: when you’re doing things like managing internet governance, you kinda want changes to be very carefully thought out and to build consensus.

Because things like that are centralized protocols on which more decentralized innovation is built. Lots of people, services, and organizations rely on those centralized systems and protocols to remain stable and consistent, so changes to them should be carefully considered: they change entire ecosystems that lots of other stuff is built on.

AI, on the other hand, is not that. It’s one of those decentralized areas, being innovated on by tons of different players at rapid speed. In that scenario, innovation drives innovation, and breakthroughs drive other breakthroughs. Having to go through a “multistakeholder process” for each new form of AI innovation would grind the gears of innovation to a halt.

At least in the US.

Again, none of this is to say that we should have a total free-for-all. Thierer suggests focusing on smaller, more targeted regulations for specific areas of concern, rather than overreaching rules covering all of AI:

If Congress hopes to get anything done at all on AI policy, lawmakers will have to be willing to break the issue down into much smaller components and focus on tractable objectives. It would be easier for lawmakers to address more targeted goals in stand-alone bills, such as proposals to keep AI away from nuclear weapons launch systems or other critical public infrastructure; disclosure for AI-generated political advertising; limits on so-called “predictive policing” algorithmic applications or the use of facial recognition tools by law enforcement bodies; or even measures to promote more robust supercomputing labs and systems and other research and development efforts.

There are many good ideas in there, but just looking over that list should help you realize that a one-size-fits-all approach to regulating AI doesn’t make any sense.

As I’ve said before, there are many good aspects to the fact that we’re having this discussion. It’s miles away from the traditional way that new technology grows, in which no one thinks about regulatory questions until much later. But that doesn’t mean that all regulation is magically good regulation. There are so many ways this could go wrong, including writing legislation that just locks in the big players and stifles the smaller, more innovative ones.

Unfortunately, from what I’ve seen so far, I have little faith that the regulation will play out in a manner that is helpful, and I fear it will be actively harmful to the development of genuinely useful AI tools.

Companies: meta, openai


Comments on “We’re Going Down A Very Dangerous Path With AI Regulation, Despite Better Options”

ECA (profile) says:

Challenges.

Let’s see what could happen.
AI used to spot people, kinda hidden, and never admitted to being in use: spot someone, then monitor them, or if they were kicked off the premises, kick them out again, and never say how you discovered them.
It’s as easy as IDing the cellphone they use, or facial ID as you pass a display.

Replace certain workers: payroll and most money management, after the cashier goes.
Replace the cashier with self-checkout. They have actually done this in old and new ways: they can count from the shelf or scan at the register (no human needed) and charge you, so you only need one person for 6-8 registers, at most.
New automated farming harvesters: geolocation to the max, with a built-in ability to avoid problems on the ground, and after 1-2 passes the whole field is mapped and everything is known. I’m waiting for automatic weeding instead of chemicals (won’t happen).

We aren’t up to Star Trek or other sci-fi levels of automated doctors. Automated diagnosis could be done, but listening to the patient works better.

A few janitorial jobs could be done: automatic sweepers and mopping.

Already tried and dead: taking meal orders remotely. Don’t expect tips. Automating this isn’t good either, as special orders won’t work very well.

Anyone wish to add?

The regulation that will come is that using tech will cost more, as companies have to pay into the system for those knocked out of work.
But who is liable? That’s the big question.

Anonymous Coward says:

such as proposals to keep AI away from nuclear weapons launch systems …

  • Humans are in control of launch systems currently, and so should be considered “part of the launch system” as a whole.
  • Humans operate off of the information they receive.
  • Therefore, AI should not be used as an information source…

Think of how many people believe that cell phone towers cause cancer (even when a tower wasn’t in operation), or that vaccines include microchips.

Now imagine policymakers who get their news from GPT-4 hallucinations…
