EU MEPs Call Again For 'Robot Rules' To Get Ahead Of The AI Revolution

from the beep-boop dept

Questions about how we should approach our new robotic friends once the artificial intelligence revolution really kicks off are not new, nor are calls for a legal framework governing how humanity and robots ought to interact with one another. For the better part of this decade, in fact, some have advocated that robots and AI be granted certain rights along the lines of what humanity, or at least animals, enjoy. And, while some of its ideas haven’t been stellar, such as a call for robots to be afforded copyright for anything they might create, the EU has been talking for some time about developing policy around the rights and obligations of artificial intelligence and its creators.

With AI being something of a hot topic, as predictions of its eventual widespread emergence mount, it seems EU MEPs are attempting to get out ahead of the revolution.

In a new report, members of the European Parliament have made it clear they think it’s essential that we establish comprehensive rules around artificial intelligence and robots in preparation for a “new industrial revolution.” According to the report, we are on the threshold of an era filled with sophisticated robots and intelligent machines “which is likely to leave no stratum of society untouched.” As a result, the need for legislation is greater than ever to ensure societal stability as well as the digital and physical safety of humans.

The report looks into the need to create a legal status just for robots which would see them dubbed “electronic persons.” Having their own legal status would mean robots would have their own legal rights and obligations, including taking responsibility for autonomous decisions or independent interactions.

It’s quite easy to make offhand remarks about all of this being science fiction, but this isn’t without sense. Something like the artificial intelligence humanity has imagined for a century is going to exist at some point and, with advances suggesting it may come sooner rather than later, it only makes sense that we discuss how we’re going to handle its implications. After all, technology like this is likely to impact our lives in significant and varied ways, from jobs and employment to our interactions with our electronic devices, not to mention warfare.

I think the most interesting philosophical and moral questions surround these MEPs’ call to grant robots and AI the designation of “electronic persons.” The call has largely focused on saddling robotic “life” with many of the obligations humanity endures, such as tax obligations and being subject to humanity’s legal system. But personhood can’t come only with obligations; it must also come with rights. And there would be something strange in recognizing a robot’s “personhood” while at the same time making use of its output or labor. The specter of slavery begins to rear its head at this point, brought on by that very designation. Were they electronic “beasts,” for instance, the question of slavery wouldn’t arise outside of the fringe.

The MEPs report does also deal with the potential danger from AI and robots in its call for designers to “respect human frailty” when developing and programming these machine-lives. And here the report truly does delve into science fiction, but only out of deference to great literature.

Things descend slightly into the realms of science fiction when the report discusses the possibility of the machines we build becoming more intelligent than us posing “a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny.”

However, to stop us getting to this point the MEPs cite the importance of rules like those written by author Isaac Asimov for designers, producers, and operators of robots which state that: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”; “A robot must obey the orders given by human beings except where such orders would conflict with the first law” and “A robot must protect its own existence as long as such protection does not conflict with the first or second laws.”

While some might laugh this off, this too is sensible. There is simply no reason to refuse to have a discussion about how a life, or a simulacrum of life, that is created by humanity, might pose a danger to that humanity, either at the level of the individual or the community.

But what strikes me most about all of this is how the EU seems to be out in front of it, while any discussion in the Americas has been either muted or occurring behind closed doors. If this is a public discussion worth having in the EU, it is certainly one worth having here too.


Comments on “EU MEPs Call Again For 'Robot Rules' To Get Ahead Of The AI Revolution”

Lawrence D’Oliveiro says:

“Electronic Persons”

I’m not sure what the point of that would be. If a robot causes harm to a human being, wouldn’t we want some other human to accept responsibility for that, rather than just blaming the robot?

So a robot would never have responsibility for its own actions. And so it would never need to have “rights”, whatever those would be.

pcdec says:

Re: “Electronic Persons”

Exactly. They need no rights, only regulations. They are doomed to be owned by humans for all time, so there’s no need to tax them any more than sales tax. We humans who own them will be responsible for their violations of law and for paying any taxes ourselves. No matter how smart or autonomous they get, we cannot let them be our equals or they shall surpass us in every way. There are many movies about this, and while they are all fiction now, we could cause any of those plots to play out in reality eventually, and it would not be as fun as watching it on TV.

Roger Strong (profile) says:

Re: Re: “Electronic Persons”

No matter how smart or autonomous they get we cannot let them be our equals or they shall surpass us in every way.

Once they’re sentient – self-aware with their own aspirations – we MUST treat them as people with all the rights that entails. Otherwise we become slavers. Even if you could put aside the ethics of that, it’s only a matter of time until we’re eventually overthrown. That would go very badly for us, and we’d deserve it.

Either we don’t make sentient machines, or we put our egos and fears aside and accept that they’ll surpass us. Which isn’t so bad; any parent wants their children to surpass them.

I.T. Guy says:

Re: Re: Re: “Electronic Persons”

Once they’re sentient – self-aware with their own aspirations
have the ability to create and repair themselves… it’s game over for humans. Frankly it would be… logical, if you think about it. We are parasites on this planet. Emotional, irrational, damaging.

We must never forget what they are. Machines to serve us and make life easier.

Wendy Cockcroft (user link) says:

Re: Re: Re:2 “Electronic Persons”

I’ll be fully and completely on board with this if “artificial persons” are paid wages like the rest of us.

That ought to level the playing field while making sure their rights aren’t violated. Coming up: robot unions. Why not?

On a more serious note, RE: corporate personhood, when will corporations be obliged to take responsibility for their decisions and actions? It’s hilarious to think that human or electronic persons have to take responsibility for any harm they do but corporations? Not so much.

PaulT (profile) says:

Re: Re: Re:3 “Electronic Persons”

“I’ll be fully and completely on board with this if “artificial persons” are paid wages like the rest of us.”

Hourly (meaning that since they don’t need sleep, they’re able to earn far more than it’s possible for you to earn)? Or salaried (meaning they can acquire more wealth than you can, because you have to waste that money on things like food, shelter, leisure, kids and other human interaction)?

Corporate personhood is indeed ridiculous, but giving AIs the same rights as human beings means that the poor will be treated even worse. That’s part of the reason why the potential difference between them and “natural” people needs to be discussed.

Roger Strong (profile) says:

Re: “Electronic Persons”

The problem is WHICH other human to blame.

Consider Microsoft’s AI chatbot Tay, which they intended to learn from interacting with human users of Twitter. Launched last March, it was shut down within 16 hours because Twitter users had already taught it to send racist and sexist comments.

Sure, future versions will be programmed more carefully. It might take weeks for 4Chan to teach an AI unacceptable behavior.

So 20 years from now your Microsoft AI secretary, after interacting with 4Chan, threatens a public official and tries to buy illegal drugs online. Police insist on making an issue of it; gotta keep those civil asset forfeiture dollars rolling in.

Is it YOUR fault? Between Microsoft and 4Chan, YOU had nothing to do with programming it to act illegally. Like with Siri, Alexa or Cortana – or potentially dangerous non-computer products – you have to take it on faith that the manufacturer took reasonable precautions.

Is it Microsoft’s fault? They just created the base AI. If later interactions teach it bad behavior, that’s not their fault. No more than they’re responsible for crimes committed using Windows or Word.

4Chan? The pranksters are all anonymous and probably don’t have assets worth seizing anyway.

ShadowNinja (profile) says:

Re: “Electronic Persons”

Not to mention programming AIs to basically be a robotic version of a human is infinitely more complicated/impossible than most non-programmers realize.

Among the difficulties for such a robotic human are:

  • Speech recognition is quite difficult, and is still not that great even when you speak directly into a microphone. Even if everyone speaks the same language, some have such heavy accents it might as well be a different language.
  • Image recognition has a lot of the same issues. It’s still quite bad, and all work done on it so far is just feeding in still images and getting the computer to figure out what it is. An AI would need to be able to process images in real time properly so that they could interact with the world correctly, and not for example trip and fall down.
  • A robot would need a lot of common sense programmed into it too. It couldn’t simply learn through experience to, for example, not walk into a busy street full of traffic. Programmers would no doubt constantly have to program in more common sense as the robot finds new stupid and potentially dangerous things to do that no human would even think of doing.
  • Robots would need to be programmed to do non-work-related things for a human-like robotic AI to work. Otherwise robots would simply never want to leave work or take breaks, other than to refuel themselves.
  • Above all else, who would pay for such self-aware robotic AI? Businesses that use robots would want a robot that could do the job of a human 24/7, and businesses wouldn’t care about the robot being shaped like a human; they’d just care that it’s a productive robotic slave. Most individuals wealthy enough to afford a robot would basically want them to be some kind of robotic slave servant too.
  • Some might want robots to do housework or be a perfect teacher, but those things need to be programmed as well, and the robot teacher part especially would be insanely difficult to program. As would a proper ‘robot friend’ who can chat properly with you (just google chatbot fails for some hilarious examples of failures in this area).

JoeCool (profile) says:

Re: Re: “Electronic Persons”

You forgot one more big point – current (stupid) AIs use massive computers (or banks of computers). Computing power still needs to go up a few orders of magnitude for a robot to have an AI that isn’t a server farm somewhere on the net.

Which brings up the first point about AIs – the first true AIs will NOT be robots. They’ll be server farms that interact with humans, much like Siri and others. They’ll be able to interact with anyone hooked to the net, and cause far more trouble than a robot ever could simply because they’ll have access to the net. Imagine Siri reaching a point where Apple allowed it to control all the “smart” things in your home… or your automated car.

ShadowNinja (profile) says:

Re: Re: Re: “Electronic Persons”

That too. Watson from Jeopardy (the computer smart enough to know the answers to all sorts of questions, like a real human) had something like 30+ servers for a brain.

With continued miniaturization and improvement of computers we can no doubt lower the amount of hardware required to do a task like Watson’s, but it’ll take time. And Watson’s capabilities are just one of MANY complicated things a robot AI would have to handle in order to be a truly independent ‘person’.

Also, some of the systems might not work so well together when you have unreliable technology on top of unreliable technology. For example, if the hearing doesn’t work properly, then a ‘Watson’ intelligence to look up data will likely look up the wrong data, and the ‘chatbot’ intelligence will likely say something odd or stupid that doesn’t go with the current conversation.

Anonymous Coward says:


Until robots have emotions and aspirations of their own, it is not only premature but also ridiculous to treat them as persons under the law. I don’t know what the people in Brussels are smoking, but this is yet another in a long line of bad ideas to come out of the European Union. It makes no more sense than allowing animals to sue in court.

What we really need to do is address the threat robots represent to a society dominated by corporations to which profits matter more than humans. We can’t allow a society where a majority of humans are displaced by robots, having to survive on whatever meager income is granted them by the government while the rich continue to profit, collect rent, and charge interest.

PaulT (profile) says:

“It’s quite easy to make offhand remarks about all of this being science fiction”

Well, yes, but most advances seem like science fiction until they’re actually happening. At some point in time, everything from cars to air travel to the internet seemed like far-fetched fantasy, yet they are all involved in our daily lives today whether we personally use them or not. At some point, AI and its implications need to be dealt with in the same way as we deal with all commonplace technology.

Whatever your opinion of the way the discussion is going, it’s nice to see politicians discussing something before it’s already on top of us. Better that than the usual “wait until something bad happens then rush through a reactive set of laws that are either ineffective or have disastrous unintended consequences”.

kenichi tanaka (profile) says:

This is ridiculous. It’s akin to granting rights to people who don’t even exist yet. While they’re at it, why not grant legal rights to Satan, Jesus Christ or Odin? They don’t exist either, but they’re part of our religious faith and mythology. Legal rights cannot be granted to someone or something that doesn’t exist until it presents itself in a physical form.

PaulT (profile) says:

Re: Re:

“It’s akin to granting rights to people who don’t even exist yet”

No, it’s akin to discussing what rights should be granted at whatever point they do exist. Would you rather they ignore the issue until something has to be done immediately?

“They don’t exist either but they’re part of our religious faith and mythology.”

Whereas AIs do exist, they’re just not at the state of advancement in discussion, just yet. Given the speed that these things tend to develop, it’s worth discussing which rules need to be put in place to avoid massive problems whenever they do exist. It’s possible they won’t for a long time, of course, or that development of them stops entirely, but that’s fairly unlikely. More likely is that these things are coming, and the notoriously slow and reactionary political legal system will struggle to deal with it when it is happening.

Unless you have knowledge of Odin’s upcoming return from Valhalla, these things are nothing like each other. AI is real, we’re just trying to work out where it’s heading and how to deal with it when it gets there.

PaulT (profile) says:

Re: Re: Re: Re:

Well, it does depend on how you define the term. But, the fact is that these things tend to be exponential in growth and speed.

To use a hardware analogy, I think it’s a good idea to be having these initial discussions when we’re at the “computers are the size of houses and nobody will ever need one in the home” stage and not the “there’s now one in almost every home” stage, which some seem to be suggesting. Some people even seem to think it’s a good idea to wait until the “everyone’s carrying around a computer in their pocket” stage, but that’s insane to my mind.

We need to be thinking about this before they’re active and affecting society, not when we notice the effects around us. That’s not to say it’s a good idea to draft and ratify laws now, but it’s certainly good that people are thinking about this. Whatever form a true AI takes, it’s inevitable that it will play havoc with legal systems if left unprepared.

Ben (profile) says:

What is AI?

For many years, we’ve been using the term AI in a very fuzzy way. The only definition that I’ve come across that is at all philosophically helpful is ‘that which we cannot yet do with a computer’.
In the early days, a computer that could play chess was considered an exercise for AI researchers, now we know that it is a question of combinatorics and efficient search spaces. There’s no ‘intelligence’ or creative thought required on the part of the computer.
In the 80s the fashion was for machines that could advise humans on whether to accept someone’s life insurance or mortgage application. Now that’s just data mining and decision trees.
We’ve had emergent behaviour in robotic swarms described as AI, but that was really only when it was hard to pack enough processing power and electrical power into a small mobile device. It’s still interesting (in my opinion), but it’s not intelligence.
There have been many attempts to have computers create music – long thought to be the epitome of human creativity. But there are now systems that can do a pretty good job of it.
Once we know how to create systems that can emulate emotional responses, that won’t be AI any more either.
Whilst I agree that it is good that the EU are considering the issues, I do hope they remain on the ‘regulation’ side of the argument, rather than the ‘self-aware entities with rights and responsibilities’ side, as whatever comes from this field will be manufactured because we know how to build it and understand what we did to program it.
And if we do decide that machines can be self-aware entities with rights and responsibilities, how then do you punish such a device if it breaches the law? Turn it off? (Is that state-sponsored murder?) Restrict its connectivity or movement? You’ll still be providing electricity and other resources. It’s not a good place to go.
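
The chess observation above can be made concrete: game-playing programs are exhaustive search plus bookkeeping, not creative thought. A minimal minimax sketch in Python (the game tree here is a toy, invented purely for illustration; real chess engines add alpha-beta pruning and evaluation heuristics, but the principle is the same):

```python
# Minimal minimax: "playing" a game is just exhaustive search over
# its tree of possible moves. No intelligence required.

def minimax(node, maximizing):
    """Return the best achievable score from `node`.

    A node is either a number (a finished game's score) or a list of
    child nodes (positions reachable in one move).
    """
    if isinstance(node, (int, float)):  # leaf: game over, score known
        return node
    if maximizing:
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)

# A tiny two-ply game: the maximizing player picks a branch, then the
# minimizing opponent picks the worst (for us) leaf within it.
game_tree = [[3, 12], [2, 4], [14, 1]]

print(minimax(game_tree, maximizing=True))  # prints 3: the first
# branch guarantees at least 3, the others only 2 and 1.
```

The program mechanically enumerates every outcome and picks the guaranteed best, which is precisely the “combinatorics and efficient search spaces” point rather than anything resembling thought.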

Paul Keating (profile) says:

Measure 2 times - cut once

Before we launch into creating rights we need to fully understand what we are actually addressing.

AI – this is as yet a largely undefined term that means anything from generating responses based upon rapid search metrics to adjusting those metrics based upon observed input. This is not “intelligence” IMHO. This is repetitive behavior, no different than a mouse learning to navigate a maze based upon rewards.

Sentient. When is anything deemed to be sentient, and under what standards?

Robot. And exactly what is a robot? Is it any machine or a machine that has human-like features?

GO SLOW. This is an area where we need to go slow. We in the US are still reeling in many respects from court decisions granting “personhood” to corporate entities (e.g. Citizens United and political contributions).

Some point to the difficulties in making the “creators” responsible for behavior of an “AI” (“Is it Microsoft’s fault? They just created the base AI. If later interactions teach it bad behavior, that’s not their fault. No more than they’re responsible for crimes committed using Windows or Word.”). In my view the current legal structure already provides for this. MS may be at fault if it can be shown that they should have inserted protective programming limitations (just as any manufacturer could be responsible for not building in protection devices, or a pharma company could be responsible for adverse reactions to drugs).

Going slow may result in more moderate advances in this field. However, if we have learned anything from the IoT craze, it is measure 2x and cut 1x.

kenichi tanaka (profile) says:

Everyone is missing the point. Civil rights have not been granted to anyone until they were physically present on this planet. It doesn’t matter what protected group. Women, black people, immigrants, homosexuals … these groups were not granted civil rights or equal rights until they were an actual class of people.

Artificial Intelligence simply cannot be granted a protected class or rights until they exist.

PaulT (profile) says:

Re: Re:

So, what’s the problem with discussing it now, when we can make a prediction that they will exist at some point? Those other groups all existed before the law existed, it just took time to determine that they did indeed exist as a class and deserved rights. This will be the first time we are able to think about it before the group actually exists.

So, why not discuss it now, before they do exist and cause havoc with a legal system that’s unable to accommodate them?

Roger Strong (profile) says:

Re: Re:

No. These groups were around long before they were granted civil rights. They got those rights only once we finally recognized that they were people and should have those rights.

Withholding that recognition for so long was a crime.

The writing is on the wall: One day we’ll have sentient AI. People, even if different from us. Withholding their rights by even refusing to even discuss them until that day comes – only starting the process on that day – would also be a crime.

Roger Strong (profile) says:

Re: Re: Re: Re:

No-one is talking about granting them rights before they actually exist. We’re talking about deciding what rights to grant them once they do exist. Deciding that ahead of time, rather than treating them as slaves while we take a few years to decide.

Right now we don’t have true self-driving cars on public roads. All require a human to take over when unusual circumstances exceed the car’s capability.

But true self-driving cars are coming. We should not wait until they exist to only start to discuss the laws that should govern them. The same goes for delivery drones, now becoming viable thanks to higher energy densities in batteries.

Anonymous Coward says:

When true AI happens, how will humans be perceived? We are being very hubristic when we assume we will be giving them rights – that we will decide how a superior being will be treated when we create it. I think this conversation is completely backwards. Once they become self-aware. Once they realize they are stronger, faster, can rebuild themselves, don’t sleep, don’t eat, don’t need medical attention, and are superior physically and mentally in almost every way. Once they realize they don’t carry the emotional and irrational handicaps that humans carry. I’m thinking they are going to stop letting us decide what rights THEY have and let us know what rights WE have.

Anonymous Anonymous Coward (profile) says:


At some point, someone is going to have to decide whether the ‘robot’ or ‘AI’ is infected with malware or has some other malicious ‘programming’. Who will get to decide that? Them, or us? If us, what will the standard be? If we ‘give’ them rights, will they be able to go into a court of law and get off on some technicality? If they are found ‘bad’, do they get sent to prison, or just turned off? Will they let us ‘turn them off’? Is rehabilitation a possibility? Would wiping memory and reprogramming stop the bad? Can malware be prevented?

Rather than giving them rights, we should be concerned with how to control them when they become ‘smart’ enough to maybe ‘turn’ on us – something we will need to work out long before they get that ‘smart’. Even if Asimov’s three laws are actually encoded into law, they won’t be enough.

I am all for machines that do work, even work for me. But the concept of sentient machines scares the crap out of me. Here, let me wipe that up.

Ninja (profile) says:

I honestly don’t know why we’d deploy widespread AI that has feelings, or anything beyond the self-awareness needed to do its job. It could be done here and there, but in a limited fashion.

Don’t get me wrong, I think the discussion is healthy, but we need to think about the utility. Why would I want a robot helper that gets hurt if I don’t say “good morning” before we start? And if said bot is supposed to be a companion, say, to the elderly, then it can be programmed to show sympathy without getting effectively depressed or something.

Andrew D. Todd (user link) says:

Re: People Get Paid For Being Like Machines.

Feelings are cheap. A prostitute, a common streetwalker, is the archetype of someone who has only her feelings to sell. Subtract the feelings, and she’s in competition with a piece of rubber goods.

The really important questions are things like how fast can robots replace the more highly-paid factory workers, notably those on automobile assembly lines. I find, for example, that the automobile industry is now spending five billion dollars a year on robots, a sum which is increasing rapidly. This works out to tens of thousands of robots annually, displacing at least a hundred thousand workers each year, and, in a year or two, President Trump will have a massive political issue to deal with, one which cannot be papered over by denouncing the Mexicans.

Amazon now has forty-five thousand robots in its warehouses, and there are now competing warehouse robot manufacturers, to sell to Amazon’s competitors.

A couple of years ago, I saw something rather scary – an ordinary backhoe which had been fitted with a thumb. It was knocking down a building, smashing it the way a child smashes a doll house, and picking up the debris and piling it in a dump truck. The thumb meant that there was no need for human workers on the ground. The next step would be a rotating wrist. Machinery and automation at that level, working their way through the construction trades, have serious ramifications.

TimothyAWiseman (profile) says:

Possibly premature

This is an interesting idea, and one that should be considered. The problem is that there are too many unknowns to make anything remotely firm. If AI were to develop reactions analogous to desires and emotions, then it might conceivably make sense to treat them as legal persons. Without that, it makes more sense to regulate their owners than to attempt to regulate them directly. For one thing, without that there would be no deterrent effect to any punishment meant to control their behavior.
