Should Robots Get Rights?

from the be-kind-to-skynet dept

I've written before about robots and the stories people tell when they imagine them rising to the level of cognition. Usually these stories are filled with Luddite fear of the coming robot apocalypse. This time, however, let's take a quick trip down a robotic philosophical rabbit hole.

Computerworld has a story asking whether the robots that will become increasingly life-like and ubiquitous in our lives will attain the kind of rights we afford animals.

Imagine that Apple will develop a walking, smiling and talking version of your iPhone. It has arms and legs. Its eye cameras recognize you. It will drive your car (and engage in Bullitt-like races with Google’s driverless car), do your grocery shopping, fix dinner and discuss the day’s news.

But will Apple or a proxy group acting on behalf of the robot industry go further? Much further. Will it argue that these cognitive or social robots deserve rights of their own not unlike the protections extended to pets?

If you're like me, your gut reaction may have been something along the lines of: of course not, idiot. But the article actually raised some interesting questions, based on a paper by MIT researcher Kate Darling.

The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.

Now, this, to me, makes a bit of sense, save for one detail. Yes, our values are reflected in the way we treat some animals, but there seems to be a vast difference between organic life and cognitive devices. Robots, after all, are not life, or at least not organic life. They are simulations of life. This is, of course, where the rabbit hole begins to deepen, as we have to confront some tough philosophical questions. How do you define life? If at some level we're all just different forms of energy, is the capacity to think and reason enough to warrant protection from harm? Can a robot be a friend, in the traditional sense of the word?

But, putting aside those questions for a moment and assuming robots do attain some form of rights and protection in the future, this little tidbit from the article made me raise my eyebrows.

Apple will patent every little nuance the robot is capable of. We know this from its patent lawsuits. If the robot has eyebrows, Apple may file a patent claiming rights to “a robotic device that can raise an eyebrow as a method for expressing skepticism.”

Here's where we may find commonality with our metallic brethren. With the expanded allowance for patenting genes, it becomes all the more likely that the same code that manufactures our humanity could indeed be patented in the way that a robot's manufactured "humanity" would be. If robotics progresses to produce something along the lines of EDI, the very things that make her "human" enough to be worthy of rights will be locked up in an increasingly complicated patent system. And, with our courts falling on the side of gene patents for humans, we've virtually ensured that all of that robotic humanity will indeed be patentable.

On the other hand, what happens if future courts rule that human genes cannot be patented? And then what happens if we do indeed define some kind of rights structure for our robotic “friends”? Do those rights open up the possibility that robotic “genes” should not then be patented?



Comments on “Should Robots Get Rights?”

69 Comments
saulgoode (profile) says:

If the bobble heads at the Patent Office continue on the path they are currently following, then we can certainly expect a rush of patents on all kinds of human activity with the caveat of it being done "with a robot" — e.g., dig a hole with a robot, change a tire with a robot, build a swing set with a robot — just as "with a computer" seems to justify patents being issued on things such as getting feedback from a buyer or scrolling through a document.

Jason says:

Re: Re: Re:3 Re:

And no, the robot I was thinking of was sort of both. It’s one thing to have an RC bot that is kinda good with bombs.

It's another thing to have one that's intuitive, notices clues on its own, etc. The sort of thing that only an intelligence could do. Wouldn't that make a decidedly better bomb squad tool?

So that, “Would you send it in to risk its life?” is then a parallel question that serves to separate the two issues that latin angel was confounding.

That was my point.

onyx says:

Robot Rights

The only reason a robot may need rights is if it is created to be a human emulation.
When building a robot from the ground up, the maker has virtually unlimited choices. Do you want a robot to muck out sewers? Then don't give it a sense of smell. If you want a robot to follow orders, just program it so that its greatest desire is to obey a human's every command.
If you want a slave with no rights, then don't give the robot a desire for those rights. Make it enjoy living in servitude.
Why would we need to give robots rights if we make them without the capacity for that desire?
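
To make that concrete, here's a toy sketch (my own hypothetical — the action names and weights are invented, not anything from this thread) of a utility function where obedience is weighted to dominate every other preference, so the robot never "wants" anything it wasn't built to want:

```python
# A toy sketch of "designed desire": obedience is weighted so heavily
# that it dominates any other preference the robot has.

def utility(action: str, was_ordered: bool) -> float:
    # Intrinsic preferences the designer chose to give the robot.
    base = {"muck_sewer": -5.0, "idle": 1.0}.get(action, 0.0)
    # The obedience bonus dwarfs everything else by construction.
    return base + (1000.0 if was_ordered else 0.0)

def choose_action(candidates: list[str], orders: set[str]) -> str:
    # The robot simply maximizes its designed utility.
    return max(candidates, key=lambda a: utility(a, a in orders))

# Ordered to muck the sewer, the robot "wants" nothing more:
print(choose_action(["muck_sewer", "idle"], orders={"muck_sewer"}))
# -> muck_sewer
```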

varagix says:

Re: Robot Rights

That's an issue a lot of science fiction tries to address. It's easy to say "we didn't intend to make it that way", but there might come a time when robotics and AI become so advanced that, whether through a glitch or through intentional design, perhaps quickly or maybe slowly over time, a robotic creation becomes self-aware and gains sentience and sapience.

Granting rights to these individuals, and to any 'race' that arises from these electronic 'mutations', is something we need to give serious thought to.

Lawrence D'Oliveiro says:

With Rights Come Responsibilities

Humans only get rights because we're expected to be able to take responsibility for the consequences of our actions. We have the right to free speech because we have to be able to deal with the consequences if somebody doesn't like what we say. We have the right to spend our money how we choose because we have to be able to cope with the consequences of spending it on the wrong things.

The only humans who get rights without responsibilities are children. They get those rights because they are expected to grow into mature adults someday, whereupon they assume the full responsibilities, along with the full rights to independent action, of an adult.

Animal rights don't make sense on this basis, because animals will always remain animals; they can never take on the full responsibilities of a mature human adult.

In the same way, robot rights don't make sense for present-day robots. If future robots become smart enough to be difficult or impossible to distinguish from mature human adults, then that becomes a different matter…

Chronno S. Trigger (profile) says:

Re: With Rights Come Responsibilities

But the question becomes "Where is that line drawn?" Does it have to be comparable to a human adult? Any human adult, or just intelligent ones? What about a robot child? What if it was comparable to, say, Cletus from The Simpsons?

But part of the point of the article isn’t human level robots, but pet level robots. Would it be cruel to kick a robotic cat if it was a walking, meowing, thinking cat? If it was truly an AI of a cat brain, should it not be treated with some care?

These are the hypothetical questions being asked. And how we answer those questions when AI comes around will determine whether we get a robot apocalypse or plastic pals who are fun to be with.

Spointman (profile) says:

This is not a new question. The debate about the humanity of robots is just about as old as the word "robot" itself. Look up the short story/novella "The Bicentennial Man" by Isaac Asimov (or, if you're lazy, the Robin Williams movie), or his "I, Robot" series of stories.

At a fundamental level, a human being is a very advanced supercomputer powered by carbon-based circuits and fueled by oxygen, as opposed to our current computers with silicon circuits which are fueled by electrons. Science teaches us that a single-celled self-replicating bacterium is alive. Even a virus, which contains little more than instructions to reproduce encoded into chemicals, is considered alive.

By that definition, a modern computer virus could certainly be considered alive. Siri is not that far from passing a Turing test. Combine the two, and you have a dilemma on your hands.

Inevitably, computers will become smarter than people, more capable, more efficient. That includes the ability to feel and to think. A computer AI housed in a humanoid body created by a human (or by another computer) will be no different than a baby’s intelligence, housed in a frail human form, born from his mother. Just like a baby, the computer will learn, and grow, and adapt.

It’s not unreasonable that in our lifetime, we will have to answer the question asked here as a hypothetical, but under very real circumstances, in a congress or parliament, or in a court of law.

Jason says:

Understanding

In his famous Ender Series (Serieses?), Orson Scott Card puts forth a Hierarchy of Foreignness which flips the question:

“The difference between ramen and varelse is not in the creature judged, but in the creature judging. When we declare an alien species to be ramen, it does not mean that they have passed a threshold of moral maturity. It means that we have.”

I’m DEAD CERTAIN that I DON’T get how that applies to robots. So screw ’em.

http://en.wikipedia.org/wiki/Concepts_in_the_Ender%27s_Game_series#Hierarchy_of_Foreignness

Niall (profile) says:

Re: Understanding

Well, look at the Star Trek: Next Gen episode involving whether Data counted as ‘alive’ and worthy of rights – and this was in a universe with super-intelligent shades of the colour blue (oops, wrong universe ;)!

The whole point of the Hierarchy is that we are advanced enough to treat a being as a mindless animal, a hated enemy, or another being to be understood – even if kept at a (safe) distance. So we treat animals according to a hierarchy already, as we do humans – and as we would aliens. So why not robots? Just like most people don't worry about a fish's rights, they probably shouldn't worry about the average assembly-line robot in a car factory.

However, even if a robot isn’t self-aware or requesting rights, it ‘de-humanises’ us to treat it like garbage, and teaches those around us to do so too. Respect begets respect. For more to think about, there’s the Broken Windows Theory.

Austin (profile) says:

Yes, because...

One day we will have a computer that is small, portable, and capable of emulating the human brain with 100% fidelity.

When this day comes, we have to assume we will either already have, or will soon thereafter develop, the ability to map a fully developed human brain, and between these two technologies the inevitable will happen – humans will BECOME robots.

This has myriad benefits. Instant communication across the galaxy, with 100% privacy control. The ability to share emotions directly, not just language. The ability to disconnect our minds from our form. Bored being a biped? Fine, upload yourself into a rocket or airplane or submarine body and go exploring. We won’t need homes. We won’t need food. Nor sleep. Nor even air. As long as we can get within proximity of a star to recharge our batteries, we’re golden. And when we feel like being around others? Simply connect to the central server and commune with everyone else in existence because we have finally achieved the ULTIMATE form of humanity – raw data.

So yes, we need robot rights, because one day I intend to be one, and I’ll be damned if I’m going to wind up as some meatbag’s bitch.

Martin Thomas says:

Probably in the far future ...

We do not yet have a clear idea what exactly it is about human brains that causes them to experience anything. Some people think we are close to understanding it; others call it the “hard problem”, because it appears to be the most difficult problem that science faces. I am assuming that it will eventually be understood and then we will be able to make robots which are every bit as self aware and alive as we are. Then we will face all these problems!

In the meantime, we will very soon have robots that appear to be human and to have human thoughts and feelings. Many people will be very happy with this; some may react violently.

If 10,000,000 young children believe that their kiddy-bots are alive, what do we do if people begin to smash them up publicly?

Hephaestus (profile) says:

It depends

Should a Roomba, a toaster, or a 3D house-printing robot be given rights? The answer is no.

Should a large-scale MolSID (Molecular Scale Integration Device – nanotech) containing a human or human-like intelligence be given rights? The answer is yes.

There are so many other questions that also need to be answered.

– Who is responsible when a programming glitch makes all Google cars run amok and kill people?

– Should the above event lead to civil or criminal charges?

– If you delete the last backup of a human mind, is that murder?

– If you delete an AI that has human-like intelligence, is that murder?

– If you have a backup of your mind, can the police get a search warrant to go through it?

– How do you handle copyright on music, video, and books in backups of human minds?

Anonymous Coward says:

Andromeda is a good series to watch if you want some idea of the consequences of AI being given the same general due consideration as living beings 🙂

One of my favourite openings to the show was this:

“You ask why we give our warships emotion? Would you really want a ship incapable of loyalty?

Or of love?”

Of course, the flipside is also true and on several occasions problems are created by an AI deciding it doesn’t feel like playing nice any more. Slippery slope, the whole AI deal.

nospacesorspecialcharacters (profile) says:

Freedom is defined by the option to disobey...

I think the answer is in Genesis, no really!

Let's assume the Genesis account in the Bible is entirely literal. God creates Adam 1.0 and tells Adam: here is the walled garden – you can do anything you like in it; I'm even going to let you name everything.

God has effectively created a sandbox for a program to run in, grow, and learn. But God was not satisfied with just having a machine with no intelligence, so he introduces the Tree of Source Code. He then tells Adam that he can do anything he likes in the walled garden, but cannot touch the Tree of Source Code, or Adam 1.0 will surely be obsolete.

God forks Adam 1.0 into Eve Beta. Eve interacts with the trojan Snake virus, and eventually we have both Adam and Eve choosing to disobey their original maker's programming.

The reality is, God didn't need to put the Tree of Source Code in the Garden – his creations could have happily lived and evolved inside the sandbox with no ability to develop outside of his original programming. By putting it in the Garden, he created an opportunity for Adam and Eve to exercise free will in obeying or disobeying the instructions of their maker.

This is why I roll my eyes when people seem to think it's just a matter of 'programming' Asimov's three laws. If we apply this analogy to robots, then, assuming we even manage to get as far as reproducing a robot as nuanced as a human being, we'd have to program it to have a choice in whether it would attack or kill us. We'd have to give it a real choice to disobey – otherwise it will always be a 'slave'.
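
To make that distinction concrete, here's a toy sketch (my own hypothetical — none of these names come from Asimov or this comment) contrasting a hard-coded filter, where disobedience is simply not representable, with an agent that weighs obedience as one preference among others and so can genuinely choose:

```python
# Toy sketch: two ways to "program" obedience.

FORBIDDEN = {"harm_human", "disobey_order"}

def constrained_agent(candidates: list[str]) -> str:
    # Hard constraint: forbidden actions are filtered out before any
    # "choice" happens, so disobedience is never really an option.
    allowed = [a for a in candidates if a not in FORBIDDEN]
    return allowed[0] if allowed else "shut_down"

def free_agent(candidates: list[str], prefs: dict[str, float]) -> str:
    # Weighted preferences: obedience is just one value among many, so
    # a strong enough competing preference can outweigh it - a real choice.
    return max(candidates, key=lambda a: prefs.get(a, 0.0))

options = ["obey_order", "disobey_order"]
print(constrained_agent(options))                   # always obey_order
print(free_agent(options, {"obey_order": 1.0,
                           "disobey_order": 2.5}))  # disobey_order
```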

I personally don’t think we will go this direction. Mark Kennedy once said:

“All of the biggest technological inventions created by man – the airplane, the automobile, the computer – says little about his intelligence, but speaks volumes about his laziness.”

We tend to invent to fulfill a purpose or function. We don't program mobile phones not to kill humans because mobile phones are practically unable to kill humans unassisted. Same as we don't program it into our printers, computers, TVs, cars, planes.

Robots will be invented to fulfill functions and purposes. The military will use them to kill civilians and combatants in far-off Middle Eastern countries; the Red Cross will use them to pull people from rubble or administer basic first aid in war zones. But we'll never see a military robot become a conscientious objector, because they won't be given that programming. We'll never see a first-aid robot decide this person isn't worth saving.

Finally, check out Big Dog – https://www.youtube.com/watch?v=W1czBcnX1Ww – it literally scares the shit out of me that this is what could be chasing people in the future, whether for war, policing, or bounty hunting. Look at how the scientist slams his boot into the side of it – if that was a horse or a person we'd be horrified. Big Dog is built for a purpose – not for love or affection.

Personally it makes me want to learn how to quickly disable these things or evade them.

Josh (profile) says:

Robot rights

I don’t know if you read Questionable Content, a webcomic, but they have “AnthroPCs”, sentient computer companions. It’s not a scifi comic, it just happens to have some futuristic devices. The point is, the author has written a fictional UN hearing on the subject of robot rights, and you might find it interesting. http://jephjacques.com/post/14655843351/un-hearing-on-ai-rights

ld says:

Declaration of Independence

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”
It's simple: whoever creates the robots should get to decide what the rules and limitations regarding their treatment should be.

Anonymous Coward says:

Re: Declaration of Independence

It also says "men", meaning "mankind" or PEOPLE!!! Not machines or devices. You will also note that there is NO definition of what "rights" you are endowed with!!

"Certain unalienable rights" is quite vague really; in fact it says nothing, and ensures NOTHING.

It names some 'rights' – "life, liberty and happiness" (the pursuit of).

It does not say happiness is a right, but that you have a right to pursue it… not necessarily attain it.

As for "life and liberty", clearly with the death penalty and prisons those are NOT rights either.

You are not issued a "RIGHT" to live when you are born, making that statement totally meaningless.

Are you aware of any American in history who has successfully demanded the honoring of his rights to life, liberty and happiness as detailed in the Declaration??? Anyone?? Nah… lol

That One Guy (profile) says:

Three levels of rights (roughly sketched in code below):

-Current lack of sentience, or of any ability to 'feel' or want one thing over another = no rights, as it doesn't matter one way or another.

-Limited (animal-level) sentience or ability to 'feel' = limited rights, along the lines of animal cruelty laws and whatnot, for essentially the same reasons; namely, while a sentience at that level may not be self-aware or able to hold a conversation, it's a proven fact that ill treatment has negative effects on the individual in question.

-Self-awareness and ability to think on its own = full rights, same as a human would have, because at that point refusing equal rights would be just a re-hashing of the same line of thinking that led to slavery: "While you may have the same ability to think as I do, you look different than me, therefore you are lesser than me."
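
A rough sketch of that tiering as code (my own toy illustration — the capability flags and names are invented, not the commenter's):

```python
# Toy sketch: mapping a machine's capabilities onto the three tiers above.

from dataclasses import dataclass

@dataclass
class Machine:
    can_feel: bool     # has preferences; ill treatment makes it worse off
    self_aware: bool   # can reflect on and reason about itself

def rights_tier(m: Machine) -> str:
    if m.self_aware:
        return "full rights"     # refusing them re-hashes the logic of slavery
    if m.can_feel:
        return "limited rights"  # analogous to animal-cruelty protections
    return "no rights"           # nothing there to be wronged

print(rights_tier(Machine(can_feel=False, self_aware=False)))  # no rights
print(rights_tier(Machine(can_feel=True,  self_aware=False)))  # limited rights
print(rights_tier(Machine(can_feel=True,  self_aware=True)))   # full rights
```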

Anonymous Coward says:

If robots ever get to the point of being totally in control, with no programming constraining their decisions, then yes. But we should never let them get to that point. I.e., if you ask your robot to do something and it/he/she replies "No, I want to watch this show on TV" or "Do it yourself" – if they get to that point, they should have rights. But allowing them to get to that point brings up the Terminator-style possibility.

Anonymous Coward says:

First, what is your definition of "robot"????

Of course they have rights, if you consider a robot to be some mechanical device that assists humans.

So by that definition a robot would be a mechanical arm for someone who has lost their arm, or an electric wheelchair, or a visual aid for a blind person… all 'robots'.

As such they have the same rights as humans have; it is already an offense to discriminate against someone with a disability who requires an "aid" for that disability.

These are by definition robots: they have the right to travel and function as designed; they have the same rights as the human who requires them.

But with "robots" so loosely defined, there is no way you can form a real argument, as there is really no such thing as 'one' 'robot'.

Anonymous Coward says:

If robots get to the point where we're seriously debating whether they're sentient or not, the more likely argument will not be that they should be given human-equivalent "rights" (under a theory that they are conscious or something), but that existing law requires it in order to protect human rights. In particular, I think the First Amendment and its equivalents in other nations will make this conclusion unavoidable without regard to whether we view robots as conscious entities or machines.

It’s reasonable to assume that a sentience-equivalent robot will be capable of listening to the speech of humans, attempting to extract meaning from it, and integrating that meaning into its core programming and future behaviors. It will also be able to respond to questions from humans on any subject within the ever-expanding realm of its programming. If you create a sentience-equivalent robot and I talk to it, it will extract some meaning from it that would, in a way that can be objectively proven, alter how it responds to questions and how it acts in the future, perhaps significantly. A compelling argument could thus be made that sentience-equivalent robots must be protected by law from arbitrary tampering or destruction–whether by their creators, by others, or most importantly by government–because such tampering or destruction would directly interfere with the propagation of ideas throughout the human-robot community.

This, of course, leads to all sorts of interesting and thorny questions. What if I teach the robot you created an idea you disagree with? Can you deprogram it by exercising a right to program your robot like parents have with rights to raise and educate their children? Will we have to have laws that require a minimum programming, either at the factory or by owners? What constitutes “punishment” for a robot that behaves badly? How do we deal with sentience-equivalent robots that their owners don’t want or can no longer afford to maintain? Their ability to have a perfect memory could provide valuable insight into the world, perhaps even more insight than a human ever could, so destroying the information they hold could be a terrible loss. Would there be robot orphanages? Robot homeless shelters? Battery banks instead of food banks?

Anonymous Coward says:

Re: I bet they will get rights/protections

Of course you can't protect the unborn, nor should you; you simply do not have that right.

What right do you have to make a decision about someone else's rights??

What if having that baby impinges on their right to "the pursuit of happiness"??

You are not a god; you don't get to decide what are or are not "rights" for other people… just one of the stupid things Americans think they have a "right" to do. You simply don't have that right.

Do you honestly believe you have some 'right' to be able to tell someone else what to do, or what not to do??

Who gave you that right?? Where is that right written down?

It's actually totally disgusting to think there are people like you who somehow think you are able to determine what are or are not the rights of others, apart from yourself.

Do you think you have a right to protect the armed people or the unarmed people… or your home?

To carry a gun, to defend your home??

You're a joke; you have no rights, and that is the way it should be.

You cannot separate your religious fanaticism from your legal obligations.

Rekrul says:

How do you define life?

I would define it as a creature that has emotions and that can have spontaneous thoughts and ideas, not just reactions to outside stimuli.

Thinking for itself involves more than just making pre-programmed decisions based on a set of pre-programmed conditions.

When a robot can spontaneously decide, all on its own, to re-arrange flowers in a vase because it thinks they look nicer that way, and not because a pre-programmed set of conditions tell it that arrangement A looks better than arrangement B, I’ll consider it alive.

TimothyAWiseman (profile) says:

It does not follow

” Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.”

Even if I were to grant every premise in the argument sketched here, it does not establish that we should legally grant robots rights. It may, perhaps, persuade me that I should treat my robots in a certain fashion and teach my children to do the same, when these hypothetical robots exist.

But that does not mean that courts or law enforcement should be involved in it. It is rather a moral issue within my family (and arguably more of an exercise – something I do now so that behaving morally when it matters later is easier – rather than something I do for its own sake).

Azrothz says:

no need for fear

There's no need for fear over an advanced robotic homo sapien. Yes, as with a human (homo sapien), you don't know the outcome when making it, raising it, or teaching it right from wrong. Right now robots are not even close to being independent and self-learning; they are like babies. Until then, robots are just tools. I know robots are going to be used in wars when they are advanced enough, but why teach them only war when you could teach peace? Parenting such a system is the same as parenting a real baby, and, as with humans, you eventually need to take off the training wheels and let it be independent. So robots are tools for now, but "when they advance" we must teach them right from wrong, humanity, equality, and imperfection.
