from the be-kind-to-skynet dept
Computerworld has a story questioning whether robots, as they become increasingly life-like and ubiquitous in our lives, will attain the kinds of rights we afford animals.
Imagine that Apple will develop a walking, smiling and talking version of your iPhone. It has arms and legs. Its eye cameras recognize you. It will drive your car (and engage in Bullitt-like races with Google’s driverless car), do your grocery shopping, fix dinner and discuss the day’s news.
But will Apple or a proxy group acting on behalf of the robot industry go further? Much further. Will it argue that these cognitive or social robots deserve rights of their own not unlike the protections extended to pets?

If you're like me, your gut reaction may have been something along the lines of: of course not, idiot. But the article actually raised some interesting questions, based on a paper by MIT researcher Kate Darling.
The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.

Now, this, to me, makes a bit of sense save for one detail. Yes, our values are reflected in the way we treat some animals, but there seems to be a vast difference between organic life and cognitive devices. Robots, after all, are not life, or at least not organic life. They are simulations of life. This is, of course, where the rabbit hole begins to deepen, as we have to confront some tough philosophical questions. How do you define life? If at some level we're all just different forms of energy, is the capacity to think and reason enough to warrant protection from harm? Can a robot be a friend, in the traditional sense of the word?
But, putting those questions aside for a moment and assuming robots do attain some form of rights and protection in the future, this little tidbit from the article made me raise my eyebrows.
Apple will patent every little nuance the robot is capable of. We know this from its patent lawsuits. If the robot has eyebrows, Apple may file a patent claiming rights to “a robotic device that can raise an eyebrow as a method for expressing skepticism.”

Here's where we may find commonality with our metallic brethren. With the expanded allowance for patenting genes, it becomes all the more likely that the same codes that manufacture our humanity could indeed be patented in the way that a robot's manufactured "humanity" would be. If robotics progresses to produce something along the lines of EDI, the very things that make her "human" enough to be worthy of rights will be locked up in an increasingly complicated patent system. And, with our courts falling on the side of gene patents for humans, we've virtually ensured that all of that robotic humanity will indeed be patentable.
On the other hand, what happens if future courts rule that human genes cannot be patented? And what happens if we do indeed define some kind of rights structure for our robotic "friends"? Would those rights open up the possibility that robotic "genes" should not be patentable either?