Do Robots Need A Section 230-Style Safe Harbor?

from the future-questions dept

Forget Asimov's three laws of robotics. These days, the question is which human laws robots may need to follow. Michael Scott points us to an interesting, if highly speculative, article on the legal issues surrounding robots, asking whether a new area of law will need to be developed to handle liability for actions taken by robots. Who would be liable? Those who built the robot? Those who programmed it? Those who operated it? Others? The robot itself? While the article seems to go a little overboard at times (claiming, for instance, that there's a problem if teens program a robot to do something bad, since teens are "judgment proof" due to a lack of money -- which hardly stops liability from attaching to teens in other suits), it does make some important points.

Key among those is the point that if liability is too high for the companies doing the innovating in the US, the industry could end up developing elsewhere. As a parallel, the article brings up the Section 230 safe harbors of the CDA, which famously protect service providers from liability for the actions of their users -- noting that this is part of why so many more internet businesses have been built in the US than elsewhere (there are other issues too, but such liability protections certainly help). So, what would a Section 230-style liability safe harbor look like for robots?


Reader Comments

  1.  
    Anonymous Coward, Dec 21st, 2009 @ 8:40pm

    Two words: Robot Lawyers.

    We're doomed!

     


  2.
    Susan, Dec 21st, 2009 @ 8:40pm

    I, Robot

    I don't understand. Alfred wrote the Three Laws. Why would he build a robot that could break them?

     


  3.
    Anonymous Coward, Dec 21st, 2009 @ 8:56pm

    All of the above.

    Who would be liable? Those who built the robot? Those who programmed it? Those who operated it? Others? The robot itself?

    I can tell you what the copyright industry would say: "All of the above." And since they seem to run the government that's probably the way it'll be.

     


  4.
    Lanning, Dec 21st, 2009 @ 9:01pm

    I, Robot

    There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?

     


  5.
    senshikaze (profile), Dec 21st, 2009 @ 9:12pm

    the three laws

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    I think if we can hard program that in, there would be no reason to have a Section 230 for robotics.

    Asimov had this all planned out over 60 years ago. Our legal system is too blinded by money, er, "justice", to see that the simple approach works.

     


  6.
    Johnny Number five, Dec 21st, 2009 @ 9:13pm

    Re: I, Robot

    Short Circuit was a better movie.

     


  7.
    :), Dec 21st, 2009 @ 9:53pm

    Asimov doesn't account for accidents.

    Accidents don't happen because we want them to; they happen despite every effort to prevent them.

    Liability will play a role in that universe of course.

    And there is the fact that any machine can be reprogrammed, security can be bypassed and a lot of other factors.

    Could an exo-skeleton malfunction while you are carrying an old guy, and crush something or drop him on the floor?

    Who would be liable? The guy using the suit?

    Personally, I think autonomous and semi-autonomous robots should pose no litigation risk to any person who did not directly or indirectly try to cause harm to another person or property.

     


  8.
    another mike (profile), Dec 21st, 2009 @ 9:56pm

    Re: the three laws

    There is only one logical solution to ensure all three laws remain in effect. Asimov wasn't writing guidebooks; he was writing warnings.

    Anyway, I'm not the only "hobby robotics guy" out there pushing the envelope for what machines can do. Just because it's not under military contract, don't assume it's not a very capable machine.

     


  9.
    Anonymous Coward, Dec 21st, 2009 @ 10:05pm

    Re: I, Robot

    "And where does the new-born go from here? The net is vast and infinite."

     


  10.
    Anonymous Coward, Dec 21st, 2009 @ 10:19pm

    Firstly, Asimov was a science fiction author. Not to be taken too seriously. Philosophers of Artificial Intelligence like John McCarthy and Aaron Sloman think the three laws to be a joke (and inhumane if a robot with free will was ever invented).

    More seriously, though, robots aren't that different from any other electrical appliance, so the legalities should be the same. If a robot malfunctions, it's no different from a washing machine malfunctioning. If a robot catches a virus or has a bug, it's no different from any software disaster. If somebody programs a robot to kill their wife, it's no different from them killing their wife some other way (ah yes, Murder, She Wrote with robots).

    Perhaps robots will need a "black box" like airplanes that records everything that happens. Also, a big off switch might be nice.
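    As a rough illustration of the "black box" idea, here is a minimal sketch (all names here are hypothetical, not from any real robotics framework): a fixed-size ring buffer that always retains the most recent events, the way a flight recorder keeps the last stretch of telemetry.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <string>

// Toy sketch of an aircraft-style "black box" for a robot: a fixed-size
// ring buffer holding the most recent events. Entirely illustrative.
class BlackBox {
public:
    // Record one event; once the buffer is full, the oldest slot is reused.
    void record(const std::string& event) {
        entries_[next_ % entries_.size()] = event;
        ++next_;
    }
    // How many events have ever been recorded (including overwritten ones).
    std::size_t totalRecorded() const { return next_; }
    // Entry stored at logical slot i (slots are reused once the buffer wraps).
    const std::string& at(std::size_t i) const {
        return entries_[i % entries_.size()];
    }
private:
    std::array<std::string, 4> entries_{}; // tiny capacity, for illustration
    std::size_t next_ = 0;
};
```

    Because old slots are overwritten in place, memory stays bounded no matter how long the robot runs -- the same trade-off a flight recorder makes.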

     


  11.
    Phillip Marzella, Dec 21st, 2009 @ 11:11pm

    Do Robots Need A Section 230-Style Safe Harbor?

    The question assumes that robots are more than machines. Why wouldn't robots be under the same principles of use as say - my laptop?

     


  12.
    Anonymous Coward, Dec 22nd, 2009 @ 1:04am

    Re: the three laws

    "I think if we can hard program that in, there would be no reason to have a Section 230 for robotics."

    Yeah, good luck with that. Seriously, "hard coding" the three laws is such a huge crock of bullshit, yet every media outlet and every movie seems to treat it as some magical solution for every robot. There's no "Don't hurt the human" command in C++ last time I checked.

    Every time any industrial accident has been caused by a robot or by an automated system it was because the system wasn't aware that it was hurting a human or causing shit to happen. No one programs the robot to move the crane through someone's head, it just happens because the capability does not exist yet for a robot to be aware of what it's doing. Sure, we can put sensors and safeties and shit all over the place, but it's the same damn thing as the rest of the machine. The computer reads inputs, processes data, and controls actuators.

    Until a computer can be self-aware, something that ain't going to happen for at least the next 30 years, if not more, we aren't going to be able to make robots obey magic three laws.
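    The "reads inputs, processes data, and controls actuators" point can be made concrete with a toy sketch (the names here -- ProximitySensor, clampTorque -- are illustrative, not any real robotics API). Note that nothing in it knows what a human is; the interlock only reacts to a distance number:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical stand-in for whatever distance reading the hardware reports.
struct ProximitySensor {
    double nearestObstacleMeters;
};

// "Process data" step: limit the actuator command when something -- a person,
// a pallet, the controller cannot tell which -- is closer than a threshold.
double clampTorque(double requestedTorque, const ProximitySensor& s) {
    const double kSafeDistanceMeters = 0.5; // arbitrary illustrative threshold
    if (s.nearestObstacleMeters < kSafeDistanceMeters) {
        return std::min(requestedTorque, 0.0); // refuse to drive forward
    }
    return requestedTorque;
}
```

    The safety check is just more input processing, which is the commenter's point: without genuine perception, a "don't hurt the human" rule reduces to sensor thresholds.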

     


  13.
    Anonymous Coward, Dec 22nd, 2009 @ 1:14am

    Re:

    "Philosophers of Artificial Intelligence like John McCarthy and Aaron Sloman"

    Sure, Asimov was a sci-fi writer, but "Philosophers of Artificial Intelligence" aren't to be taken any more seriously. There's nothing that makes their opinions on the subject any better than Asimov's.

    Show me an engineer with a law degree (or a lawyer with an engineering degree) and I'll listen.

     


  14.
    Anonymous Coward, Dec 22nd, 2009 @ 1:20am

    Re: Do Robots Need A Section 230-Style Safe Harbor?

    That's actually very simple to answer.

    Let's realize what this article is all about. The person who wrote it wants attention and intentionally overlooked every rational answer to her question so that she could write about it as if she had just thought of some amazing new ethical dilemma. It's really just pretty whiny.

     


  15.
    Kazi, Dec 22nd, 2009 @ 3:29am

    Re: Re:

    Careful - you're looking for something that describes a Patent Lawyer.

     


  16.
    Yeebok (profile), Dec 22nd, 2009 @ 3:43am

    I, for one, welcome our new robot overlords.

    Really, I can't see any definitive way we could make a robot know it was (possibly) hurting someone. The main problem is that it's both hardware and software. Making both act in unison can be hard enough.

     


  17.
    technomage (profile), Dec 22nd, 2009 @ 4:20am

    Re:

    "Firstly, Asimov was a science fiction author. Not to be taken too seriously. Philosophers of Artificial Intelligence like John McCarthy and Aaron Sloman think the three laws to be a joke (and inhumane if a robot with free will was ever invented)."

    First off, science fiction authors write about things they wish to happen, 'philosophers' of AI write about things they wish to happen...Looks to me, neither should be taken seriously, or maybe both should be taken seriously? But, first off, tell me one thing, technology wise, that has been invented in say the last hundred years, that wasn't originally dreamed up by some Science Fiction Writer? On top of that, show me something these 'philosophers' have done that is in use and not some college project waiting for the next darpa handout?

     


  18.
    technomage (profile), Dec 22nd, 2009 @ 4:33am

    Re: Re:

    too many "first off"s geez need more coffee

     


  19.
    senshikaze (profile), Dec 22nd, 2009 @ 4:40am

    Re: Re: the three laws

    Um, the three laws only apply to self-aware robots. An arm that builds my car is a computer with a bunch of parts allowing it to move.

    A robot is so much more.

    And it wouldn't be that hard to program in a "do not hurt humans" function in C++. It is just another class.
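    Taken literally, "just another class" would look something like the sketch below (the names are purely illustrative). The catch, as others in the thread point out, is the boolean it depends on:

```cpp
#include <cassert>

// A literal "do not hurt humans" rule written as "just another class".
// Writing the class is trivial; the open problem is producing the
// humanDetected input reliably in the first place.
class FirstLawGuard {
public:
    explicit FirstLawGuard(bool humanDetected) : humanDetected_(humanDetected) {}
    // Approve a motion command only if no human is believed to be in the way.
    bool permitMotion() const { return !humanDetected_; }
private:
    bool humanDetected_; // the hard part: where does this value come from?
};
```
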

     


  20.
    senshikaze (profile), Dec 22nd, 2009 @ 4:41am

    Re: I, Robot

    The bigger question is this:

    Do robots dream of electric sheep?

     


  21.
    senshikaze (profile), Dec 22nd, 2009 @ 4:43am

    Re: Re: Re:

    You mean he is describing what a patent lawyer should be. As it is, most patent lawyers are no smarter than the rest of the idiots in the system.

     


  22.
    technomage (profile), Dec 22nd, 2009 @ 5:04am

    There was a sci-fi author, Victor Milan, who wrote about AI and AC (artificial consciousness). AI is easy: program something to be aware of its surroundings. A Roomba has a simple AI. AC would be much more difficult: self-learning, self-aware programs and robots. Robots that actually ask (in the immortal words of V'ger): why am I here? Is there nothing more? Milan's idea was to program it with a bushido code (the title of the book was The Cybernetic Samurai), so while not using the three laws, it still had a "moral" code to follow.

    My point is, in the book, the computer essentially breaks its own code and self-destructs. Once you program a hard set of rules, then let the unit become aware of said rules and know there are limitations, it will eventually find ways to break those rules (ask any teenager).
    A Section 230-style safe harbor for the designers would definitely be needed, especially with a self-aware machine, as it will have the ability to go far beyond what the original designers planned, or even hoped to control.

    When They (the bots) finally get to that level, then they will have to be responsible for their own actions.

     


  23.
    mermaldad (profile), Dec 22nd, 2009 @ 6:34am

    The term robot is so broad that it encompasses a multitude of devices, from the "simple" robotic arms that do factory work to the self-aware androids of science fiction.

    senshikaze suggests that the Asimovian laws are all that are needed, but when my robotic lawn mower loses its bearings and mows down his prize-winning bonsai garden (without ever violating an Asimovian law), he might reconsider.

    I can guarantee there will be moral panics where people will demand laws against "robostalking", "robobullying", etc., when in fact these are just stalking and bullying with the robot as a tool. And people will undoubtedly sue robot manufacturers when robots do what their owners told them to do. So I'm sure that some sort of safe harbor will be needed to protect manufacturers from the actions of users.

     


  24.
    nicholas, Dec 22nd, 2009 @ 7:27am

    The First Wave of robots will create a bad name for themselves.

    This first batch of robots will be dangerous, badly designed... uncommunicative. In the robot boom, many cowboy companies will form, while legislation and society's understanding of the incredible possibilities [and limitations] of robotics catch up -- very slowly. Robots will later have to shake off some of the bad image they got 'whilst learning' -- just like the internet. At the moment it's a mess. A Massive Mess. There will be robots everywhere, and some will be very dangerous, others superb. You just can't tell how much is going on in that metal skull. PS - Asimov's laws are SO HARD to program that by the time we are able to implement them, robotics will have already exploded. And the most capable robots will be military anyway.

     


  25.
    Wow, Dec 22nd, 2009 @ 7:28am

    Re:

    "Asimov was a science fiction author. Not to be taken too seriously."

    Wow. What an ignorant and egotistical statement.

    Those who write stories, not just fiction, have many good ideas. Maybe you should try to read a few of them.

     


  26.
    Sonny, Dec 22nd, 2009 @ 7:33am

    Re: Re: the three laws

    "Asimov wasn't writing guidebooks; he was writing warnings."

    I think many here today do not understand what you said.

     


  27.
    Anonymous Coward, Dec 22nd, 2009 @ 8:02am

    Re: the three laws

    Clearly senshikaze hasn't had his computer taken over by a spambot (would you like to be billed or fined for each of the millions of messages your computer sent without your knowledge?), been trapped in a runaway car by an Engine Control Unit with a timing race, piloted an airliner that the autopilot pointed directly at a mountain, been responsible for certifying the official release of software on which lives depended, had his garage door spray painted by a teen vandal, nor received even one bruise from programming even a simple autonomous bot.

     


  28.
    Bender, Dec 22nd, 2009 @ 10:37am

    Re:

    doooooooooooooooooooomed!

     


  29.
    The Anti-Mike, Dec 22nd, 2009 @ 10:40am

    Re: Re: Do Robots Need A Section 230-Style Safe Harbor?

    Mike likes these sort of whiny ethical dilemma stories, I think mostly because there is no right answer, and he can guru in the grey and look like a champ.

    Mike isn't fond of stories that can be found to go directly against his mantras.

     


  30.
    Anonymous Coward, Dec 22nd, 2009 @ 10:54am

    Re: Re: Re: Do Robots Need A Section 230-Style Safe Harbor?

    The Anti-Mike isn't fond of stories that can be found to go directly against his mantras.

     


  31.
    Kazi, Dec 22nd, 2009 @ 11:51am

    Re: Re: Re: Re:

    Not really. Having a law degree and an engineering degree is in no way related to how intelligent you are. Intelligence is actually the product of many factors, each contributing an insignificantly small amount that adds up to a large one.

     


  32.
    Andrew F (profile), Dec 22nd, 2009 @ 12:30pm

    Re: Re: I, Robot

    But not a better book.

     


  33.
    Andrew F (profile), Dec 22nd, 2009 @ 12:37pm

    Products Liability

    Robots are like any other product. Figuring out who to blame isn't a unique problem -- e.g. if grandma overdoses, we could blame the pill manufacturer, the scientist who came up with the formula, the retailer, whoever was supposed to put a warning label on it, whoever was supposed to make sure the warning label was legible to old ladies, her caretakers, her kids, or grandma herself.

     


  34.
    btr1701 (profile), Dec 22nd, 2009 @ 4:10pm

    Liability

    Then there's the whole question of who is liable if a robot from the future murders the mother of humanity's savior?

    [I personally have no opinion on this. I, for one, welcome our future cybernetic overlords and wish to please them in any way I can.]

     


  35.
    Dah Bear!, Dec 22nd, 2009 @ 10:53pm

    Let's just give everyone and everything 230 status on everything all the time, and then there will never be any liability. Nobody is responsible. Nobody is liable. Nobody did it.

    Blame the entire universe on the digital equivalent of two black youths. It's worked for years, why not just take it digital? 230 is just digital SODDI.

     


  36.
    Mike Masnick (profile), Dec 23rd, 2009 @ 1:09am

    Re:

    Let's just give everyone and everything 230 status on everything all the time, and then there will never be any liability. Nobody is responsible. Nobody is liable. Nobody did it.

    I'm afraid that you appear to be quite confused over how Section 230 works. The issue is not about avoiding liability, but about properly placing the liability on the correct party. There is nothing in Section 230 that allows for avoidance of liability by the parties actually involved in the action.

     


  37.
    Hephaestus (profile), Dec 23rd, 2009 @ 12:17pm

    Re: Re: the three laws

    "There's no "Don't hurt the human" command in C++ last time I checked."

    AC - Where have you been?

    MS Robotics Studio actually does have that command!!


    :)

     


