Forget Asimov's three laws of robotics. These days, the real questions are about what human laws robots may need to follow. Michael Scott points us to an interesting, if highly speculative, article on the legal issues raised by robots, asking whether a whole new arena of law will need to be developed to handle liability for actions taken by robots. Who would be liable? Those who built the robot? Those who programmed it? Those who operated it? Others? The robot itself? While the article seems to go a little overboard at times (claiming that there's a problem if teens program a robot to do something bad, since teens are "judgment proof" due to a lack of money -- which hardly stops liability claims against teens in other suits), it does make some important points.
Key among those is the point that if liability falls too heavily on the companies doing the innovating in the US, the industry could simply develop elsewhere. As a parallel, the article brings up the Section 230 safe harbors of the CDA, which famously protect service providers from liability for the actions of their users -- noting that this is part of why so many more internet businesses have been built in the US than elsewhere (there are other factors too, but such liability protections certainly help). So, what would a "Section 230"-like liability safe harbor look like for robots?