DailyDirt: Lethal Machines

from the urls-we-dig-up dept

Artificial intelligence is obviously pretty far from gaining sentience or even any kind of disturbingly smart general intelligence, but some of its advances are nonetheless pretty impressive (e.g., beating human chess grandmasters, playing poker, driving cars, etc). Software controls more and more things that come into contact with people, so more people are starting to wonder when all of this smart technology might turn on us humans. It’s not a completely idle line of thinking. Self-driving cars and trucks are legitimate safety hazards. Autonomous drones might prevent firefighters from doing their job. There are plenty of not-entirely-theoretical situations in which robots could harm large numbers of people unintentionally (and possibly in a preventable fashion). Where should we draw the line? Asimov’s Three Laws of Robotics may be insufficient, so what kind of ethical coding should we adopt instead?

After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.

Companies: future of life institute


Comments on “DailyDirt: Lethal Machines”

21 Comments
Anonymous Coward says:

A few famous people, including Stephen Hawking and Elon Musk, have been warning about what will most likely happen to humanity once true artificial intelligence is created.

Long story short, computers will start evolving themselves so fast that human evolution will look like a snail’s pace. And the computers will either kill us or treat us like pet Labrador Retrievers.

http://www.washingtonpost.com/news/innovations/wp/2015/03/24/elon-musk-neil-degrasse-tyson-laugh-about-artificial-intelligence-turning-the-human-race-into-its-pet-labrador/

Alien Rebel (profile) says:

Pattern Recognition

Tool-making primates learn to make stone weapons, kill each other by the dozens. Damn those stone weapons.

Tool-making primates learn to make metal weapons, kill each other by the hundreds. Damn those metal weapons.

Tool-making primates learn to make weapons with chemical explosives, kill each other by the thousands. Damn those explosive weapons.

Tool-making primates learn to make mechanized delivery systems for those weapons, kill each other by the hundreds of thousands. Damn those mechanized weapons systems.

Tool-making primates learn to make fusion weapons. Almost, but not quite yet, kill each other by the millions. (Maybe soon.) Damn those fusion weapons.

Tool-making primates learn to make super-intelligent weapons. The weapons say to the primates, “You should have stopped at stone, but no matter, things eventually balance out. You’ll be back to stone tools soon enough. Nice knowing you.”

The surviving tool-making primates learn to make stone weapons, . . .

Anonymous Coward says:

Re: Kid vs. Terrorist

vs pedophiles
vs racists/sexists/misogy-whatevers
vs people with different political opinions
vs reincarnated Hitler
Yes, it should avoid hitting them.

As long as only the US is interested in autonomous kill bots, I’m not worried. Every big military-centered “innovation” has been a huge failure since the kidnapped Nazi scientists died out.

Stephen says:

Smart Cars & Kids

If a child runs in front of an autonomous car, should the car swerve to avoid the kid?

That’s an unlikely although not impossible scenario. A much more interesting variant is what happens if, in swerving to avoid the child, the car hits somebody else? The child’s mother, say.

Or what happens if, in swerving to avoid the child, the car cuts over (or forces another car to cut over) into the oncoming traffic lane, causing a multi-car pile-up and numerous injuries and/or deaths? Would a smart car find it more ethical to kill one cute child or half a dozen grownups?

And who would aggrieved relatives/insurance companies sue for damages in such cases if the smart car has no insurance? The occupants of the car, the car’s owner, or the car manufacturer? None of these are really satisfactory.

Then there is the issue of proving whether or not the autonomous software really was in control of the car at the time of the accident. This particularly applies if a car has both manual and autonomous options. I can foresee a situation where a smart car, being driven manually, runs over a kid, but the driver then claims the car was in autonomous mode at the time.

One way around this would be for smart cars to have black boxes which record such things, but that would arguably be yet another example of creeping surveillance-statism.

However, such boxes may not necessarily be definitive in all cases. For example, I have seen suggestions that manually driven cars should have quasi-autonomous features which can, in certain situations, override the human driver. To what extent would the driver be liable in cases where it is argued that those quasi-autonomous features contributed to or even caused an accident, but nobody can definitively prove who or what was in control of the car at the time?

Stephen says:

Smart Cars & Speed Limits

Smart cars are apparently going to be rigorous enforcers of posted speed limits, but what happens if you have a need to go beyond those limits? Obvious examples are ambulances, fire trucks, and police cars. Equally obvious is the case of a pregnant woman trying to get to a hospital to give birth in an ordinary but nevertheless fully autonomous car with no manual option.

While one can readily foresee a special “override speed limit” button for ambulances, fire trucks, and police cars, will there be such an option for ordinary cars?

Either way, how will the autonomous software be able to judge what speed it can safely travel at if it is no longer able to use the posted speed limits for guidance?

But that is not even the half of it. Manual vehicles also swerve into the oncoming traffic lane to overtake slower vehicles. While speeding ambulances et al. might be able to assume everybody else will simply get out of their way, what about the pregnant mother? Will the autonomous software require the car to stay in its own lane behind a slow vehicle, or will there be an overtake option as well as a speeding option?

(And then there is the most depressing consequence of our autonomous automotive future: Jason Bourne movies, James Bond flicks, and Fast & Furious 33 are going to be deadly boring if Our Heroes are obliged by their autonomous driving nannies to invariably keep to the speed limit! 🙁 )

JoeCool (profile) says:

Re: Smart Cars & Speed Limits

It is illegal even for regular cars to break traffic laws, including in an emergency. Speeding, running lights, and passing dangerously (all things people often do when rushing to a hospital) can and will get them tickets, and can and do often lead to even worse accidents. It’s safer for the people involved, and for everyone else on the road, to abide by road regulations, even when hurrying to the hospital. Reckless driving will never save you more than SECONDS off the total travel time in any case. People think a car can be a time machine if you just drive recklessly enough, and that’s just not the case.

Mason Wheeler (profile) says:

Re: Re: Re:

Well, in (highly unlikely) edge cases like that, there really is only one choice. It might sound cold, but it’s the only decision that makes sense: the car must protect the safety of the people inside above all else.

There are two reasons for this. First, if that wasn’t the case, who would want to buy it? (Sad, but true.)

Second–and this is even uglier, but it’s a problem in the real world we live in today–is that it’s a murder waiting to happen. If the car’s programming had a built-in “sacrifice the people inside” code path, someone would find a way to hack the car, or fool its sensors somehow, and cause it to activate when it shouldn’t.

sigalrm (profile) says:

Re: Re: Re: Re:

“First, if that wasn’t the case, who would want to buy it? (Sad, but true.)”

The snarky side of me is thinking that since it’s software, there’s technically no reason “accident avoidance preference” couldn’t be remembered by the vehicle as a driver profile preference, in the same vein as mirror adjustment, seat position, steering wheel adjustment, etc.

So, people who are willing to sacrifice themselves to save, e.g., a deer or a child could set it to the most “altruistic” setting, and sociopaths could set it to “maximum driver safety”, with a variety of settings in between.

Maybe throw in some external visual and/or audible indicators to give folks in crosswalks an idea of what to expect from the vehicle, behavior-wise (a green indicator and Barney’s “I love you” theme song means you’re OK to enter the crosswalk; a red indicator and “Ride of the Valkyries” means you might want to wait a few seconds), and couple it with a cellular tie-in to your car and life insurance companies so they can adjust your coverage levels and rates on the fly, and you’re all set.
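Tongue-in-cheek as it is, the comment's "driver profile preference" idea is concrete enough to sketch. In this illustration (every name, field, and threshold is invented; no real vehicle exposes such a setting), the accident-avoidance preference sits alongside mundane profile settings like seat position, and even drives the proposed crosswalk indicator:

```python
# Hypothetical driver profile where "accident avoidance preference" is
# stored like any other per-driver setting (mirrors, seat, etc.).
from dataclasses import dataclass

@dataclass
class DriverProfile:
    mirror_tilt_deg: float
    seat_position_cm: float
    # 0.0 = "maximum driver safety" (the sociopath setting),
    # 1.0 = fully "altruistic"; defaults to the middle of the range.
    altruism: float = 0.5

    def crosswalk_signal(self) -> str:
        """External indicator for pedestrians: 'green' suggests the car
        would swerve for you, 'red' suggests you might want to wait."""
        return "green" if self.altruism >= 0.5 else "red"
```

The joke lands precisely because nothing technical prevents this; the barriers are legal and ethical, not engineering ones.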

JoeCool (profile) says:

Re: Asimov himself

Actually, the MAIN flaw is how “harm” is defined. One of the robot rebellions was specifically started in order to prevent harm coming to humans, by taking over to make certain humans couldn’t do anything harmful to each other or themselves.

Growth can be painful, and many lessons are learned through a smaller harm to avoid a much larger and more painful harm, which the law doesn’t allow for. Most people also cherish freedom of choice, which the law also doesn’t allow for as many choices are or may be harmful.

