Dave, We Need To Bomb More, Dave

from the conundrums dept

With all the talk lately about creating fighting robots and other autonomous military equipment, the NY Times is looking at the questions raised by autonomous fighting machines, and whether this eventually leads to the classic science fiction premise where the machines take over. While these fighting machines are still supposed to be under human control, the article points out that the whole purpose of automation is that it shouldn’t need human control, and if the humans who program the machines make mistakes, then the machines are likely to make mistakes as well. Another, more interesting, question is whether people become complacent in the face of computer-generated information even when they do have the final word. If the computer is right most of the time, they tend to trust it over their own instincts, and are likely to miss the small percentage of times where a human is likely to make a better decision than the machine.



Comments on “Dave, We Need To Bomb More, Dave”

dorpus says:

Shift the battlefield

It’s already clear to everyone that opposing the US military on conventional-warfare terms is hopeless. Tactics like mass infiltration of the USA will work better.

I found out over the weekend that kidney beans, an extremely common American food, are a deadly poison when undercooked. One could theoretically commit mass murder by sneaking undercooked beans into a salad bar.

From http://www.foodreference.com/html/artredkidneybeanpoisoning.html:

Red Kidney Bean Poisoning is an illness caused by a toxic agent, Phytohaemagglutinin (Kidney Bean Lectin). This toxic agent is found in many species of beans, but it is in highest concentration in red kidney beans (Phaseolus vulgaris). The unit of toxin measure is the hemagglutinating unit (hau). Raw kidney beans contain from 20,000 to 70,000 hau, while fully cooked beans contain from 200 to 400 hau. White kidney beans, another variety of Phaseolus vulgaris, contain about one-third the amount of toxin as the red variety; broad beans (Vicia faba) contain 5 to 10% the amount that red kidney beans contain.

As few as 4 or 5 beans can bring on symptoms. Symptoms begin within 1 to 3 hours of ingestion. Onset is usually marked by extreme nausea, followed by vomiting, which may be very severe. Diarrhea develops somewhat later (from one to a few hours), and some persons report abdominal pain. Some persons have been hospitalized, but recovery is usually rapid (3 to 4 hours after onset of symptoms) and spontaneous.

The syndrome is usually caused by the ingestion of raw, soaked kidney beans, either alone or in salads or casseroles. As few as four or five raw beans can trigger symptoms. Several outbreaks have been associated with “slow cookers” or crock pots, or with casseroles which had not reached a high enough internal temperature to destroy the glycoprotein lectin. It has been shown that heating to 80 degrees C may potentiate the toxicity five-fold, so that these beans are more toxic than if eaten raw. In studies of casseroles cooked in slow cookers, internal temperatures often did not exceed 75 degrees C.

All persons, regardless of age or gender, appear to be equally susceptible; the severity is related only to the dose ingested.

No major outbreaks have occurred in the U.S. Outbreaks in the U.K. are far more common, and may be attributed to greater use of dried kidney beans in the U.K., or better physician awareness and reporting.

NOTE: The following procedure has been recommended by the PHLS (Public Health Laboratory Services, Colindale, U.K.) to render kidney, and other, beans safe for consumption:
*Soak in water for at least 5 hours.
*Pour away the water.
*Boil briskly in fresh water for at least 10 minutes.
*Note: undercooked beans may be more toxic than raw beans.

alternatives says:

Re: Re: Shift the battlefield

” likely to miss the small percentage of times where a human is likely to make a better decision than the machine. “

What a load of shit Mike … I have yet to see a computer that can think or outperform the human mind … ( unless it’s a Walmart employee that is )

Ok, how about a pox on BOTH your houses?

Making a better or worse decision doesn’t matter if there is no accountability for the actions.

And I bet “friend computer” could do a ‘better job’ than a human, because ‘friend computer’ doesn’t “fear” death, where ‘better job’ means not shooting the non-combatant and being willing to roll into a firefight.

There are plenty of problems with this robot idea. Once they get deployed AND get “taken over” by ‘the enemy’ and kill some Americans, those problems will become obvious.

Mike (profile) says:

Re: Re: Shift the battlefield

What a load of shit Mike … I have yet to see a computer that can think or outperform the human mind … ( unless it’s a Walmart employee that is )

I think you missed my point… I’m talking about situations where the computers are designed to make suggestions based on information. In those cases, they usually do outperform humans, at least in terms of speed. It happens all the time.

Think of it like a chess game. A good computer program will generally suggest a better move than an average chess player. So, say you have a game set up and the player is asking the computer for help on each move. Since the computer GENERALLY gives better advice, the player is going to start relying on that advice, rather than seeing if the move really makes sense. That means, when a move comes up that doesn’t make sense, the player is less likely to scrutinize it.

That was the point… It wasn’t about the relative “abilities” of the human mind vs. a computer — but how people tend to react when they have a computer to help them make decisions, but with human approval.
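To put a rough number on that effect, here’s a toy simulation (purely illustrative; the 95% and 85% accuracy figures are assumptions for the sake of the example, not anything from the article):

import random

# Toy Monte Carlo of the complacency effect described above.
# The accuracy figures below are made-up assumptions for illustration.
random.seed(0)

TRIALS = 100_000
P_MACHINE = 0.95   # assumed chance the machine's recommendation is correct
P_HUMAN = 0.85     # assumed chance the human's own judgment is correct

missed_saves = 0   # cases where the machine is wrong but the human would have been right
for _ in range(TRIALS):
    machine_right = random.random() < P_MACHINE
    human_right = random.random() < P_HUMAN
    if not machine_right and human_right:
        missed_saves += 1

# An operator who rubber-stamps every recommendation scores exactly P_MACHINE;
# the decisions counted here (~4% of the total) are the ones scrutiny could have rescued.
print(f"Machine wrong but human would have been right: {missed_saves / TRIALS:.1%}")

Under these assumptions, blindly following a 95%-accurate advisor means giving up roughly 4% of decisions where the human alone would have done better, which is exactly the “small percentage” the article is worried about.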

Reddog says:

Robot Warriors

Here’s what scares me about this.

1. Let’s assume we reach the point where most of America’s fighting is done by machines. Let’s also assume that less advanced countries won’t reach that state nearly as quickly. Do we then become overly aggressive? (If we’re not already.) After all, the risks WE would incur would be decreased.

2. Couldn’t this increase the odds of terrorist attacks at home? Let’s face it, our future opponents aren’t gonna be satisfied with destroying the machines we send to fight them. And they’re not gonna have as many human targets available on their own soil. Doesn’t that make our homeland the best target?

3. Will our troops be ready to fight if there’s a situation where the machines can’t?

Reddog

alternatives says:

Another data point

Gordon Johnson, who “led robotics efforts at the Pentagon’s Joint Forces Command research center in Suffolk in 2003” exclaims,

“‘They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot,’ Johnson said. ‘Will they do a better job than humans? Yes.'”

The article concludes,

“Money, in fact, may matter more than morals. The Pentagon today owes its soldiers $653 billion in future retirement benefits that it cannot currently pay…”
