I saw an article “Human rights experts, activists push for ban on ‘killer robots’” (read this and this) and although it’s a good subject to consider, it’s sort of a moot point so far as I am concerned. First let’s talk about killer robots, or soldier robots if you will.
Define Killer Robots
First, let me define what I am thinking about when I say killer robots for this post. I imagine robots developed to be sent into an area and to operate without human intervention. For example, a group of such robots may be dropped into an enemy complex for the task of clearing out hostiles to rescue captive friendlies. Let’s say in this example that US hostages have been taken in Iraq by an extremist militant group and are being held in a heavily fortified militarized complex of some sort. Now, we’re talking about automated robot soldiers, so they may send back video feeds and perhaps we could turn them off remotely after the mission has been completed – but we don’t want humans controlling them, because one of the things they bring to the battlefield is faster reflexes, and human control makes them ineffective.
You would expect any robots deployed into this complex to be able to identify hostile forces, friendly forces and captives. So let’s throw the first monkey wrench into this scenario. Let’s say one of the Iraqi extremists is caught trying to free the captives and is lucky enough to simply be thrown in with them rather than being killed. If the soldier robots properly identify the Iraqi extremists in this case, then the extremist-turned-friendly is dead when our robot soldiers arrive. I don’t expect a soldier robot would be able to evaluate the Americans’ explanations that this person was trying to help them. The robots would storm the room and begin shooting all targets they have pre-tagged as unfriendly. I doubt any artificial intelligence would allow them to properly evaluate this extremist-turned-ally as anything other than an enemy among the captives. I realize human troops may react the same way, but with human troops there is a chance this guy lives. Against robotic troops, I think this guy is dead.
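To make that concern concrete, here is a minimal sketch – my own illustration, not any real targeting system – of how a pre-tagged target list behaves. The identifiers like fighter_03 and the engagement_decision function are pure assumptions invented for this example:

    # Hypothetical sketch: a robot that pre-tags combatants before deployment
    # has no way to re-classify someone whose allegiance changes after the
    # tag is assigned. All names here are made up for illustration.

    PRE_TAGGED_HOSTILES = {"fighter_01", "fighter_02", "fighter_03"}  # assigned before the mission

    def engagement_decision(detected_id: str) -> str:
        """Return the robot's action for a recognized individual."""
        if detected_id in PRE_TAGGED_HOSTILES:
            return "engage"       # no mechanism to weigh the captives' explanation
        return "hold fire"

    # fighter_03 switched sides and was thrown in with the hostages,
    # but the classification was frozen at deployment time:
    print(engagement_decision("fighter_03"))  # -> "engage"

The point is that the classification is frozen when the robots are sent in; nothing in that loop can weigh the captives’ explanation that one of the “hostiles” switched sides.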
Now, as the second monkey wrench in this scenario, consider that a nearby country – out of good will toward the US, and perhaps as an excuse to give its troops live-fire training – sends in a rescue unit, and for whatever reason this news is not relayed in a timely fashion to the caretakers of the US robot soldier force. Maybe Turkey sends the Maroon Berets in to free the captives. Maybe the US robot soldiers are deployed after the Turkish soldiers hit the ground, and their friend-or-foe recognition erroneously identifies the Turkish forces as foes.
Why Do I Say Moot Point?
Why do I say this is a moot point? First off, I believe the US still considers itself a major policing force in the world. This means that the US and likely its close allies would be opposed to the use of such weapons by other countries that desire to develop such fully automated machines of destruction. Sure, the US may be the world’s foremost developer and user of automated battlefield technologies right now, but from what I’ve read, they have human operators controlling the decision to fire or not. This is my opinion and is based on NO research whatsoever.
Also, I do not believe any nation capable of developing fully autonomous robot soldiers capable of killing people would actually use such weapons in the field as I’ve described above – unless you are talking about dropping said machines far behind enemy lines, and said nation a) does not care about US-led retribution and b) does not care about civilian casualties.
More Opinion on Fully Automated Robotics
See, here’s the thing. Although I can obviously see that we have developed “robots” that operate on Mars, as well as many other advanced “robots”, I don’t think there are any nations or groups with intelligence enough to develop an “autonomous robot” for war. We can make systems to turn on lights when someone enters a room, and we can certainly develop machines that recognize someone approaching and automatically open a door. However, in each case human intervention in different forms is required.
Unlike a human, an automatic system cannot recognize when a blind person enters a room and lighting is not necessary, or differentiate between humans and other similar-sized animals. Said system certainly cannot ascertain when someone entering the room is angry or depressed – which is not the job of a light anyway. Some people may actually turn on a light for someone who does not want the light to be turned on … but the fallibility of people is not the subject of this post.
Unlike a human, a door cannot recognize when someone is simply walking past, perhaps to use a trash can near the door, as opposed to someone who actually wants to walk through the door. Maybe someone steps into a doorway for shelter from the rain – there are many reasons someone may appear to be approaching a door for entry when they may not actually want to enter. At the same time, I’m also not saying that a person could determine when to open a door for someone else 100% of the time without error … but the fallibility of humans is not the subject of this post.
The only way a light can turn on and off automatically at the right time 100% of the time, or a door can open automatically as expected 100% of the time, is when intelligent placement is used and leaves 0% chance for error. Even then, in the case of the light, you have to consider that there may be times someone enters a room and does not desire or need a light to be turned on.
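For what it’s worth, here is a tiny sketch of the motion-sensor light logic I’m describing, with made-up names, just to show how little information the rule actually has to work with:

    # Hypothetical sketch: a motion-sensor light reduces "should the light be on?"
    # to "is something moving?", which is all the sensor can actually measure.
    # Function and variable names are my own illustration.

    def light_should_turn_on(motion_detected: bool) -> bool:
        # The rule has no notion of who entered, whether they can see,
        # whether they want the light, or whether the "someone" is a pet.
        return motion_detected

    print(light_should_turn_on(True))   # True for a person, a dog, or a blind visitor alike
    print(light_should_turn_on(False))  # False even if someone sitting still wants the light

The rule only knows that something moved; intent, need, and even species are invisible to it.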
I’ve been talking here about automated lights and doors, and obviously in these cases there is little to no worry about harm simply because a light is turned on or a door is opened. However, take these two small examples that seem rather simple to ask of a robot, and now imagine the different scenarios where a killer robot has to make far more consequential decisions with lives at stake.
Still More Opinion on The Case Against Killer Robots
Killer robots must always be controlled by a person, and in real time. I do not even believe a robotic system is capable of “locking on” to a human target in ALL cases and awaiting kill approval for several hours, or of acting on a kill order several hours later. Can a killer robot determine when to accept an unexpected surrender, or evaluate an offer of military intelligence in exchange for a person’s life?
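If it helps, here is a rough, purely hypothetical sketch of the staleness problem with an hours-old target lock; the two-hour cutoff and all the names are my own assumptions, not any real doctrine:

    # Hypothetical sketch: even a simple "hold the target and wait for approval"
    # loop runs into the staleness problem raised above.

    from datetime import datetime, timedelta

    MAX_LOCK_AGE = timedelta(hours=2)  # arbitrary cutoff; who decides this, and on what basis?

    def act_on_kill_order(lock_time: datetime, order_time: datetime) -> str:
        """Decide whether an hours-old target lock can still justify firing."""
        if order_time - lock_time > MAX_LOCK_AGE:
            # The person tracked hours ago may have surrendered, moved,
            # or been replaced by someone else entirely.
            return "abort: lock too old"
        return "engage"

    # A lock taken at 09:00 and an order arriving at 14:00 is five hours stale:
    print(act_on_kill_order(datetime(2014, 5, 1, 9, 0), datetime(2014, 5, 1, 14, 0)))  # abort

Even this toy rule raises the real question: who picks the cutoff, and what has changed about the target in the meantime?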