Currently there are no fully autonomous weapons, but that statement will soon cease to be true. Autonomous weapons are weapons that can select and fire on targets without human intervention; some have taken to calling them “killer robots.” The United States, the U.K., South Korea, Germany, and other countries are all involved in research on autonomous weapons, and many experts believe fully autonomous weapons could be in use in less than 20 years. Recently, Human Rights Watch and the Harvard Law School International Human Rights Clinic published a 50-page report entitled “Losing Humanity: The Case Against Killer Robots.” The report is extensively researched and outlines the legal and non-legal concerns raised by “killer robots,” ranging from the lack of legal accountability for the use of deadly force by autonomous weapons to the undermining of non-legal checks on the killing of civilians, such as compassion.
There are arguments that the use of autonomous weapons will save the lives of our military servicemen and women. In a country that has been at war for over ten years, with thousands of lives lost, a technology that could help reduce that toll in the next war should not be easily overlooked.
Human Rights Watch and the International Human Rights Clinic have called for a treaty that would ban the development, production, and use of these weapons. They have made it very clear, and I agree, that if these weapon systems are to be banned at all, they must be prohibited now, before the technology exists; otherwise their use will become all too likely.
My question to you is this:
1) If you agree with the use of this technology, whom will you hold accountable, and under what currently enacted treaty or law, for crimes against humanity if one of these machines kills a civilian? The computer programmer? If the answer is no one, what solution would you offer?
Sources:
Human Rights Watch, “Losing Humanity: The Case Against Killer Robots”
This is an issue with obvious pros and cons. These weapons have the potential to save the lives of our military servicemen and women. Thousands of lives have been lost in the war in Iraq, and any new development that could decrease that number would seem to benefit our country. I think most people in this country would agree that this is at least a good motive for creating autonomous weapons. On the other hand, using a weapon without any human intervention is highly dangerous. If something goes wrong, there is the potential for disaster. More importantly, whom do we blame? Do we simply blame the machine? The people who built it? The people who programmed it? It may not even be “human error”: technology has its quirks, and the weapons could simply malfunction. But can’t humans make errors too? They do, all the time. I do not think this idea should be ruled out completely, but serious precautions should be taken before moving forward.
I believe that weapons controlled by computers or “robots” are the future of warfare and will ultimately eliminate the need for human soldiers. However, I agree with Pierre that these systems should be banned before they can be built. Allowing this technology to exist will desensitize military leaders to the effects of war and essentially turn it into a kind of video game. When there is no actual risk of losing human soldiers, countries that employ these “killer robots” will ultimately create more destruction than they would without them.
My biggest concern, which Pierre touched upon, is the lack of control over whom the robot targets. What is to stop the machine from inadvertently targeting civilians? I think the computer programmer who designs the system should be held accountable; however, it is his employer (most likely the government) who is really to blame. If this technology is to be created, the strictest regulations are needed to ensure that civilians will not be harmed; otherwise, those responsible for its creation should be held accountable for the inevitable disasters it causes.
I would like to clear up a point I made earlier: I fully support the development and use of these autonomous weapon systems. The closest system to an autonomous weapon that I have had experience with is the remotely operated machine gun used by the U.S. Army in Iraq in 2009 on its Mine Resistant Ambush Protected (MRAP) vehicles (http://www.army-technology.com/projects/cougar-mrap/, see Armament of the MRAP). This weapon system allowed the gunner to remain completely inside the armored vehicle, returning fire via remote control without directly exposing himself to enemy fire. Gunner is usually the most dangerous job on an MRAP, but also the most important during an attack. This system is a breakthrough in gunner safety and has saved American lives. It had, however, numerous drawbacks that I have seen and heard of, including a limited field of vision, problems performing remedial action on a runaway gun, and the gunner’s inability to hear clearly what is going on outside in order to properly assess the situation (http://soldiersystems.net/2012/06/06/crows-ii-eof-kit/, see comments).
Mistakes were a lot more likely, but the soldiers were far safer. Despite these drawbacks, in my opinion the safety of the serviceman comes before anyone else’s.
I do not think autonomous weapons should be used at all. The potential for abuse, I think, is too great to allow the technology to be developed. Even with an extremely comprehensive international treaty carrying explicit, direct, and strict penalties, there are just too many things that can go wrong. For example, what is to prevent terrorist groups from attacking assembly and programming facilities to steal the weapons and robots? And what is to prevent other governments from paying top dollar for hackers to reprogram the robots? That is to say nothing of the potential for increased collateral damage or the risk of civilian casualties, as Dan pointed out.
On the other hand, I do not see how we can afford not to have the technology. This may be an overly cynical view of the world, but even if every nation agreed not to develop the technology, how many would actually adhere to the agreement once the technology was within their reach? The reason nuclear disarmament does not work is that each country will always ask: how can we be 100% sure ALL of their nukes are gone? Can we really expect it to be any different when the potential prize is an entire robot army? The situation is a lose-lose because of how much a robotic army could change the global power structure. Ideally, everyone would ban the technology and that would be the end of it. My more realistic, if somewhat pessimistic, view is that the technology will come about and it will be abused.