August 06, 2009

Slip the robots of war

On practical ethics I blog about military robotics: Four... three... two... one... I am now authorized to use physical force!

As I see it, there is a serious risk that increasingly autonomous and widespread military robotic systems will reduce their "controllers" to merely rubber-stamping machine decisions, and to taking the blame for them.

The irony is that smarter, more morally responsible machines don't solve the problem. The more autonomy the machines have and the more complex they become, the harder it will be to assign responsibility. Eventually they might become moral agents themselves, able to take responsibility for their actions. But a big reason to have military automation is to prevent harm to persons (at least on one's own side), and personhood is usually taken as synonymous with being a rightsholder and a moral agent: if the machines become persons, putting them in harm's way defeats that purpose. The only way around this might be if the machines were moral agents, yet not rightsholders/persons. But while there are some who think that non-moral agents and non-persons (e.g. animals) can be rightsholders, I do not know of anybody arguing that there could even in principle be a moral agent or person that is not a rightsholder. Most philosophers tend to think that moral agency implies holding rights.

We could imagine somehow building a machine that strives to uphold the laws of war and to act morally with respect to them, yet does not desire to be treated as a person or rightsholder. But we might still have obligations to such complex machines even if they do not desire it. Deliberately creating them and placing them in situations where they will occasionally commit war crimes or tragic mistakes, and subsequently desire proper punishment, seems morally dubious.

We do not just have command responsibility for our machines; we also have creator responsibility.

Posted by Anders3 at August 6, 2009 07:13 PM