June 04, 2013

When should robots have the power of life and death?

Cry havoc and let slip the robots of war? - I blog on Practical Ethics about Christof Heyns' report to the UN on autonomous killing machines. He thinks robots should not have the power of life and death over human beings, since they are not moral agents and lack human conscience.

A lot of non-human non-agents hold power of life and death over us. My office building could kill me if it collapsed, there are automated machines that can be dangerous, and many medical software systems essentially make medical decisions - or can kill, as in the case of Therac-25. In most of these cases there is not much autonomy, but the "actions" may be unpredictable.

I think the real problem is that we get unpredictable risks, risks that the victims cannot judge or do anything about. The arbitrariness of a machine's behaviour is much greater than that of a human, since humans are constrained to have human-like motivations, intentions and behaviour. We can judge who is trustworthy or not, despite humans typically having a much larger potential behavioural repertoire. Meanwhile, a machine has opaque intentions, and in the case of a more autonomous machine these intentions will not be purely those of its creators.

I agree with Heyns insofar as being a moral agent does constrain behaviour, and being a human-like agent allows others to infer how to behave around it. But I see no reason to expect a non-human moral agent to be particularly safe: it might well settle on a moral position that is opaque and dangerous to others. After all, even very moral humans sometimes appear that way to children and animals, who cannot figure out the rules or even that the rules are good ones.

In any case, killer robots should, as far as possible, be moral proxies for somebody: it should be possible to hold that person or group accountable for what their extended agency does. This is as true for what governments and companies do using their software as for armies deploying attack drones. Diffusing responsibility is a bad thing, including internally: if there is no way of ensuring that a decision is correctly implemented and that mistakes can get corrected, the organisation will soon have internal trouble too.

Posted by Anders3 at June 4, 2013 03:57 PM