April 09, 2009

Cyborg morality as usual

Mind Hacks: The unclear boundary between human and robot mentions a letter to Nature that is concerned with blurring the boundary between man and machine. Blanke and Aspell write:

It may sound like science fiction, but if human brain regions involved in bodily self-consciousness were to be monitored and manipulated online via a machine, then not only will the boundary between user and robot become unclear, but human identity may change, as such bodily signals are crucial for the self and the 'I' of conscious experience.

Maybe I'm just a bad philosopher, but I do not see much of a problem with the blurring per se. Is there a problem if my sense of where my agency is located is affected? If I think "I" exist within a telepresence robot rather than seated in the control chair, I might behave in somewhat irrational ways to protect the "life" of my (actually disposable) robot body, since it feels like "my" body, but that is unlikely to be a major ethical problem. Similarly, if I regard myself as distributed over systems besides my biological body, it might require extending the concepts of privacy and integrity to encompass my extended self - but this is not too different from what we already do with our houses and files.

Jeroen van den Hoven gave a great lecture here a while ago about the ethics of technology from a design perspective. He discussed "wideware engineering": Clark and Chalmers' extended mind view says that if a part of the world functions as a process which we would call cognitive if it occurred in a head, then it is (at least for the moment) a cognitive process. This means that when we design things, we actually design epistemic and cognitive environments.

This in turn leads to various moral responsibilities for the designer - including a responsibility for allowing the user of the system to be a good moral agent. Systems should not force users into moral dilemmas, moral overload or regret. If the system forces them to trust what the sensors are saying (because not trusting them would be a moral risk that cannot be justified at the moment), then the user will have reduced responsibility - yet if a disaster occurs, the user will feel regret and blame over not checking the sensors, even though it was not feasible. Conversely, a system that allows users to check the quality of the wideware and "make it theirs" lets them retain genuine responsibility for their choices.

These considerations seem to meld with the concerns in the letter above. Uncritically using BCI-robotics might indeed impair our ability to act as responsible, autonomous agents because too much is taken for granted - aspects of our choices are run by systems we do not understand, do not control and have no reason to trust. But the problem is not worse for neurorobotics than it already is for airline pilots or people running complex industrial systems. They already have extended bodies and agencies where the available choice architecture impinges on their moral responsibility. The real issue is whether they will be given the chance to make their extended bodies and minds "theirs".

Even the issue of "self" might be similar: during flow states people forget about self-consciousness, and no doubt pilots and process engineers experience that from time to time. Drivers often project their sense of self onto the car rather than onto their body inside it, yet we do not view this as a terrifying cyborgisation.

Posted by Anders3 at April 9, 2009 05:00 PM