June 10, 2004

Smurfy Nanoethics

I attended a conference on nanoethics arranged by the Swedish Research Council. On the whole it was an interesting and constructive day, but the Smurfese problem bothered me.

Smurfese, the language of the Smurfs (where some words are replaced by derivatives of 'smurf'), is usually understandable thanks to the surrounding context. But what about the language of the Nanos, where the word 'nanotechnology' can mean anything from nuclear physics to microelectronics to magic?

When K. Eric Drexler popularized the term in 1986 it referred to molecular nanotechnology: systems at or below the nanometer scale. Later, as funding appeared and researchers flocked under the banner of nanotechnology, it came to mean far larger systems. A bit cynically, one could say that any kind of material technology, chemistry or physics can be called nanotechnology if at least some part can be measured in nanometers - even if it is hundreds of them. And if funding is made more likely by adding the prefix nano- to one's research, the temptation to do so is great. There is also a positive side: by seeing many diverse fields as converging into nanoscience, great synergies and interdisciplinary adventures can be started. But the N-word still gets diluted.

Then the opponents joined the game, doing their best to tar nanotech with the same brush as biotech. Perhaps the most extreme example is the ETC Group, which deliberately speaks of "atomic modification" to give the impression that nanotechnology, biotechnology and nuclear technology are all the same (and of course, bad). In response to fears of such negative associations, the glass manufacturer Pilkington decided to avoid mentioning the N-word when marketing its self-cleaning windows (based on free-radical-producing 15 nm titanium dioxide particles).

But for the nanoethical discussion this semantic confusion may not matter much, perhaps because there is no need for the word "nanoethics" in the first place. At the same time the ethical discussion we are getting into is of extreme importance - a smurfy paradox.

Bioethics is in many respects a fruitful philosophical (and rhetorical) research area, since biotechnology and medicine pose many new issues and possibilities not previously found in ethical debates (the definition of death, the modification of life, changes to human nature, etc.). These new issues stand on their own and make bioethics a somewhat independent field (still part of ethics, of course). One could imagine a field of 'autoethics' studying the ethics of cars and traffic, but it would likely just be an application of already studied ethical principles. In the same way nanoethics might not be a real field that can stand on its own, but rather the application of ethics to nanotechnology. Perhaps an important application, but hardly an independent field.

This claim would be challenged if any truly different issues were discussed under the heading of nanoethics. At least at this conference (as well as at the Trieste conference I attended last year) there were no ethical issues unique to nanotechnology. Among those discussed were:

  1. Risks
  2. Possible health and environmental effects of nanoparticles
  3. The use of nanodevices for behavioral control
  4. Nanoenhanced genetic testing
  5. Information and consent
  6. Public trust and transparency
  7. Privacy
  8. Preventing overcommercialization
  9. Costs
  10. Equity and fairness
  11. Military use
  12. Human enhancement

As professor Gisela Dahlquist from Umeå University said, the problem is not the technology itself, it is the individual applications. I agree, but before I go into them, a quick note about core issues:

Biotechnology has, deep down, a core issue: is it OK to change life itself? If one answers this question in the negative, all applications become immoral. That precludes the need for further debate about applications, except in order to convince the adherents of the other view that other ethical or practical concerns make each possible application immoral too. Nanotechnology does not at first seem to have such a basic issue. Manipulating atoms does not threaten the ethical order of nature in the same fundamental way as manipulating life; nobody considers the natural arrangement of molecules as having a value in itself. Hence most of the debate will be about applications.

While I think this view is correct, I think there is a kind of "shadow core issue". This issue is not nanotechnology itself, but rather the use of nature (i.e. anything) for human aims. There is a strand of thought holding that any instrumental use of nature is inherently wrong (or at least questionable), and to this anti-instrumentalism nanotechnology is of course inherently wrong. This strand of thought is seldom expressed clearly, both because it is itself rather questionable (humans and other animals all use the world instrumentally and cannot survive otherwise) and because it is often applied eclectically, inconsistently or opportunistically. But that does not make it less powerful, and a public ethical discourse that does not deal with the issue of using nature will be weakened. In my opinion we should be clear that we want to use nature, for what aims and with what considerations. Bringing up such perspectives will help avoid endless debates about side issues that are actually just rationalisations of deeper values.

Of these categories, 1-2 deal with risks. Nanoparticle toxicology seems to be turning into a popular field (and was well on its way to becoming one long before the greens started to sound the alarm; major conferences were held before 1998). Here the problem seems eminently solvable through technological fixes and design practices: since nanotechnology deals with designing nanoscale structures, dangerous nanostructures can be designed away (or be contained, made degradable etc.) with reasonably advanced technology - or just avoided. It is a good thing we looked into it well ahead of time, but it is no showstopper.

In general, there are known risks and problems, foreseeable risks and unknowable risks. But even the latter can be somewhat constrained and estimated using known characteristics of the field. We cannot say for certain what dangerous chemical explosives will be invented in the future, but given what we know of the energy of chemical bonds we can put an upper limit on how powerful they could be. The fears surrounding the Relativistic Heavy Ion Collider at Brookhaven could be dispelled in a similar way. Likewise, other forms of theoretical applied science can be used to constrain risks, helping us to focus on the important issues. But that requires an openness to theoretical applied science, something which is often absent: it is seen as science fiction, idle speculation or a diversion from the "real" applied science (or from theoretical science, which at least does not get into troublesome debates).
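To make the bond-energy argument concrete, here is a back-of-envelope version of it. The numbers are my own illustration (standard physical constants and textbook reaction energies), not figures presented at the conference:

    # Rough ceiling on chemical energy density, from bond energies alone.
    EV = 1.602e-19    # joules per electronvolt
    AMU = 1.661e-27   # kilograms per atomic mass unit

    # Generous assumptions: every atom releases ~5 eV (a strong chemical
    # bond) and the atoms average a mere ~3 amu; real explosives are far
    # heavier per atom and release less energy per bond.
    ceiling = 5 * EV / (3 * AMU)
    print(f"chemical ceiling: ~{ceiling / 1e6:.0f} MJ/kg")   # ~160 MJ/kg

    # Sanity check: 2 H2 + O2 -> 2 H2O(g) releases ~484 kJ per 36 g of
    # reactants, one of the most energy-dense chemical reactions known.
    h2_o2 = 484e3 / 0.036
    print(f"H2/O2 combustion: ~{h2_o2 / 1e6:.0f} MJ/kg")     # ~13 MJ/kg

    # TNT, the usual benchmark, is ~4.2 MJ/kg, while fission of U-235
    # yields ~8e13 J/kg: no conceivable chemistry comes within five
    # orders of magnitude of nuclear energies.

However clever future chemistry gets, the ceiling stands; this is the sense in which theoretical applied science can bound even risks we cannot enumerate.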

Current nanotechnology initiatives seldom refer to molecular nanotechnology and the origins of the field during the '80s and early '90s (as I have argued elsewhere), because they want to disown the more "science fiction" or "hyped" aspects of nanotechnology in order to appear a legitimate scientific field and avoid incurring the wrath of Luddites or a fearful public. That the strategy does not work well from a PR standpoint (as recent nanodisaster stories show) is of little concern. Of course, sometimes there is just plain ignorance. Several participants (including some who ought to know better) had never heard of Eric Drexler, which shows that the field has a weak sense of history.

Several good points were made at the conference about the problem of promoting a field as important (and in need of funding and recognition) without hyping it. Professor Alfred Nordmann in particular did a good job. He sketched the problems with the "caricature US approach" (benefits are seen as revolutionary, risks as ordinary; traditional norms and values hinder the revolutionary promise, so the ethicists had better get on board) and the "caricature EU approach" (benefits are likely to be gradual and non-revolutionary, but risks can be radical, novel and serious; ethics represents traditional and public concerns, so getting ethicists involved early will control the risks).

Instead of these, Nordmann argued for "normalizing nanotechnology": show how it fits in with the big picture and the history of technology, contextualize it and debunk the stupid stuff. It may or may not be revolutionary or dangerous, but that is something to be discovered rather than assumed by policy. Getting rid of mythical animals at the heart of science policy and ethics discourse is a good thing. He also supported more monitoring of nano research and policy from a science studies perspective, more forums allowing the public and other stakeholders to discuss, and especially "vision assessment", where people would explicate their implicit promises, views on nature and human nature, and so on. Rather than speak of nanotechnology we need to discuss particular nanotechnologies and systems, what we seek to use them for and why.

To some extent his view was that this would happen no matter what we do, as a natural result of history. We have normalized electricity and may one day normalize radioactivity. But of course, just letting it happen may take a long time and lead to great losses.

Concerns 3-4 deal with medical integrity and autonomy, and 5-7 with the same in a social setting. Again, the issues brought up are hardly new. We already have potentially motivation-controlling brain implants and drugs, genetic testing can already be done, and patient (or citizen) integrity can be threatened in a multitude of ways. There is no need for nano-integrity or nano-autonomy; plain old-fashioned integrity and autonomy will do fine. In medicine we have the usual ethical boards, and from an ethical point of view there is no reason to distinguish a nanosurgical procedure from a microsurgical one.

An interesting issue brought up was invisibility. Again, hardly a new concern, but nanotechnology makes it visible (pun intended, sorry). What should be done about technologies that can affect our lives without us knowing about them? Even if they are not malicious, the feeling of lack of control as invisible systems affect our lives in ways we cannot follow is disruptive enough. One way of dealing with it may be good design practices: all systems involving active nanodevices should be designed to signal their activity, ideally giving the user a chance to follow it. Just a lit LED telling you that the nanocleaners are busy might be reassuring. I cannot stress enough how important this is for risk perception. We overestimate and dwell on risks we feel we cannot control, making many afraid of flying while cheerfully driving down the highway in a metal casing they think they control. An on/off switch tells users that they are trusted to make decisions. This helps to create trust between the manufacturer and the consumer, and makes the discussion of risks less polarised.
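As a toy illustration of this design practice, consider what "design for visibility" might look like in code. Everything here (the nanocleaner, its interface) is hypothetical, invented only to make the principle concrete:

    # A sketch of "design for visibility": an active nanodevice that must
    # signal its activity and always obey the user's off switch. All names
    # are hypothetical illustrations, not a real product interface.

    class LED:
        def on(self):  print("LED lit: nanocleaners busy")
        def off(self): print("LED dark: idle")

    class NanocleanerController:
        def __init__(self, indicator):
            self.indicator = indicator   # the visible signal on the appliance
            self.active = False
            self.log = []                # activity record the user can inspect

        def start(self):
            self.active = True
            self.indicator.on()          # never run without visibly signalling it
            self.log.append("cleaning cycle started")

        def stop(self):                  # the user-facing off switch: the user
            self.active = False          # is trusted to override the device,
            self.indicator.off()         # no matter what it is doing
            self.log.append("stopped by user")

    device = NanocleanerController(LED())
    device.start()
    device.stop()
    print(device.log)

The point of the sketch is that visibility and user override are part of the interface from the start, not bolted on afterwards.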

Of course, there are applications that do not seek trust, like spyware and surveillance bugs. Here the design practices will not be followed, but that does not make them bad. Dealing with invisible nasties is a technical, legal and ethical issue - interesting, but not unique to nanotechnology.

Concerns 5-7 are all good targets for "normalization": get people and groups talking, and let us find compromises, practices, laws and norms that we can agree on or at least accept. Professor Göran Hermerén had several fruitful suggestions for multi-layered approaches beyond the binary legislation/no-legislation choice - technologies can be controlled through social mores, advisories, oversight groups with different jurisdictions, formalized practices and laws. Each has its own costs and benefits. The problem is how to get people together in a good way, and how to get the discussions to percolate through the different strata of society.

Areas 8-10 were economic. The first reflects a sentiment sometimes encountered in the stem-cell debate (at least the Swedish one): that money must not be the goal of research. While this sounds very noble, it has a double problem: it consigns all science to science for its own sake, done by selfless scientists, and it opposes the application of results. Pharmaceutical companies make money from people's suffering, but they are highly desirable since they make that money by ending the suffering, and without them medical research would be far weaker. Nanotechnology is not a value in itself (except for some of us scientists), and if we want any benefits from it we need to have it commercialized. Seeing commercialization as a problem at this early stage may preclude many good uses. What should be resisted is stupid, monopolistic, biased or non-transparent commercialization, but usually the best way of doing that is to ensure that smart, diverse and transparent efforts exist.

The equity concern to some extent represents the usual social-liberal and post-Marxist discourse: new technologies should not be developed unless they can benefit everyone, or at least do not widen income gaps. I do not feel the need to get into this discussion from my own libertarian perspective, since it applies to any general technology.

One thing worth noting is the worry from some participants that the benefits of nanotechnology will not be cheap and accessible. While this may sound absurd to adherents of the "abundance through molecular nanotechnology" view, it should be noted that even if the technology could physically produce cheap abundance, rigid or excessive regulations could stifle it. After all, most pharmaceuticals could be produced for a fraction of their current cost if the additional costs of approval, regulation and safety were not added (compare also Freeman Dyson's discussion of the nuclear power lock-in in Imagined Worlds). If we do not get good nanoregulations we might not get good nanotechnology.

Dr Jürgen Altmann is a prolific writer and debater on the military uses of nanotechnology, and could be said to represent concern 11 (and to some extent 12). His presentation covered a vast range of potential military and security applications of nanotechnology. Unfortunately, quite a bit of it was tuned more towards triggering the "yuck" factor (remote-controlled mice?! scorpion robots?!) and americophobia (look at how much they are spending!) than towards the core ethical and political problems. Some of the problem here might be my fundamental disagreement with him about human enhancement (he seeks a 10-year moratorium on non-medical enhancements), but I think this presentation would be far more effective at a red/green meeting than at an academic forum. This makes it easy for some to disregard his concerns as mere alarmism.

I think he did have a good point about the destabilizing potential of many systems requiring even minimal nanotechnology, such as autonomous kill vehicles, infiltrative microbots and programmable bioweapons. We should worry about them not because they are nanotech, but because they are autonomous systems that could erode treaties, cause incidents, promote arms races and create tempting first-strike scenarios. Unfortunately his suggestions for dealing with these were rather standard: international bans, moratoria and a greater role for the UN. Given the huge dual-use potential, the diffuse areas between clearly legitimate and clearly illegitimate uses, the "small science" research needed (unlike the "big science" approach needed for nuclear weapons) and the usual problems with the UN, these suggestions would at best be partial solutions. That may be better than nothing, but I think we can do better.

First, we should recognize that top-down solutions like these work only for certain kinds of problems: well-defined problems where most participants have parallel interests and information can be gathered centrally. Bottom-up solutions can help with the other types: ill-defined problems with many different interests and local information. A world with the weapons discussed by Altmann is also a world where similar equipment can be used by people to ensure safety and stability. Distributed monitoring of bioweapons (e.g. using cheap, portable, networked pathogen detectors), "immune systems" for dealing with infiltrating software and hardware (e.g. a building with its own microbots that look for intruders and otherwise do tasks like pest control or cleaning), weapons with mandated tracking and recording (mandated by civil society to ensure accountability and eventual transparency from law enforcement and the military) and so on are quite useful technofixes. Similarly, one can invent a multitude of local and non-centralized institutions to keep track of other threatening uses of nanoweaponry. None of these can individually solve the entire problem, but their union can be very powerful.
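As a cartoon of how such distributed monitoring could work, here is a sketch of networked pathogen detectors that raise a public alarm only when several independent neighbours agree. The quorum rule, thresholds and topology are all invented for illustration:

    # Cartoon of distributed bioweapon monitoring: no central authority,
    # each node polls its own neighbours, and one broken or spoofed sensor
    # can neither trigger nor veto an alarm. All parameters are invented.
    from dataclasses import dataclass, field

    THRESHOLD = 0.8   # local reading considered suspicious above this level
    QUORUM = 3        # detectors that must agree before a public alarm

    @dataclass
    class Detector:
        name: str
        neighbours: list = field(default_factory=list)
        reading: float = 0.0

        def suspicious(self) -> bool:
            return self.reading > THRESHOLD

        def check(self) -> bool:
            """Alarm only if a quorum of nearby detectors agree with us."""
            agreeing = sum(n.suspicious() for n in self.neighbours)
            return self.suspicious() and agreeing + 1 >= QUORUM

    # Four detectors on one city block; three pick up the same signal.
    nodes = [Detector(f"d{i}") for i in range(4)]
    for node in nodes:
        node.neighbours = [n for n in nodes if n is not node]
    for node, level in zip(nodes, [0.9, 0.85, 0.95, 0.1]):
        node.reading = level

    print([node.check() for node in nodes])   # [True, True, True, False]

None of this requires exotic technology: it is ordinary networked sensing, which is exactly why bottom-up approaches can start long before the threats mature.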

Unfortunately, this kind of distributed security thinking is still rather alien to most decision-makers (naturally, since they are centralized) and thinkers (since they have grown up in a centralized world; compare the discussion in Resnick's Turtles, Termites, and Traffic Jams). Probably the only ones who have grown used to it are the network-centric defense people in the military - the very ones promoting many of the things Altmann fears.

The final issue was human enhancement. It was not brought up by the professional ethicists but rather implicitly: by the technologists showing new brain-computer interfaces, by Altmann through his criticisms of it, and by transhumanists like me who consider it a beneficial use of nanotechnology. I think it was to some extent avoided in the discussion because it is so close to the embarrassing "science fiction" side of nanotechnology. At the same time it is probably the most ethically complex and interesting issue linked to nanotechnology. But it also broadens the discussion far beyond nanomachinery into the deep issues again, rather than the narrow debate about particular applications and aims. Professor Hermerén stated that it does not per se introduce any need for new ethics - we can deal with it using utilitarianism, rights or what have you (I agree) - but seemed to prefer to discuss what values we seek to impose on technology, and in what order.

One area that was often mentioned was expanding the use of ethical pre-approval and social impact statements. While sounding very noble, the idea is rather problematic: what is a good social impact? If a technology strengthens individualistic tendencies in society, is that a reason to avoid it or to support it? Different ideologies and groups would come to different answers. The answers I got about this issue were somewhat vague, but I got the impression that people in general thought that having many different groups represented would make this analysis possible. But that only leads to the question of which groups to include. Is, for example, transhumanism a valid position in the debate that needs to be heard in this kind of analysis? Determining who gets represented where gives gatekeepers tremendous power. And if a position - regardless of how small or weird - is not heard or made part of the analysis, does that imply that the position ought to accept the mainstream consensus? This is messy already in the political philosophy of law, but when dealing with what are essentially value statements about possible societies, it seems that by accepting the values of liberal democracy one must accept that certain groups are within their rights to pursue, at least for themselves, social outcomes that diverge from those desired by the majority. This makes the social impact statement either weak (merely an advisory forethought), oppressive (forcing minority views to conform) or just an extra delay. Personally I think advisory forethoughts are quite useful, but one should ensure that they are never seen as normative.

Nanoethics might be needed simply to show people that researchers and companies do indeed care. But to really become something more than greenwashing or a reframing of the usual ethical debates, the discussions need to dare to go into the murkier reaches of the aims of technology, views of human nature and different visions of humanity's future. That would broaden the debate far beyond nanotechnology, but it would also make the nano-Smurfese irrelevant.

Posted by Anders at June 10, 2004 01:57 PM