May 08, 2009

Buchanan on evolution and avoiding unintended consequences

Professor Allen Buchanan from Duke University is holding the Uehiro ethics lectures this year. The subject is dear to my heart: the ethics of biomedical enhancement. Here are some notes and comments from the initial May 5 lecture.

The debate about the enhancement project

He argued that we are at a stage of the enhancement debate where we need to go beyond pros and cons and instead develop institutional responses to the possibilities of enhancement. The pros and cons have already been hashed out at length by experts. Rejecting biomedical enhancements across the board is not an option: they are already here, they arrive as a natural by-product of other biomedical advances, and there are no convincing ethical arguments against enhancement per se. Allen is however not automatically for any enhancement: whether particular enhancements are acceptable depends on their effects, the social context and possibly the attitudes expressed in pursuing them.

As he expressed it, the relevant question is: should society undertake the "enhancement project"? That is, to regard enhancement as a legitimate enterprise, subject to systematic public deliberation and social control.

This, together with his political science background, is why he thinks we should take institutions more seriously as a component of the enhancement debate.


According to him there are five main concerns about enhancement:

  1. It could be damaging to or even destroy human nature.

  2. Unintended bad consequences (biological or social). This was the main theme of the lecture, developed below.

  3. Enhancement may be bad for our character. The pursuit of enhancement might be a moral vice, or be guided by flawed values/attitudes.

  4. Injustice.

  5. Dual use concerns (repressive social control, weaponry)

He briefly went over 1, since he has already written about it elsewhere. If enhancement damages human nature, then this would make enhancement bad in itself. There are two versions of this concern. The first claims that it is wrong in itself to change or destroy human nature. But human nature includes bad as well as good characteristics, and it is hard to argue that every bad side (e.g. cruelty) is essential for human goodness or identity. The second holds that even if human nature is on balance good, fixing the bad might lose the good along with it. In short, changing human nature may be unnecessarily risky.

3 and 4 will be dealt with in later lectures.

Bad consequences of germline engineering: bad arguments

In analysing how to think about worries about bad consequences he focused on germline genetic engineering (IGM, intentional genetic modification), because this is the area with likely the most risk and certainly the most controversy. As noted in the discussion afterwards, this approach also has the drawback of anchoring most thinking on the particulars of this issue - there are many forms of enhancement (e.g. drugs) that are far less problematic, but they tend to get lost in the framing of the germline debate.

A key issue with germline modifications is that the changes are irrevocable, and hence mistakes may be irrevocable. This results in shrill talk of damage to the human gene pool. But Allen pointed out that the human gene pool is not something to be preserved. It is not static: it is constantly changing through unintentional genetic modification (UGM). In fact, UGM (that is, natural evolution) also carries the risk of irreversibility: lineages and genes commonly go extinct. IGM can counteract this by preserving valuable genes, reintroducing them, or accelerating their diffusion. And IGM is *not* irreversible: phenotype is not genotype, and it is possible to design IGMs that are not heritable or that can be turned on or off.

Then there is the concern that IGM interferes with the "wisdom of nature". This is tied to the idea of evolution as a master engineer whose masterwork it would be foolish to tamper with. But evolution does *not* fit this metaphor. As Allen put it, it is not even a blind watchmaker but a morally blind, fickle, tightly shackled tinkerer. He gave a lengthy list of examples demonstrating the ubiquity of suboptimal design in UGM, from the appendix to the defecation problems of bats (hint: try doing it hanging upside down).

UGM is not a successful process for improving or sustaining human life, especially since it is insensitive to post-reproductive quality of life, which leads to cardiovascular degeneration, cancer and ageing. In principle IGM can ameliorate these, by modifying oncogenes, adding tumour-suppressing factors, fixing waste build-up etc.

Selection does not imply optimality, since the environment is changing. What was once optimal may now be fatal. Hence we are not the apex of "eons of exacting evolution". The past duration of a lineage has no bearing on its future survival.

There is also no stable status quo in which to choose between IGM and UGM; in some situations we might have to change genetically just to keep "the same" in relation to a changed environment.

Hence, Allen concluded that framing worries about unintended consequences as tampering with nature both misunderstands evolution and undervalues IGM. But the worry about unintended consequences remains!

Better heuristics?

To think about scientifically informed and realistic institutional responses, Allen started out with Nick Bostrom's and my evolutionary precautionary heuristic: for any enhancement, the question "Why have we not evolved that way?" can tell us relevant things about the risk of unexpected side effects. However, Allen pointed out that what matters is the causal roles of genes, not whether traits were adaptive. Genes can be important even when they are complete by-products, and we may know more about causal roles than about aetiology. I largely agree with this criticism, although our heuristic can still give useful (and different) information.

Allen presented a "better evolutionary precautionary heuristic" for IGM. The more of these points are fulfilled, the less likely side effects are:

  • IGM target genes lie downstream rather than upstream in the developmental process.

  • The IGM, if successful, would not produce an enhancement exceeding the upper limit of the current normal range. (We have well-functioning examples of organisms at the upper bound.)

  • IGM effects are containable within the organism (or even the target organ).

  • The modification is reversible, or better, must be deliberately activated in each organism.

  • The intervention does not require major morphological changes.

  • If the goal is to eliminate a trait, the causal roles of the trait should be well understood.

This list is to be seen as a counting principle: the more entries fulfilled, the lower the chance of unintended bad consequences. There is operational uncertainty, but this can be addressed through empirical scientific research rather than hand-waving.
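Since the heuristic is a counting principle, it can be sketched as a simple scoring function. This is purely my illustrative rendering, not anything from the lecture; the field names and the six-criterion structure are assumptions based on the list above.

```python
# Illustrative sketch of the counting heuristic: each of the six criteria
# is a boolean, and the score is simply how many hold. All names here are
# hypothetical, chosen to mirror the bullet list above.

from dataclasses import dataclass, fields

@dataclass
class IGMProposal:
    downstream_target: bool        # target genes lie downstream in development
    within_normal_range: bool      # effect stays within the current normal range
    contained_in_organism: bool    # effects containable within the organism
    reversible_or_activated: bool  # reversible, or deliberately activated
    no_major_morphology: bool      # no major morphological changes required
    causal_roles_understood: bool  # trait's causal roles well understood

def heuristic_score(p: IGMProposal) -> int:
    """Count satisfied criteria; a higher score means side effects are
    presumed less likely under the counting principle."""
    return sum(getattr(p, f.name) for f in fields(p))

proposal = IGMProposal(True, True, True, False, True, False)
print(heuristic_score(proposal))  # 4 of 6 criteria satisfied
```

Of course, a real institutional version would weight the criteria by evidence rather than count them equally, which is exactly where the "operational uncertainty" Allen mentions would enter.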

Before we can even think about implementing this heuristic, we need consensus building and plenty of modifying/improving the heuristic. Allen noted that strict voluntary adherence to these principles is unlikely to work - institutions will be involved one way or another. He promised to get to the truly interesting questions in the last lecture: what institutions are most apt as venues to develop sound institutional responses? To what extent can we/do we need to modify existing institutions or add new ones?


All in all, a very *useful* lecture. The heuristic seems to be a good guideline for doing safe modifications *in practice* (and can be loosened for less problematic forms of modifications such as temporary drug effects). It is somewhat conservative (i.e. no entirely new traits through IGM) but this conservatism can be evidence-based. It also further shows the strong links between evolution and enhancement: by understanding where we come from we get better ideas about where we can and should/shouldn't go.

Posted by Anders3 at May 8, 2009 07:30 PM