May 14, 2008

Regulating Future Science

What science is for

This Monday I attended the 21st Century School's Distinguished Public Lecture, where John Sulston, John Harris and Richard Dawkins discussed what science is for. Besides the core discussion, where Harris was pretty utilitarian and Sulston emphasised the non-instrumental value of exploration, it was interesting to see that all three speakers (as well as a few distinguished members of the audience) regarded a posthuman future as possible or even likely. Hence much of the discussion focused on the potential drawbacks and on how to regulate science. The text below is my comment on the institute blog.


Regulation, to be effective, requires recognizing that a field needs regulation (which in turn implies recognizing the field and some reasons for regulating it) and an understanding of how regulation would likely affect the field. This cannot, in general, be done a priori. We do not know beforehand what a new technology will produce or how it will be (mis)used; we can only reason from prior experience, which is of limited use. Before a field has developed far enough for us to amass some experience about how it works, what can be done, what can go wrong and how people apply it, regulation will be premature and likely ineffective (e.g. intellectual property regulation on the Internet) and/or have unintended effects (such as strangling a desirable emerging technology in red tape).

Regulating synthetic biology cannot be done before we have synthetic biology, except in attempts to prevent the whole field from emerging. Regulating human enhancement cannot be done without some experience with it; current regulations are either local to particular situations with little general applicability (doping in sports, military drug use) or stem from older rules not directly intended to regulate enhancement (medical regulations). There is also a lack of recognition of the commonalities of enhancement through drugs, genetic interventions, education and information technology: if enhancement itself is worth regulating, all such activities should be regulated together. This seems unlikely to happen, and a more likely approach would be local regulations for different kinds of enhancements.

Such local regulation would be unlikely to prevent or control the emergence of better-than-human beings, since they can emerge from many research directions, possibly as a surprise and not necessarily as a recognized *kind* of being. We already have entities such as the Google search engine and multinational corporations that exhibit superhuman capabilities without being a being. It might even be unwise to attempt to design such beings according to a strict top-down plan, since we want to allow the emergence of new benefits unknowable to lesser minds such as ours.

Hence thinking of regulation as control is likely to be mistaken: we are very unlikely to be able to control our "mind children". Regulation as influence, on the other hand, is feasible: by being clear on what values we want to promote (such as curiosity, the good life, freedom, safety etc.), keeping an eye on what is actually going on and being developed, and creating regulations and incentives that promote those values in general, we have a better chance of influencing future developments in a beneficial way.


Posted by Anders3 at May 14, 2008 01:50 PM