A Defense of the Transhuman Principles

by Anders Sandberg

Summary: A response to Erik Möller's analysis of possible dangers of the Transhuman Principles.
Keywords: Principles, Debate, Coercion, Markets, Evolution, Memetics, Openness, Transhumanism, Prescription, Description

Introduction and a Little History

I was part of the team who interactively created the version 1.0 of the principles in a net-based brainstorming session last year. To be honest, I don't think any of us had any clear goal in mind other than an attempt to write up some of the basic views that we felt were central to transhumanism.

During the brainstorming (which has been archived by Alexander Chislenko) a lot of versions and issues were debated. Some ideas quickly crystallized, such as the general shape of the Principles and several of the points, while other areas, such as how to compromise between collectivism and extreme individualism (not to mention the stylistic aspects), were debated until the end.

In the end, the principles were released mainly because we felt we had done our share of the work; now it was up to the transhumanist community to do whatever it wanted with them. At least one individual version and a Swedish translation have appeared, without any fuss. Overall, the response seems to have been quite accepting, almost disappointingly so.

At least until Erik reacted to them.

Apparently, the principles which I sometimes have thought of as almost too toothless (after all, they were created by a committee) can also be seen as dangerous. I must admit I was a bit taken aback by this, and at the same time happy that at least somebody disagreed with me. As Sir Clive Sinclair said, "You get more out of people when you disagree with them".

What Are the Dangers of the Transhuman Principles?

If I have understood Erik right, he sees the principles as dangerous because, in his opinion:
  1. They allow just about any definition of transhumanism, including some rather unsavory ones.
  2. They seem to allow almost any kind of behavior.
  3. They discourage the use of coercion in any form.
  4. They encourage a free market.
The main reason he disagrees with them seems to be that he sees them as prescriptive, not descriptive. Are the principles to be interpreted as how transhumanists ought to live their lives, or a description of commonly held views? Should they be viewed as a definition of transhumanism?

I firmly believe the principles are descriptive, or at least an attempt to describe what some of us believe in. As Alexander Chislenko wrote in his introduction to the original draft:

The purpose of these Principles is to define a "consensus platform" of Transhumanism that would allow us to see what ideas and goals we have in common as a group, and to present them to people trying to understand what this transhumanism is all about.
Nowhere do the principles imply that anybody not agreeing with them isn't a transhumanist (he or she is just not part of the consensus) or that anybody holding them is a transhumanist.

Erik worries that the principles condone downright evil acts (by whatever definition of 'evil' one cares to use). I think this is largely a matter of interpretation: of course Pragmatism can be interpreted as 'the ends justify the means' and Diversity as 'anything goes', but this is clearly against the spirit of the document. It is quite possible to interpret practically any declaration to suit one's needs: the principles of the Red Cross say, among other things (my italics):

The International Red Cross movement, born of a desire to bring assistance without discrimination to the wounded on the battlefield, endeavors, in its international and national capacity, to prevent and alleviate human suffering wherever it may be found. Its purpose is to protect life and health and to ensure respect for the human being. It promotes mutual understanding, friendship, cooperation and lasting peace amongst all peoples.
Does this mean that the Red Cross condones active euthanasia for hopeless cases? Unlikely.
In order to continue to enjoy the confidence of all, the movement may not take sides in hostilities or engage at any time in controversies of a political, racial, religious, or ideological nature.
Does this mean Red Cross people are not allowed to have any controversial views? Not at all; it is the movement that (for purely pragmatic reasons) does not take sides. But someone could bend the declaration to mean that.

In the same way that the Red Cross is unlikely to condone euthanasia, because it goes against the spirit of the organization and other parts of its principles, transhumanism is unlikely to condone using pragmatic violence against ideological enemies - that would go against the spirit of the movement and definitely contradict its strong emphasis on tolerance and diversity.

If one views the principles as a prescription of a certain morality, then their weaknesses (for they exist and must be acknowledged) would be problematic; a lot would be left up to the individual to interpret. But if they are viewed as descriptive of what transhumanists usually seek to do, then the problem vanishes: they are vague just because they seek to be general and describe a very diverse and individualistic movement. Any stricter definition would by necessity exclude many self-admitted transhumanists.

Coercion

The question of when coercion is acceptable (if at all) is one of the Eternal Questions that constantly reappear when libertarian issues are debated (as they often are in transhumanist fora). It basically boils down to a question of how individuals and society are to interact in a variety of situations - with each side pointing out examples of where the other side's models become nasty. It is gasoline for flamewars.

For once, I think Erik is somewhat right in his critique: discouraging any attempts to impose will or ideas through coercion may be too strong a statement. I know some of my co-authors disagree with this view. At the same time I do not interpret the problem the same way as Erik does, especially since redefining coercion to include persuasion widens the concept into uselessness.

On the other hand, having a strong bias against coercion may be better than the more cautious 'coercion is sometimes acceptable', since that statement will inevitably be followed by the question "Under what circumstances do transhumanists accept coercion?" - and we are back in the flamewar. Regardless of how reasonable your answer is, it will not represent transhumanists in general, and your questioner will doubtless find circumstances where it becomes unwieldy or nasty. Ignoring the coercion problem in the principles will not solve anything either, since that could be interpreted as tacit acceptance of coercion to achieve transhuman goals.

What to do with this sticky problem? I don't know.

The Market

Erik claims that the principles promote a totally free market, which he sees as a Bad Thing. While they do not explicitly do this, it is a rather reasonable interpretation (after all, several of the original authors are extropians). They are compatible with other forms of markets too, as long as they are not coercive.

Whether a free market economy is good or bad is another big issue, just as controversial as the ethics of coercion. Suffice it to say that I do not share Erik's view that a free market would drag us back to the Middle Ages or make most people poor; neither do I believe it is the ideal solution to most social problems; I call myself market-agnostic. The market may be a first step towards a new kind of regulating system, based on voluntary self-organization rather than deliberate control, but its current form is definitely not the ultimate solution.

I think it is important for us to try to avoid the strong emotions that sometimes overcome us (despite our transhuman aspirations) when we discuss hotly debated topics, even when they are strongly linked with our worldviews. Is there a real, objective issue, or are we victims of a vicious fight between different selfish memes?

Some Specific Points

Erik also makes some points I feel I have to comment upon.

Openness Versus Silence

This is a very dangerous principle. First, if everyone knows about the possibilities of >H tech, this knowledge can be misused. If nationalist groups learn about Uploading and believe in it, they'll try to be among the first -- maybe successfully.
But as many others have pointed out, it is impossible and quite probably undesirable to limit the spread of information -- especially the knowledge of strategic technology.

Arthur Kantrowitz argues convincingly in The Weapon of Openness that the best way to ensure that democracies and open groups retain their technological lead is openness, not withholding information. In fact, secrecy promotes corruption and divisiveness and slows progress, while openness encourages critical examination, public discussion and allows a much larger group of people to pool their brainpower in a project.

We should not overestimate our originality; what we can come up with others can also come up with - and they often have better funding. It would be extremely unlikely that nasty groups would not learn about cutting edge research just because we wouldn't tell them. If uploading is becoming a real possibility, then there is no point in not telling people about it, since the groups with power are bound to have some experts that understand the question and may even be working on it.

Two Dangerous Memes

Achieving financial success -- probably the most common goal -- is not without danger and should therefore not be encouraged. Any financial advantage achieved by one person usually leads to financial disadvantages for others. The higher the money concentration is, the lower is the life standard for the masses.
This paragraph contains two pernicious memes that are quite popular today.

First of all, if we avoided encouraging - or outright discouraged - anything that is not without danger to the individual and society, we would have to discourage just about everything: there is nothing in the world that could not become dangerous in some way, neither water, transhumanism, nor Lego blocks.

In the last decades many people have become accustomed to the relative safety of the western world and have started to demand safety in all situations - safe air travel, safe toys, safe nuclear power plants, safe genetic engineering, safe politics. In many cases this is quite positive and has driven useful technological change, but it sometimes becomes a conservative force that demands safety even where no safety can be provided or proved, and a ban on everything that could be unsafe.

But there is no safety anywhere in the universe. We can only do our best.

The second pernicious meme is much more dangerous. I call it the 'zero-sum meme', the 'economics of scarcity': what I win, somebody else loses. It is very common today and has been far too common in economics too, but as any modern economist can tell you, it is false. If it were true, then the net wealth in society would not change over time, and we would still live in the stone age - which we obviously don't; there has been a tremendous growth of wealth since then. If two people make a transaction, they are usually both better off than before. The economy is usually a positive-sum game.
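The positive-sum point can be made concrete with a toy calculation (the numbers and the trade_surplus helper are my own illustration, not anything from the principles or from Erik's text): when two parties value a good differently, a voluntary exchange at any price between the two valuations leaves both parties with a gain, so total wealth increases rather than merely moving around.

```python
# Toy illustration of a positive-sum trade (made-up numbers).
# A seller values a bicycle at $50; a buyer values it at $120.
# Trading at $80 makes both better off at once.

def trade_surplus(seller_value, buyer_value, price):
    """Return (seller's gain, buyer's gain) from a voluntary trade."""
    seller_gain = price - seller_value   # seller prefers the cash to the good
    buyer_gain = buyer_value - price     # buyer prefers the good to the cash
    return seller_gain, buyer_gain

seller_gain, buyer_gain = trade_surplus(50, 120, 80)
print(seller_gain, buyer_gain)          # 30 40
print(seller_gain + buyer_gain)         # 70: net wealth created, not just moved
```

Zero-sum thinking would predict the two gains always cancel to zero; here they sum to a positive surplus, which is what 'positive-sum game' means.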

The zero-sum meme is dangerous because it suggests that the best we can do is to distribute wealth equally, not create more of it, so the main issue becomes who has more wealth and how to distribute it according to some moral system (quite often against the wishes of the "wealthy"). This discourages attempts to increase wealth, and suggests that the transhuman goal of transcending all limits will benefit just an elite - and make the rest of humanity worse off.

Conclusions

A draft with some dangerous implications, I am afraid. Defining a consensus platform where there is no consensus is quite pointless.
Do we need the principles? As Erik points out, transhumanism is so broad that any attempt to describe a consensus is bound to fail. The principles do not attempt to define what all transhumanists think, or what they should think. Instead they seek to suggest what we, or at least a subset of us, think is important in transhumanism.

These principles can be seen as an attempt to show what transhumanism was like in the late years of the 20th century, before it became something else. They were not intended to remain static and unchanging, but themselves suggest that to remain viable they have to be continually redefined and re-evaluated. Erik is helping this process along whether he wants it or not.