The Calculus of Identity

by Anders Sandberg

Transhumanists always seem to get into discussions about personal identity, and how it can change. Usually these discussions drift into pure philosophy and matters of opinion rather than anything usefully linked to the real world. This essay is an attempt to create a somewhat consistent framework for discussing questions of personal identity, change and transformation.

The framework will be based on elements of basic calculus, but is intended to be qualitative rather than quantitative. A non-mathematician would likely have explained things differently, but possibly equivalently.

The Self Function

My suggestion is something like this: self() is a function, computed by a system capable of doing so, that acts on that system's current state and produces something called a 'sense of identity':
self: state -> sense of identity
It will occasionally be useful to talk about "state space", the space of all possible states of the system, and "identity space", the space of all possible identities. The self function maps state space into identity space. Neither of these spaces needs to be Euclidean or finite-dimensional, but I will assume that both are metric spaces.

Note that as the state changes, so does the sense of identity. Hmm, I see I have not been general enough here: the self-function need not be universal; it is unique to each system (I identify myself with my actions, you might identify with your memes, and somebody else might identify with their body). So if we assume the existence of some kind of abstract "superself-function", which for any system gives us its sense of identity when in a certain state, we get (for brevity I will call it the self-function anyway for the rest of this essay):

self: state x state -> sense of identity
This means that self(X,Y) is system X's estimate of system Y's sense of self. In practice X can of course never calculate this, only self(X,X), since it only has access to its own state; but it can make an estimate of self(Y,Y), which of course need not be anywhere near the true self(Y,Y). Two such estimates would be self(X,Y'), an estimate of how one would feel if one were like Y, and self(Y',Y'), an estimate of how someone like Y would feel about itself.
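To make this less abstract, here is a minimal sketch in Python. Everything in it is my invention for illustration: the feature vectors, the weights, and the idea of representing each system's evaluating function as a weighting of the features it identifies with.

    import numpy as np

    # Toy model: a state is a vector of features (actions, memes,
    # body traits), and a sense of identity is a weighted view of a
    # state. The weights encode what a given system identifies with.

    def make_self_function(weights):
        """Build an evaluating function: state -> sense of identity."""
        w = np.asarray(weights, dtype=float)
        return lambda state: w * np.asarray(state, dtype=float)

    # X identifies mostly with its mind, Y mostly with its body.
    self_X = make_self_function([0.9, 0.1])   # (mind, body) weights
    self_Y = make_self_function([0.2, 0.8])

    y_state = [0.5, 0.5]
    print(self_X(y_state))   # self(X,Y): X's estimate of Y's sense of self
    print(self_Y(y_state))   # self(Y,Y): Y's own sense of identity

The two printouts differ even though the state is the same, which is the whole point: the sense of identity depends on who is doing the evaluating.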

Note that self(X,X) is history-dependent if the system has a memory of its past. This information is included in its state X.

Also note that most people seem to assume self(X,X) never changes. I would say this is because 1) self(X,X) changes rather slowly over time, and 2) it makes a lot of sense to make self(X,X) one's mental origin ('me') when comparing oneself with other and potential selves.

The Dynamics of Self

Now, let's apply this to some transhumanist problems. Let X(t) be my state over time; self(X(t),X(t)) is then my sense of identity. From experience I know that I tend to identify with my past selves at least a certain time T back, so we get (assuming some kind of distance metric in identity space):
|self(X(t),X(t)) - self(X(t-d),X(t-d))| < epsilon for d < T
where epsilon is a constant and d is how far into the past I look. In fact, I would say that normally the distance is less than epsilon*d, suggesting that self(X(t),X(t)) is continuous.

Notation: I will henceforth call |self(X(t),X(t)) - self(X(t),X(s))| dist(t,s), the distance between me at time t and me at time s, as evaluated at time t.
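Under the same toy assumptions as before (states as feature vectors sampled once per time step, a Euclidean metric on identity space - both my assumptions, for illustration only), dist(t,s) can be sketched like this:

    import numpy as np

    def dist(states, weights, t, s):
        """|self(X(t),X(t)) - self(X(t),X(s))|, computed with the
        evaluating weights X holds at time t."""
        w = np.asarray(weights[t], dtype=float)
        return float(np.linalg.norm(w * states[t] - w * states[s]))

    # A life where the state drifts slowly, so that recent past
    # selves stay within epsilon of the current self.
    rng = np.random.default_rng(0)
    states = np.cumsum(0.05 * rng.standard_normal((30, 4)), axis=0)
    weights = np.ones((30, 4))

    print(dist(states, weights, 29, 28))  # small: the recent past is still "me"
    print(dist(states, weights, 29, 0))   # typically larger: the far past drifts away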

I notice that I can only evaluate self(X,Y) when I'm conscious. When I'm not conscious I will not do this, so self(nonconscious, conscious) is undefined, but self(conscious, nonconscious) is defined, and the corresponding distance still seems to be less than epsilon. So I consider my past sleeping selves to be myself.

What about the future? My state X(t) keeps evolving, and it is quite possible for dist(t,t+d) to exceed any bound if I'm really lucky/unlucky (depending on your view). That means I can become someone more different from my current self than a stranger is. This frightens many people. However, since X(t) is more or less continuous, and self(X(t),X(t)) seems to be continuous and fairly resilient to noticeable changes in my body, mind and environment, it seems likely that barring any surprises I will remain myself (as estimated by me today) at least for some time.

If our states are evolving in a chaotic manner, which seems likely, then dist(t,t+d) ~ exp(lambda*d), where d is the time into the future and lambda > 0 is our "identity Lyapunov constant" (which may not actually be constant either, but let's keep things simple right now).

Since our past seems to become "not me" in the far past, the above formula does not hold for d < -T, which suggests that X(t) is chaotic not only in the positive time direction but also in the negative one - i.e. we have a whole spectrum of Lyapunov constants of all signs.
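As a purely illustrative calculation (the value of lambda is invented, not measured), exponential divergence means the distance to future selves eventually exceeds any fixed bound:

    import numpy as np

    # If dist(t,t+d) ~ exp(lambda*d), the distance grows without
    # bound and will sooner or later pass any fixed Dmax - the point
    # where the future self crosses the identity horizon.
    lam = 0.1     # invented "identity Lyapunov constant", per year
    dmax = 10.0   # invented maximal acceptable identity change
    for d in (1, 5, 10, 20, 40):
        growth = np.exp(lam * d)
        marker = " <- beyond the identity horizon" if growth > dmax else ""
        print(f"d = {d:2d} years: dist ~ {growth:8.1f}{marker}")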

Identity Horizons

Alexander Chislenko suggested (in his excellent paper "Drifting Identities", from which I have borrowed many ideas) that we have an identity horizon, beyond which we would not recognize versions of ourselves as ourselves. The canonical example would be the horizon dividing our current selves from our potential dead selves. The horizons are of course state-dependent, and can shift as we change.

However, these horizons need not be real: just as somebody falling into a black hole never sees an event horizon, they might recede as we move closer to them. Others may remain very constant: I do not consider a cloud of ionized plasma to be me, and I doubt I would even if I were standing next to an armed nuclear weapon. So "death" can be regarded as moving across such a horizon.

Death

There are several kinds of death. The usual kind consists of having our states X(t) move towards a non-living, highly entropic attractor and losing cohesion. Most people seem to think this part of state space is delineated by discontinuities in state, but as anybody who has actually watched another person die slowly knows, it can be a very gradual process with no clear discontinuities [1].

In fact, this may explain why some dying people accept their death: the horizons recede as they die, and they no longer consider their inevitable death as a loss of identity. Compare this to the behavior of Timothy Leary.

Death Forward

Another kind of death is "death forward": we change so much that we are no longer recognizable to ourselves, and become new persons. Note that this already happens all the time: I doubt my 5-year-old self would have recognized me as I am now as itself; our appearance, values and ways of thinking are simply too different. dist(5 years, 25 years) > Dmax(5 years). And the same goes in the other direction: I have a hard time identifying with the little human who thought frozen puddles were a conspiracy and that it would be interesting to jump from a pier into deep water just to see what would happen, so dist(25 years, 5 years) > Dmax(25 years). Since different people X evaluate self(X,Y) differently, some might regard all their previous states (including some quite non-human states, such as a blastula) as themselves, while others regard only the latest as themselves. Both are right, since they apply different evaluating functions self(X,·) to their pasts.

However, in the future we might change even more dramatically, by becoming immortal transhumans, posthuman Jupiter brains or open standards. I would guess that many of the horizons will recede quite quickly as we approach them. Some might remain, and that suggests that there can be jumps in identity.

Uploading

One such example is destructive uploading: our minds are digitized in a destructive manner and a new entity, the uploaded version, is created. So, will dist(human, upload) be too large for us to regard the upload as ourselves? That seems to depend a lot on how we evaluate it; some people identify with their physical body and might hence regard the difference as immense, while others who identify with their mind would regard it as smaller. There doesn't seem to be any reason to think the difference cannot be well within the identity horizon for at least some people.

If uploading is to be regarded as successful, the upload should consider itself to be the previous person: dist(upload, human) should be small enough. "Small enough" is commonly suggested to mean roughly equal to the ordinary changes in identity during one's life; as we have seen, the definition of "one's life" may be a bit tricky, since our remote pasts may actually be too alien. Perhaps a better criterion is that the maximal allowable change in identity should be on the order of the identity changes during our self-perceived past:

dist(upload, human(t)) < max dist(t,s), where s ranges over the self-perceived past.
Note that this can be far less than Dmax(human(t)), since most of the past may have been rather unchanging, with the exception of becoming the person in the first place.
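The criterion can be sketched as a simple test; all the distances below are of course invented for illustration:

    # The identity jump caused by uploading should not exceed the
    # largest identity change found in the self-perceived past.
    def upload_acceptable(dist_upload, past_dists):
        """past_dists: dist(t,s) for every s in the self-perceived past."""
        return dist_upload < max(past_dists)

    past = [0.1, 0.4, 1.3]                 # growing up was the big jump
    print(upload_acceptable(1.0, past))    # True: within past variation
    print(upload_acceptable(2.5, past))    # False: beyond what life has shown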

Since the upload will have roughly the same mental structure and hence the same evaluating capabilities, self(upload, human) ~ self(human, upload), at least right after the uploading. After a while the distance will likely grow.

Copying

Non-destructive uploading poses another problem: suppose a person X is copied into an upload Y; are they the same person? The main problem here is that people tend to get confused by semantics: there is a difference between being an independent being with an individual consciousness (I don't experience what anybody else experiences, and neither do they experience what I experience), being an individual with a sense of selfhood (i.e. self(X,X) exists), and being a person, which is a legal term rather than a philosophical concept. A conscious system is a being (let's ignore borganisms for the moment), likely also an individual, and, if it is lucky, a person.

Now, let's look at X and Y. Both are beings (assuming uploads have consciousness), but neither will experience the experiences of the other [2], so X and Y will be different beings. However, both X and Y will evaluate their selves self(X,X) and self(Y,Y) to almost the same sense of identity (as argued above), so they will be the same individual. Legally, they might or might not be persons, and can change personhood just by changing jurisdiction.

So, it seems that if a person is copied (xoxed, forked or something similar) we will end up with a number of different beings, but the same individual. These beings will of course diverge at a rate determined by their Lyapunov constants, and in the long run become different individuals.
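A sketch of this bookkeeping, again using my toy Euclidean metric and an invented threshold for "almost the same sense of identity":

    import numpy as np

    # Two systems are separate beings by construction, but count as
    # the same individual while their senses of identity nearly
    # coincide. Threshold and coordinates are invented.
    def same_individual(id_x, id_y, threshold=0.1):
        return np.linalg.norm(np.asarray(id_x) - np.asarray(id_y)) < threshold

    original = [0.5, 0.8]       # self(X,X) just after copying
    copy     = [0.5, 0.8]       # self(Y,Y): practically identical
    later    = [0.9, 0.1]       # self(Y,Y) after years of divergence

    print(same_individual(original, copy))   # True: same individual
    print(same_individual(original, later))  # False: they have diverged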

Merging

Finally, what about merging (as described in Greg Egan's short story "Closer")? In this case two beings X and Y merge to form Z, a composite being with parts from both and possibly new emergent properties. It is not obvious how large dist(X,Z), dist(Y,Z), dist(Z,X) and dist(Z,Y) would become. A wild guess is that since Z would contain at least some of the identity of X, dist(X,Z) would be on the order of dist(X,Y)/2; this is likely more than most people would accept as themselves, so it seems likely that Z is regarded as a new individual by X and Y. Z, on the other hand, can trace its past through the lives of X and Y, and might after a while, with its new valuations, regard itself as a continuation of both X and Y with a preserved identity.
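The factor of one half can be made concrete under a strong simplifying assumption of mine: that Z sits at the midpoint between X and Y in identity space. Then dist(X,Z) is exactly dist(X,Y)/2:

    import numpy as np

    x = np.array([1.0, 0.0, 0.5])   # invented identity coordinates
    y = np.array([0.0, 1.0, 0.5])
    z = (x + y) / 2                 # the composite being as a midpoint

    print(np.linalg.norm(x - z))        # 0.707...
    print(np.linalg.norm(x - y) / 2)    # 0.707... - identical, as claimed

A real merging would of course also add emergent properties, pushing Z off the line between X and Y and making dist(X,Z) larger still.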

Conclusion

This framework for discussing identity is right now just a sketch, and maybe just a case of over-application of mathematics. Still, I think it can provide a fairly neutral way of discussing matters of changing identity, assuming its basic assumptions do not turn out to be too shaky.

Footnotes

[1]: Our current "maximal hull of identity" is the set of states we can consider as ourselves:
H = {state : dist(X,state) < Dmax(X)}
where Dmax(X) is the maximal change we can allow given our current mindset. This definition is a bit weak, but probably good enough to use. The boundary of H consists of our current identity horizons in state space.
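A toy membership test for H, with an invented Dmax and a Euclidean dist over made-up state vectors:

    import numpy as np

    def in_hull(current, candidate, dmax):
        return float(np.linalg.norm(current - candidate)) < dmax

    me = np.array([0.0, 0.0])
    slightly_changed = np.array([0.1, 0.2])
    ionized_plasma   = np.array([9.0, 9.0])

    print(in_hull(me, slightly_changed, dmax=1.0))  # True: still me
    print(in_hull(me, ionized_plasma, dmax=1.0))    # False: across the horizon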

[2]: A simple way of proving this is to run Y on a deterministic computer in a deterministic environment (non-determinism can, at least briefly, be emulated by a look-up table of random numbers): since Y would by definition experience the same things each time the "Y program" was run, it cannot experience anything X is experiencing.

References

Alexander Chislenko, "Drifting Identities".

Greg Egan, "Closer". Originally appeared in Eidolon 9, July 1992, pp. 81-91.

