Transhumanists always seem to get into discussions about personal identity, and how it can change. Usually these discussions turn into philosophy and matters of opinion rather than useful discussions linked to the real world. This essay is an attempt to create a somewhat consistent framework to discuss questions of personal identity, change and transformation.

The framework will be based on elements of basic calculus, but is intended to be qualitative rather than quantitative. A non-mathematician would likely have explained things differently but possibly equivalently.

It will occasionally be useful to talk about "state space", the space of all possible states of the system, and "identity space", the space of all possible identities. The self function maps the state space into identity space:

self: state -> sense of identity

Neither of these spaces needs to be Euclidean or finite-dimensional, but I will assume that both are metric.

Note that as the state changes, so does the sense of identity. Hmm, I see I have not been general enough here: the self-function need not be universal; it is unique for each system (I identify myself with my actions, you might identify with your memes, and somebody might identify with their body). So if we assume the existence of some kind of abstract "superself-function" which for any system gives us its sense of identity when in a certain state, we get (for brevity we will call it the self-function anyway for the rest of the paper):

self: state x state -> sense of identity

This means that *self(X,Y)* is the sense of identity that the system in state *X* ascribes to the state *Y*.

Note that *self(X,X)* is history-dependent if the system has a
memory of its past. This information is included in its state *X*.

Also note that most people seem to assume *self(X,X)* never changes.
I would say this is because 1) *self(X,X)* is rather slow-changing
over time, and 2) it makes a lot of sense to make *self(X,X)*
one's mental origin ('me') when one compares oneself with other
actual and potential selves. That is, the change

|self(X(t),X(t)) - self(X(t-d),X(t-d))|

remains small for moderate *d*.

Notation: I will henceforth write

dist(t,s) = |self(X(t),X(t)) - self(X(t),X(s))|

for the distance between me at time *t* and me at time *s*, as evaluated at time *t*.
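To make the notation concrete, here is a toy sketch of my own (not from the essay): a sense of identity is represented as a vector of weighted traits, so the self-function maps a state into a metric identity space, and *dist* is just the Euclidean distance there. The traits and weights are invented examples.

```python
import math

def self_fn(evaluator_state, evaluated_state):
    """self(X, Y): the identity the system in state X ascribes to state Y.

    The evaluator's weights encode what it identifies with (actions,
    memes, body...); the evaluated state supplies the trait values.
    """
    return [w * evaluated_state["traits"][k]
            for k, w in sorted(evaluator_state["weights"].items())]

def dist(state_t, state_s):
    """dist(t,s) = |self(X(t),X(t)) - self(X(t),X(s))|, evaluated at t."""
    a = self_fn(state_t, state_t)
    b = self_fn(state_t, state_s)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

me_now = {"weights": {"actions": 1.0, "memories": 0.5, "body": 0.1},
          "traits":  {"actions": 0.9, "memories": 0.8, "body": 0.7}}
me_past = {"weights": {"actions": 1.0, "memories": 0.5, "body": 0.1},
           "traits":  {"actions": 0.7, "memories": 0.3, "body": 0.7}}

print(dist(me_now, me_now))   # 0.0: I am at distance zero from myself now
print(dist(me_now, me_past))  # small but nonzero: my past self has drifted
```

Note that the distance is evaluated with *my current* weights, which is what makes *self(X,Y)* asymmetric in general: a system with different weights would measure a different distance between the same two states.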

I notice that I can only evaluate *self(X,Y)* when I'm conscious.
When I'm not conscious I will not do this, so *self(nonconscious, conscious)*
is undefined, but *self(conscious, nonconscious)* is defined, and the
distance seems to still be less than epsilon (some small threshold
within which I count a state as 'me'). So I consider my past
sleeping self to be myself.

What about the future? My state *X(t)* is evolving, and it is
quite possible for *dist(t,t+d)* to exceed any
bound if I'm really lucky/unlucky (depending on view). That
means I can become someone more different from my current self
than I am from a stranger. This frightens many people. However,
since *X(t)* is more or less continuous and *self(X(t),X(t))* seems
to be continuous and fairly resilient to noticeable changes in
my body, mind and environment, it seems likely that, barring
any surprises, I will remain myself (as estimated by me
today) at least for some time.

If our states are evolving in a chaotic manner, which seems likely,
then *dist(t,t+d) ~ exp(lambda*d)*, where *d* is the time into the future
and *lambda > 0* our "identity Lyapunov constant" (which may not be a
constant either, but let's keep things simple right now).

Since our past seems to become "not me" in the far past, the above
formula does not hold for *d < -t*, and we get a suggestion that *X(t)*
is not only chaotic in the positive time sense but also in the negative
time sense - i.e. we have a whole spectrum of Lyapunov constants
of all signs.
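The exponential divergence can be illustrated with any chaotic system; the logistic map below is my own stand-in for "chaotic state evolution", with the identity distance taken as the plain separation between two almost-identical states.

```python
import math

def logistic(x):
    """One step of the logistic map x -> 4x(1-x), a standard chaotic system."""
    return 4.0 * x * (1.0 - x)

# Two almost-identical initial "states" differing by a tiny perturbation.
x1, x2 = 0.4, 0.4 + 1e-9
distances = [abs(x1 - x2)]
for _ in range(20):
    x1, x2 = logistic(x1), logistic(x2)
    distances.append(abs(x1 - x2))

# dist(t,t+d) ~ dist(t,t) * exp(lambda*d), so the exponent can be
# estimated from the growth of the separation over d steps:
d = 20
lam = math.log(distances[d] / distances[0]) / d
print(lam)  # close to ln 2 ~ 0.693, the known exponent of this map
```

After 20 steps the initially negligible difference has grown by a factor of roughly a million: two "selves" that start out practically indistinguishable need not stay that way.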

However, these horizons need not be real: just as somebody falling into a black hole doesn't see any event horizon, they might recede as we move closer to them. Others may remain very constant - I do not consider a cloud of ionized plasma to be me, and I doubt I would even if I were standing next to an armed nuclear weapon. So "death" can be considered moving across a horizon.

In fact, this may explain why some dying people accept their death: the horizons recede as they die, and they no longer consider their inevitable death as a loss of identity. Compare this to the behavior of Timothy Leary.

However, in the future we might change even more dramatically, by becoming immortal transhumans, posthuman jupiter brains or open standards. I would guess it is very likely that many of the horizons will recede quite quickly as we approach them. Some might remain, and that suggests that there can be jumps in identity.

If uploading is to be regarded as successful, the upload should
consider itself to be the previous person: *dist(upload, human)*
should be small enough. "Small enough" is commonly suggested to
be roughly equal to the ordinary changes in identity during one's
life; as we have seen, the definition of "one's life" may be a
bit tricky since our remote pasts may actually be too alien.
Perhaps a better definition should be that the maximal allowable
change in identity should be on the order of the identity
changes during our self-perceived past:

dist(upload, human(t)) < max_{s in self-perceived past} dist(t,s)

Note that this can be far less than the maximal identity change over the entire life of human(t).

Since the upload will have roughly the same mental structure and
hence the same evaluating capabilities, *self(upload, human) ~ self(human, upload)*,
at least right after the uploading. After a while the distance
will likely grow.

Now, let's look at the original human *X* and the upload *Y*. Both are beings (assuming uploads have
consciousness), but neither will experience the experiences of the
other^{2},
so *X* and *Y* will be different beings. However, both *X* and *Y* will
evaluate their selves *self(X,X)* and *self(Y,Y)* to almost the same
sense of identity (as derived above), so they will be the same individual.
Legally, they might be persons or not, and can change personhood just
by changing jurisdiction.

So, it seems that if a person is copied (xoxed, forked or something similar) we will end up with a number of different beings, but the same individual. These beings will of course diverge at a rate determined by their Lyapunov constants, and in the long run become different individuals.

The identity horizon of a being in state *X* can thus be written

H = {state : dist(X,state) < D_{max}(X)}

where *D_{max}(X)* is the largest change of identity that the being in state *X* still accepts as being itself.
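The horizon membership test is simple to sketch. In this illustration of mine, *dist* and *D_{max}* are placeholder functions and identity space is one-dimensional, both assumptions made purely for the example:

```python
def in_horizon(x, state, dist, d_max):
    """True if `state` lies inside X's identity horizon
    H = {state : dist(X, state) < D_max(X)}."""
    return dist(x, state) < d_max(x)

# Toy one-dimensional identity space: identity is a single number,
# and D_max is a fixed tolerance of 2.0 around the current state.
dist = lambda a, b: abs(a - b)
d_max = lambda x: 2.0

print(in_horizon(0.0, 1.5, dist, d_max))  # inside the horizon -> True
print(in_horizon(0.0, 3.0, dist, d_max))  # across the horizon -> False
```

Making *D_{max}* a function of the current state rather than a universal constant is what lets the horizons recede as we approach them, as described above.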

**2:**
A simple way of proving this is to run *Y* on a deterministic
computer with a deterministic environment (non-determinism can at
least briefly be emulated by a look-up table with random numbers):
since *Y* would by definition experience the same things each time the
" *Y* program" was run, it cannot experience anything *X* is experiencing.

Greg Egan, "Closer". Originally appeared pp. 81-91, *Eidolon* 9, July 1992.

Anders Sandberg / asa@nada.kth.se