April 23, 2012

What kind of humanity should we want to make?

Here is my TEDx Vasastan talk, on the big picture questions for the future:

Next TEDx with me will be in Tallinn.

Posted by Anders3 at 09:12 PM | Comments (0)

The limits of observational science

In bed with the greats

[1204.0492] Non-detection of the Tooth Fairy at Optical Wavelengths: they used a 1.3 m telescope aimed at a wisdom tooth under a pillow, but it disappeared during the night without any photometric evidence. "We report a non-detection, to a limiting magnitude of V = 18.4(9), of the elusive entity commonly described as the Tooth Fairy." They discuss whether this could be because of a superluminal or very fast Tooth Fairy, or perhaps that she is transparent at optical wavelengths.

Personally I would not discount quantum tunnelling or wormholes, both of which might explain the gift-giving abilities of Santa.

OK, it is an April Fools' paper.

Posted by Anders3 at 03:53 PM | Comments (0)

April 11, 2012

How many persons can there be: brain reconstruction and big numbers

Man on the brain

I have mentioned my skepticism about the idea that we could be reconstructed in a meaningful way from our stored email, life recordings, personality quizzes and genomes. In a recent thread on the ExtroBritannia list the issue came up again, and I think we made some progress on showing its implausibility (no, just saying it is impossible is not convincing).

The goal of the exercise is to estimate how much information is needed to reconstruct an individual “well enough” without having direct access to their nervous system. The reconstructing agency is assumed to have arbitrary computational powers, but is limited by available information.

In the following I will be using the Stirling approximation of factorials (log(n!) ≈ n log(n) - n) to calculate binomial coefficients: log(N over k) ≈ N log(N) - (N-k) log(N-k) - k log(k). Also, remember that the number of bits of information you need to supply to find a particular object among N is log2(N), and that log2(10^x) ≈ 3.32x.
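A quick Python sketch of these approximations (the function name and variables are mine, just for illustration):

    import math

    def log2_binomial(N, k):
        # Stirling: log(n!) ~ n*log(n) - n; the -n terms cancel in the ratio,
        # leaving log(N over k) ~ N log(N) - (N-k) log(N-k) - k log(k)
        f = lambda m: m * math.log(m)                    # natural-log Stirling term
        return (f(N) - f(N - k) - f(k)) / math.log(2)    # convert nats to bits

    # Bits needed to single out one object among N, and the log2(10^x) ~ 3.32x rule:
    print(math.log2(10**10))    # ~33.2 bits to pick one object among 10^10
    print(3.32 * 10)            # ~33.2, the same via the rule of thumb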

Record personality


Some think it is enough to reconstruct the right personality. "The number of personalities" is however not well defined. If we consider just the “big five” and assume we can tell apart 1% differences, we end up with 10^10 possibilities, just around 33 bits. But the difference between me and someone else is not really about fine differences in extroversion or conscientiousness, but personality quirks, language, memories and ways of deciding. Someone might share my personality traits yet hold different political views, belong to a different culture and have different experiences. There are far more possibilities there. But it is not clear how many, especially since there are nontrivial correlations and links between them – gay extrovert neophiles are less likely to be conservatives, believing Muslims are unlikely to espouse strong atheism, libertarians are somewhat likely to believe in the many-worlds interpretation of quantum mechanics and are very unlikely to be ancient Romans.
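As a sanity check of the trait arithmetic (five traits at 1% resolution, as assumed above):

    import math

    traits, levels = 5, 100                # "big five", distinguishable 1% steps per trait
    personalities = levels ** traits       # 100^5 = 10^10 combinations
    print(personalities, math.log2(personalities))   # 10000000000, ~33.2 bits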

Record experience


Another argument would be that we are shaped by our experiences, so the number of possible experiences determines the number of possible humans. Linde and Vanchurin argue that during a lifetime we can acquire at most 10^16 bits, based on the Boltzmann brain discussion in de Simone, Guth et al. This paper also mentions Landauer's lower bound of just 10^9 bits stored over a lifetime. However, those bits seem to be consciously learned bits rather than the information embodied in who we are.

Other estimates can be made. Human retinas have a bandwidth of 8.75 megabits per second, providing the brain with about 20 Mb/s in total across the 2 million axons in the optic nerves. The spinal cord similarly has a couple of million fibers, and we can lump it all together by guessing that the maximum input is on the order of 200 Mb/s. 200 Mb/s over 80 years is 5*10^17 bits. So by this approach, there might be up to 2^(5*10^17) = 10^(1.5*10^17) possible persons.
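The same lifetime-input estimate as a sketch (the 200 Mb/s and 80 years are the rough guesses above):

    import math

    input_rate = 200e6                     # ~200 Mb/s total sensory bandwidth (a guess)
    lifetime = 80 * 365.25 * 24 * 3600     # ~80 years in seconds
    bits = input_rate * lifetime           # ~5e17 bits of lifetime input
    print(f"{bits:.1e} bits -> up to 10^{bits * math.log10(2):.1e} possible persons")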

Unfortunately, while storing a few hundred petabits might soon be doable and life recording might allow us to document our lives very well, it doesn’t follow that we record the *right* information. My experience of a piece of music depends on complex details of my auditory physiology and mental processing: replaying it to another system is unlikely to produce the same experience.

The life recording approach forgets about initial conditions. Two pieces of software given the same input can behave utterly differently, including storing different information. Chaotic dynamical systems (and we certainly have some of them inside us) diverge exponentially when given different initial conditions, even when perfectly deterministic. And babies already demonstrate personality differences. So the above numbers need to be multiplied by the number of distinct starting states.

There are about 1.5 gigabytes of information in the genome, but much of this is shared between different humans; the genetic specification of a person relative to a baseline human genome is likely about 20 megabytes, or 2^(1.6*10^8) possibilities.
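In the same units (taking the ~20 MB person-specific figure above at face value):

    import math

    genome_bits = 20e6 * 8      # ~20 MB of person-specific genetic data = 1.6e8 bits
    print(f"2^{genome_bits:.1e} = 10^{genome_bits * math.log10(2):.1e} possible genomes")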

Genetics also doesn’t specify much of our brains: we have far fewer genes than neural connections, and they are generated in a complex semi-random process influenced by the environment in utero. So we have to take a look at the number of possible human brains. The calculations below show that this pumps up the information need enormously, at least by 15 orders of magnitude. And this information is not externally available (unlike the genetics, which is left in every skin flake).

It therefore appears unlikely that documenting the information in our environment plus initial conditions will be feasible within the conceivable future. So we need to get information from the brain to pin down which particular brain it is.

Human information output

Even if we were to visibly move our ~639 muscles at 10 Hz (about as fast as they can twitch), that would provide just a few kilobits of information per second.
Spoken and written words are even worse: the average entropy per English text character is about one bit, and the entropy rate of spoken dialogue is a few bits per second.

The average daily email production is about 15 emails containing about 30 kilobytes each, corresponding to an information production of 41 bits/s – much of it, of course, header information generated by the computer.
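Putting these output channels on the same footing (the muscle, speech and email figures are the rough estimates above):

    muscles = 639 * 10                # ~639 muscles at ~10 Hz, ~1 bit each: a ~6 kb/s ceiling
    speech = 3                        # entropy rate of spoken dialogue, a few bits/s
    email = 15 * 30e3 * 8 / 86400     # 15 emails of ~30 kB per day, spread over 24 h: ~42 b/s
    print(f"muscles ~{muscles} b/s, speech ~{speech} b/s, email ~{email:.0f} b/s")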

A high-resolution video and audio recording of our lives might have a far higher bit rate, but we do not contribute much new information to each frame.

This leads to a first tentative argument against reconstruction based on external data: we are acquiring potentially personality-affecting information at a fairly high rate during our waking life, yet not revealing information at the same high rate. The ratio seems to be at least 1000:1.

Still, a reconstruction enthusiast might be undeterred. Most of those input bits are discarded: we learn and change far more slowly than what we sense. If the number of possible distinguishable human minds is small enough, we should be able to determine which one inhabits a certain brain by inferring it from its behavior.

Number of brains

A human brain contains 10^11 neurons with a few thousand connections each, giving us around 10^15 synapses.
A simple argument for the number of possible persons would be that the 10^15 synapses of a brain can be in either a potentiated or unpotentiated state, leading to 2^(10^15) possible states. Or, more simply, we need 10^15 bits = 1 petabit of information to specify which of them a given individual is. There is also the issue of how many ways the 0.5*10^22 neuron pairs can be connected by these synapses. (0.5*10^22 over 10^15) is around 10^(7*10^15). So we need 23 petabits to specify which brain connectivity a person has.
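The connectivity count, using the Stirling helper from before (numbers as in the text):

    import math

    def log10_binomial(N, k):
        # Stirling approximation of log10(N over k), usable for astronomically large N and k
        f = lambda m: m * math.log10(m)
        return f(N) - f(N - k) - f(k)

    neurons, synapses = 1e11, 1e15
    pairs = neurons * neurons / 2                   # ~0.5*10^22 possible neuron pairs
    log10_conn = log10_binomial(pairs, synapses)    # ~7*10^15
    print(f"10^{log10_conn:.1e} connectivities, "
          f"{log10_conn / math.log10(2):.1e} bits (~23 petabits) to pick one")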

But most of these brains are indistinguishable. We do not become entirely different persons because one pixel on a TV screen 10 years ago was different or because a synapse just got removed. Just like macrostates in statistical mechanics contain *lots* of different microstates, our personal identity macrostates have room for plenty of microvariations. The fact that we remain identifiable (mostly) from day to day demonstrates this.

Many neural disorders can progress undetected until a few tens of percent of neurons in affected areas are gone. So let’s make the optimistic (in the sense that it makes reconstruction easier) assumption that brains with 90% the same connections produce the same person. This is likely not too far out: neural networks are robust to the deletion of a few connections. But it of course ignores that certain focal deletions can have big effects.

If we randomize 10^14 out of the 10^15 connections, we can select them in (10^15 over 10^14) ways, or 10^(1.4*10^14). We can then reconnect them in (0.5*10^22 - 10^15 over 10^14) ways, or 10^(8.1*10^14). So the total number of brain connectivities giving persons indistinguishable from the original is the product, 10^(9.5*10^14). So out of the 10^(7*10^15) possible brains only 10^(6.0*10^15) are distinguishable. That requires 2*10^16 bits, or 20 petabits.
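The macrostate correction, with the same helper (again just a sketch of the arithmetic above):

    import math

    def log10_binomial(N, k):
        f = lambda m: m * math.log10(m)              # Stirling approximation
        return f(N) - f(N - k) - f(k)

    pairs, synapses, rewired = 0.5e22, 1e15, 1e14

    pick = log10_binomial(synapses, rewired)             # which 10% to rewire: ~10^(1.4e14)
    rewire = log10_binomial(pairs - synapses, rewired)   # where to reattach them: ~10^(8.1e14)
    indistinct = pick + rewire                           # ~10^(9.5e14) indistinguishable variants
    distinct = log10_binomial(pairs, synapses) - indistinct   # ~10^(6e15) distinguishable brains
    print(f"10^{distinct:.1e} distinguishable brains, "
          f"{distinct / math.log10(2):.0e} bits (~20 petabits)")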

20 petabits is staggering, yet not unheard of - there are certainly bigger data centers around today. However, assuming that we produce 10 kb/s of personal data by moving or not moving, it would take 63,377 years to gather enough to specify a mind uniquely enough to construct a brain with close enough connectivity. If we actually produce one megabit per second of personal data it can be done in 633 years. To get it down into the range of a human lifetime we need tens of megabits per second: this likely cannot be done by external means, but requires interfacing directly with the nervous system.
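The waiting-time arithmetic:

    bits_needed = 2e16                     # ~20 petabits to pin down a distinguishable brain
    year = 365.25 * 24 * 3600

    for rate in (1e4, 1e6, 1e7):           # 10 kb/s, 1 Mb/s, 10 Mb/s of useful output
        print(f"{rate:.0e} b/s -> {bits_needed / rate / year:,.0f} years")
    # tens of millennia at 10 kb/s, centuries at 1 Mb/s, decades at 10 Mb/s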

Note that this only counts incompressible, relevant bits – information that actually helps determine the structure of the mind. In reality our activities are of course highly compressible and noisy: after having observed one of my verbal or kinetic tics for the first time, seeing repetitions is not very informative.

Conclusion

Even with arbitrary computational power, inferring which mind a given human contains appears infeasible given the limited amount of information about its internal state revealed in normal activity. Accurate documentation of the environment may provide helpful constraints, but since the relevant question is how environmental information is processed internally, emitted information will always have far higher constraining value.

It might be simpler for arbitrarily powerful future entities to just simulate all possible past humans than try to reconstruct particular ones based on personal information.

In the absence of arbitrary computational power, we will likely have to make do by preserving the brains themselves.

Posted by Anders3 at 04:17 PM | Comments (0)

April 10, 2012

Nice to see someone get it

Shuttle worship

Nature interviews Elon Musk:

Do you see a space-faring civilization as a way of defending humanity against a catastrophe on Earth?

Absolutely. We would be backing up the biosphere. We wouldn't just be preserving humanity, we would be preserving much of life. It is certainly possible for some calamity to come along — as we see in the several major extinction events in the fossil record. Humanity has obviously developed the means of destroying itself, so I think we need planetary redundancy to protect against the unlikely possibility of natural or man-made Armageddon.

It is important that we take action now to make life multi-planetary, because this is really the first point in the 4-billion-year history of Earth that it has been possible. That window of possibility will hopefully be open for a long time, but it may only be open for a short time. That's why I think urgent action is required on making life multi-planetary.

It is nice to see someone get it. Especially someone in a position to take positive action.

Reductions of existential risk have an amazingly large importance, making even low-probability-of-success projects worthwhile.

This is why we need something better than the shuttle.

Posted by Anders3 at 08:30 PM | Comments (0)