September 28, 2010

Uploading by Gmail

On Exi-Chat in January 2007, Giulio Prisco gave Google permission to upload him based on the data in his Gmail account.

Of course, most of the content of that account is not information about Giulio's mental state but about the mental states of others, so the reconstruction would work much better the more people gave future Google permission to do this.

Humor aside, I would really like to develop a good argument about when reconstructing a mind from its inputs and outputs is possible. As a WBE thinker who favors the slice-and-dice approach, I am suspicious of its feasibility. But am I wrong?

It is not too hard to construct "minds" that cannot be reconstructed easily from their outputs. Consider a cryptographically secure pseudorandom number generator: watching the first k bits will not allow you to predict bit k+1 with more than 50% probability until you have run through the complete state space (which can require on the order of 2^(number of state bits) output bits). This "mind" is not reconstructible from its output in any useful way.
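To make this concrete, here is a minimal Python sketch. The counter-mode SHA-256 generator and the majority-vote predictor are my own illustrative assumptions, not a real cryptographic construction; the point is just that a naive observer of the output stream predicts no better than chance.

```python
# Minimal sketch: a hash-based pseudorandom bit generator and a naive
# next-bit predictor. The predictor (majority vote over the last few
# bits, standing in for any cheap guesser) hovers around 50% accuracy:
# the output alone reveals nothing useful about the hidden state.
import hashlib
import secrets

def prng_bits(seed: bytes, n_bits: int):
    """Counter-mode hash generator: bits of SHA-256(seed || counter)."""
    bits = []
    counter = 0
    while len(bits) < n_bits:
        digest = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        for byte in digest:
            for shift in range(8):
                bits.append((byte >> shift) & 1)
        counter += 1
    return bits[:n_bits]

seed = secrets.token_bytes(16)        # 128-bit hidden "mind state"
stream = prng_bits(seed, 10_000)

# Naive predictor: guess the majority of the previous 8 bits.
hits = 0
for i in range(8, len(stream)):
    guess = 1 if sum(stream[i - 8:i]) > 4 else 0
    hits += (guess == stream[i])
print(f"prediction accuracy: {hits / (len(stream) - 8):.3f}")  # ~0.5
```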

However, this analogy also suggests that some cryptographic approaches might be relevant. Browsing a paper like Cryptanalytic Attacks on Pseudorandom Number Generators by Kelsey, Schneier, Wagner and Hall (PDF) shows a few possibilities: input-based attacks would involve sending various inputs to the mind and cryptanalyzing the outputs, while state compromise extension attacks make use of partially known states (maybe we have some partial brain scans). But the paper also describes ways the attacks can be made harder, and many of these seem to apply to minds: the outputs are hashed (there are nontrivial transformations between the mind state and the observable behavior), the inputs are combined with a timestamp (there might be timekeeping or awareness that makes the same experience feel different when experienced twice), and the generator occasionally reseeds with a new starting state (brain states might change due to random factors such as neuron misfiring, metabolism or death, sleep and comas, local brain temperature, head motion, cell growth, etc.). While the analogy is limited (PRNGs are very discrete systems with simple update rules, whereas minds are messy, more or less continuous systems with complex update rules, much harder to cryptanalyze neatly), I think these caveats do carry over.
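As a toy illustration of the state compromise idea, here is a sketch where a deliberately weak linear congruential generator plays the mind, so that the brute-force search is feasible. All the constants and the 16-bit "unknown" portion are arbitrary choices for the sketch.

```python
# Toy state compromise extension attack: the "mind" is a 32-bit LCG,
# we are given all but its low 16 bits (the analogue of a partial
# brain scan), and we recover the rest by matching a few observed
# outputs. LCG constants are the Numerical Recipes ones; everything
# here is didactic, not a claim about real PRNGs or real brains.
A, C, M = 1664525, 1013904223, 2**32

def step(state):
    return (A * state + C) % M

def output(state):
    return state >> 24  # the generator only reveals its top 8 bits

true_state = 0x9E3779B9
observed = []
s = true_state
for _ in range(6):                     # watch six outputs
    s = step(s)
    observed.append(output(s))

def matches(candidate, observed):
    t = candidate
    for o in observed:
        t = step(t)
        if output(t) != o:
            return False
    return True

known_high = true_state & 0xFFFF0000   # the "partial scan"
for low in range(2**16):               # brute-force the unknown bits
    candidate = known_high | low
    if matches(candidate, observed):
        print(f"recovered state: {candidate:#010x} "
              f"(true: {true_state:#010x})")
        break                          # future outputs now predictable
```

The point of the toy is the scaling: the search cost grows exponentially in the number of unknown state bits, which is why the paper's hardening tricks (and their mind analogues above) matter.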

But this is not a conclusive argument. Some minds are likely nonreconstructible (another example is the "mind" that just stores a list of its future actions: it can be reconstructed up to the point where the data runs out, and then becomes completely opaque), while other minds are likely trivially reconstructible (like the "mind" that just outputs 1 at every opportunity). A better kind of argument would be about the extent to which our behavioural output constrains possible brain states. I think the answer is hidden in the theory of identifying hidden Markov models from their outputs.
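As a hint of what that theory looks like in practice, here is a sketch of standard Baum-Welch re-estimation on a made-up two-state, two-symbol hidden Markov model, written in plain NumPy; the particular matrices are arbitrary assumptions for the example.

```python
# Sketch of the HMM angle: sample outputs from a hidden two-state
# "mind" and try to recover its dynamics with Baum-Welch (EM).
import numpy as np

rng = np.random.default_rng(0)

# True hidden dynamics: 2 states, 2 output symbols (arbitrary numbers).
A_true = np.array([[0.9, 0.1],   # state transition matrix
                   [0.2, 0.8]])
B_true = np.array([[0.8, 0.2],   # emission probabilities
                   [0.3, 0.7]])

# Generate an observation sequence from the true model.
T = 5000
states = np.zeros(T, dtype=int)
obs = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A_true[states[t - 1]])
for t in range(T):
    obs[t] = rng.choice(2, p=B_true[states[t]])

# Baum-Welch from a random starting guess.
A = rng.dirichlet([1, 1], size=2)
B = rng.dirichlet([1, 1], size=2)
pi = np.array([0.5, 0.5])

for _ in range(50):
    # Forward pass (scaled to avoid underflow).
    alpha = np.zeros((T, 2))
    scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    # Backward pass.
    beta = np.zeros((T, 2))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    # Expected state occupancies and transitions.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((2, 2))
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi += x / x.sum()
    # M-step: re-estimate the parameters.
    A = xi / xi.sum(axis=1, keepdims=True)
    for k in (0, 1):
        B[:, k] = gamma[obs == k].sum(axis=0)
    B /= B.sum(axis=1, keepdims=True)
    pi = gamma[0]

print("estimated transitions:\n", A.round(2))
print("estimated emissions:\n", B.round(2))
# Compare with A_true/B_true; the hidden-state labels may be swapped.
```

Even in this tiny case the parameters are only identifiable up to a relabeling of the hidden states, and EM can get stuck in local optima; the identifiability question only gets harder as the hidden state space grows toward anything brain-like.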

Posted by Anders3 at September 28, 2010 02:45 PM