March 16, 2013

The privacy of hive minds

I was interviewed for George Dvorsky's article "How Much Longer Until Humanity Becomes A Hive Mind?"

There is an interesting twist to the privacy issue: is privacy even relevant in a hive mind? Of course, the situation is different if we just use the term to denote a highly connected society. But a complete hive mind would be a single being. Why should there be any need for privacy?

There are two reasons. The first treats privacy as compartmentalization. Compartmentalization is important in our own bodies and brains. Everything is *not* connected to everything, but usually in fairly specific ways. Our different mental modules do not completely talk to each other. Brain connectivity matrices are fairly sparse, and indeed seem to have a nonrandom structure. There are no doubt some cross-connections (and synesthetes), but there is not much reason to link the primary visual cortex straight to motor planning.
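A toy illustration of what "sparse and nonrandom" means for a connectivity matrix (a minimal sketch assuming NumPy, with made-up numbers rather than real neuroanatomical data): build a block-modular adjacency matrix and compare its overall density with the density inside each module.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modules, module_size = 5, 20
n = n_modules * module_size

adj = np.zeros((n, n), dtype=bool)
for m in range(n_modules):
    lo, hi = m * module_size, (m + 1) * module_size
    # dense connections inside each module...
    adj[lo:hi, lo:hi] = rng.random((module_size, module_size)) < 0.3
# ...and only sparse connections between modules
adj |= rng.random((n, n)) < 0.01
np.fill_diagonal(adj, False)  # no self-connections

overall = adj.mean()
within = np.mean([adj[m * module_size:(m + 1) * module_size,
                      m * module_size:(m + 1) * module_size].mean()
                  for m in range(n_modules)])
print(f"overall density {overall:.3f}, within-module density {within:.3f}")
# Far fewer links than all-to-all, and the links cluster into modules:
# everything is not connected to everything.
```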

Keeping systems separate makes it easier for them to learn what to actually respond to - a neural network trying to learn a mapping from inputs to a desired output will do much better if there are no irrelevant input channels. Even the existing cross-connections seem to sometimes impair our thinking, like the weird effect where immobilizing one arm makes the other more dexterous, or where using TMS to inhibit some cortical areas improves performance. So a unified hive mind might actually not want every unit to be in contact with every other: besides the quadratic bandwidth requirement, most information would be irrelevant and distracting. Modularity and abstraction barriers are great for software and organisations, and no doubt good engineering principles for hive minds.
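To make the learning claim concrete, here is a minimal sketch (assuming NumPy; the model and numbers are illustrative, not from any study) of a learner whose generalization degrades as irrelevant input channels are added. Ordinary least squares is fit on a small training set, with and without junk channels, and test error is compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def test_error(n_irrelevant, n_train=40, n_test=1000):
    """Mean squared test error of least squares with added noise inputs."""
    n_relevant = 5
    w = rng.normal(size=n_relevant)  # true weights on the relevant inputs

    def make(n):
        X_rel = rng.normal(size=(n, n_relevant))
        X_junk = rng.normal(size=(n, n_irrelevant))  # irrelevant channels
        y = X_rel @ w + 0.1 * rng.normal(size=n)     # target ignores the junk
        return np.hstack([X_rel, X_junk]), y

    X_tr, y_tr = make(n_train)
    X_te, y_te = make(n_test)
    coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # fit on all channels
    return np.mean((X_te @ coef - y_te) ** 2)

for k in (0, 10, 30):
    print(f"{k:2d} irrelevant channels -> test MSE {test_error(k):.3f}")
# The test error typically grows with the number of irrelevant channels:
# the model wastes its capacity fitting noise.
```

As for the quadratic bandwidth requirement: full pairwise connectivity among n units needs n(n-1)/2 links, so a hive of a million minds would need about half a trillion channels - another reason to keep the wiring sparse and modular.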

Second, privacy is about our desire to control who has access to our information and what they do with it. The problem is that once information is "out there" we lack control. Leaks can happen because somebody breaches trust, but also because apparently innocuous information is inferentially promiscuous and allows others to deduce unintended things, because stored information may become accessible in the unknown future, or because we do not realize the range of what can be done with it. This has plenty to do with how the actions influence us or our goals, and little to do with the information itself. So one could argue that in a true hive mind this account of privacy as information control and trust is not needed, since all units have aligned interests.

But that seems to be a big assumption. The "interests" of many parts of our bodies and minds do not seem to be perfectly aligned, yet they participate in the organism. There are selfish genes and genetic components, contradictory drives and plans, and so on. I have a desire to stay healthy in order to achieve various higher-order life goals, but my hypothalamus makes me desire fattening food. And if the hive mind comes about through the gradual merging of previously independent people, it is plausible that many forms of local and group selfishness may be grandfathered in. It is not even clear that it would be better to have perfectly aligned interests: competition is a good method of generating diverse new solutions. The fact that we have multiple goals and shift between them seems to prevent the kind of universalist ruthlessness that is easy to deduce from utility-based AI programs. A hive mind might hence benefit from not having subsystems with goals identical to its own, especially if the top-level goals are so complex that they are hard to represent in the subsystems.

So privacy might not be exactly dead even in a hive mind world.

Posted by Anders3 at March 16, 2013 03:44 PM