September 12, 2008

Mind taxonomies

These are not the droids you are looking for.

Kevin Kelly speculates about a possible taxonomy of minds (and in another post discusses different kinds of self-improving intelligences). His aim is to consider how a mind might be superior to ours. Nice to see him rediscover/reinvent classic transhumanist ideas.

His list is rather random: a mixture of different implementations, different abilities, different properties, and different capacities for improving themselves or other minds. Still, it is an interesting start and a good way for me to check what ought to go into my (still unpublished) paper on the varieties of superintelligence. Here is Kelly's list, ordered by me and with comments in parentheses.

Implementation


  • Super fast human mind. (the classic "weak superintelligence" example)
  • Symbiont, half machine half animal mind.
  • Cyborg, half human half machine mind.
  • Q-mind, using quantum computing. (whether this would provide anything different from a classical mind is unclear)
  • Global mind -- large supercritical mind made of subcritical brains.

    • Hive mind -- large supercritical mind made of smaller minds, each of which is supercritical.
    • Low count hive mind with few critical minds making it up.

I find it interesting that he left out pure AI, a mind created de novo. In my paper we also include biologically enhanced humans.

Abilities


  • Any mind capable of general intelligence and self-awareness.
  • Mind capable of cloning itself and remaining in unity with clones. (cloning is easy if you are software; remaining "in unity" might be much trickier - I guess he refers to clones being able to exchange mental data, which seems to require at least some form of unchanging or lockstep-changing mental/neural structures)
  • Mind capable of immortality. (in our paper we noted that there is a great deal of difference between what you can do with minds that can be run for an indefinite subjective time and with minds that cannot)
  • Mind with communication access to all known "facts." (F1)
  • Mind which retains all known "facts," never erasing. (F2) (not clear whether this is even consistent; when a "fact" is disproven, does it leave a mental audit trail?)
  • Anticipators -- Minds specializing in scenario and prediction making. (overall, specialized minds have interesting potential)

A rather mixed bag of abilities.

Properties


  • Mind with operational access to its source code. (or its neural underpinnings. Not necessarily useful)
  • Super logic machine without emotion. (which might be inconsistent)
  • Storebit -- Mind based primarily on vast storage and memory. (the extreme would be the blockhead; I actually regard blockheads as intelligent)
  • Nano mind -- smallest (size and energy profile) possible super critical mind. (I guess supercritical refers to self-improving)
  • General intelligence without self-awareness. (might be unstable, and become self-aware. See the first chapter of Greg Egan's Diaspora)
  • Self-awareness without general intelligence. (probably not too hard to do)
  • Borg -- supercritical mind made of smaller minds, each supercritical but not self-aware.
  • Mind requiring protector while it develops. (overall, the issue of how vulnerable an entity is and how much development it needs is important)
  • Very slow "invisible" mind over large physical distance.
  • Vast mind employing faster-than-light communications.

Improvement abilities


  • Guardian angels -- Minds trained and dedicated to enhancing your mind, useless to anyone else.
  • Rapid dynamic mind able to change its mind-space-type sectors (think different)
  • Self-aware mind incapable of creating a greater mind.
  • Mind capable of imagining greater mind.
  • Mind capable of creating greater mind. (M2)
  • Mind capable of creating greater mind which creates greater mind. etc. (M3, and Mn)

I think one can clearly improve on this, and doing so would be both fun and useful.

What we really need is a better understanding of what would go into self-improving intelligence, since if there is any hint that it could indeed lead to a hard take-off scenario, then a lot of existential risk concerns become policy-relevant. At the same time, being open to the diversity of possible minds is important, since there are likely plenty of choices - and some might be closer than we think.

In my and Toby's paper we argued that there are a few basic dimensions of superintelligence: speed, multiplicity (the ability to run copies in parallel), memory (which includes a sub-hierarchy of kinds of better working memory that likely relates closely to intelligence), I/O abilities, and the ability to reorganize. These allow some superhuman abilities even using fairly simple "tricks" like running a group of subminds, creating internal organisations with division of labor just as organisation managers do, and "superknowledge" where the "thinking" is actually data-driven and due to the production of society at large. Different kinds of base minds are differently easy to upgrade along these dimensions: biology has strong speed limitations, AI and brain emulations are well suited for multiplicity, cyborgs are by definition better at interfacing with new systems, and so on. Different kinds of problems would also benefit from different kinds of expansion, suggesting that even if the overall result is an increase in effective general intelligence, in practice any particular problem has advantages and disadvantages that make certain mind designs better suited for it. Ultra-fluid minds could of course adapt to this, but there is a cost to being fluid too.
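As a toy illustration (my own sketch, not anything from the paper), one could encode this comparison as a small scoring scheme. The mind types and the ease-of-upgrade numbers below are purely hypothetical assumptions; the point is only to show how problem-specific weightings over the dimensions pick out different mind designs as better suited.

# Toy sketch: score hypothetical mind types on how easily they upgrade along
# the dimensions named in the post. All numbers are illustrative assumptions
# (0 = hard to upgrade, 2 = easy), not results from any paper.

DIMENSIONS = ["speed", "multiplicity", "memory", "io", "reorganization"]

UPGRADE_EASE = {
    "biological human": {"speed": 0, "multiplicity": 0, "memory": 1, "io": 1, "reorganization": 0},
    "brain emulation":  {"speed": 2, "multiplicity": 2, "memory": 2, "io": 1, "reorganization": 1},
    "de novo AI":       {"speed": 2, "multiplicity": 2, "memory": 2, "io": 2, "reorganization": 2},
    "cyborg":           {"speed": 1, "multiplicity": 0, "memory": 1, "io": 2, "reorganization": 1},
}

def rank_minds(problem_weights):
    """Rank mind types for a problem that stresses some dimensions more than others.

    problem_weights: dict mapping dimension name -> importance weight.
    """
    scores = {
        mind: sum(problem_weights.get(d, 0) * ease[d] for d in DIMENSIONS)
        for mind, ease in UPGRADE_EASE.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A problem dominated by parallel search rewards multiplicity and speed;
# a different weighting (say, heavy on I/O) would reorder the ranking.
print(rank_minds({"multiplicity": 3, "speed": 2}))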

Given that the collective minds formed by ARG players can already act as cheap kinds of superintelligence (or maybe supercompetence is a better word?) today, understanding the space of minds can be quite crucial and profitable.

Posted by Anders3 at September 12, 2008 10:05 PM