The Limitations of the Singularity

What limits are there to the speed at which knowledge and technology can increase? Singularitians often claim there are very few, and that the Singularity will be extremely rapid, on the order of a few days, hours or even seconds. But does this really hold water?

The Singularity is poorly defined, but one common definition is that it is the point where science and technology develop fastest globally, the inflexion point of the sigmoid curve of technological progress. This is also often believed to be the transition from humanity to posthumanity, although that transition could reasonably occur far before or after the inflexion point. The only thing everyone seems to agree on is that the Singularity is "The Big Event" of history in some way, which echoes the millennial meme and western eschatology (although it is separate from the Omega Point, with which it is often confused).

As I see it, there are a few limiting factors on the development of science and technology.

Information Overload

Information overload is already becoming a huge problem among scientists - there is too much information, and even when the irrelevant information is filtered away, the amount of relevant information is still growing at an immense speed. As has often been pointed out, the majority of all scientists who have ever lived are alive today, and the amount of research being done is staggering. This has the unfortunate effect of making it harder and harder to reach the forefront of a speciality (it has been a long time since anyone could know all of science, let alone a single field); students today have to learn more than students a few decades ago to excel in the same speciality, or narrow their focus even more. This forces more and more specialisation in science and technology, which requires new and more efficient means of collaborating and communicating with experts in other fields to avoid stagnation. In the long run the theoretical limit is that we would have to spend practically all our productive lives learning and teaching instead of doing any research or actual work, and that each discipline would become extremely specialised, with huge language and methodology barriers between them.

To overcome this we have to extend our life spans or find ways to improve learning and teaching immensely. Extending our lifespans would at least make it possible to eventually make new contributions, but the rate of progress would still slow to a logarithmic crawl (assuming a constant number of scientists). Improving learning and teaching is a more promising avenue, where some significant progress has appeared in recent years, such as mind maps, hypertext and the Internet. These tools need better integration with each other and widespread use before they become significant, though. Vernor Vinge suggested that intelligence amplification is the primer for the Singularity.
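To make the "logarithmic crawl" claim concrete, here is a deliberately crude toy model (my own illustration with invented parameters, not a prediction): a fixed pool of scientists who must each spend study time proportional to the existing body of knowledge before the remainder of their career can produce anything new.

# Toy model: research output with a growing education burden.
# All numbers are invented for illustration.
SCIENTISTS = 1000              # constant pool of researchers (assumed)
CAREER_YEARS = 40.0            # productive career length (assumed)
STUDY_YEARS_PER_UNIT = 0.01    # study needed per unit of existing knowledge
OUTPUT_PER_YEAR = 0.1          # output per fully productive scientist-year

knowledge = 100.0
for decade in range(1, 11):
    for _ in range(10):
        study = STUDY_YEARS_PER_UNIT * knowledge            # years spent reaching the frontier
        productive = max(0.0, 1.0 - study / CAREER_YEARS)   # career fraction left for research
        knowledge += SCIENTISTS * OUTPUT_PER_YEAR * productive
    print(f"decade {decade:2d}: knowledge = {knowledge:6.0f}")

Running this, growth flattens out as the study burden eats the whole career: knowledge creeps toward the ceiling where merely reaching the frontier takes a full lifetime. Longer lifespans move the ceiling outwards but do not remove it.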

We also have to deal with the fact that the amount of information relevant to any question is growing without bound, necessitating better filters, software agents and improvements in our own processing abilities. It is easy to become paralysed by the amount of available information, in which case most people simply fall back on common sense, "gut feeling" or wild guesses, sometimes with good results and sometimes with disastrous ones. It is possible that we need to develop an entirely new common sense for the information age.

Technology Diffusion

Another fact, often ignored in transhumanist discussions, is that knowledge and skills are unevenly distributed. Exponential growth also leads to exponential growth of differences, and although information diffuses from those who have much knowledge to those who have little, this diffusion becomes less and less effective as we approach the Singularity. This will make the differences greater and greater, until they become communication barriers and diffusion stops.
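A quick numeric illustration of the point (my own numbers, purely illustrative): two groups both growing exponentially, one slightly faster, drift apart exponentially as well.

# Two exponentially growing groups; the absolute gap between them
# also grows exponentially. Growth rates are invented for illustration.
leader = follower = 1.0
for year in range(51):
    if year % 10 == 0:
        print(f"year {year:2d}: gap = {leader - follower:9.1f}")
    leader *= 1.10     # faster group: 10% yearly growth (assumed)
    follower *= 1.05   # slower group: 5% yearly growth (assumed)

The gap grows by roughly an order of magnitude every twenty years here; however the rates are chosen, any difference in exponents eventually dominates.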

This may already have happened to some extent in the scientific community, which is a very worrying sign. An article in the August 1995 issue of Scientific American ("Lost Science in the Third World") discusses how third world scientific journals become essentially invisible to the mainstream (first world) scientific community due to citation services and reviewer prejudice. Due to economic constraints the third world cannot afford many important scientific journals, and the Internet is slow to spread and expensive there. This gap is self-reinforcing, and even if steps are taken to overcome it, there is a definite risk that accelerating progress makes it impossible to heal.

The possibility that the Singularity occurs in a very limited subset of humanity (a "spike" as opposed to a "swell") cannot be ruled out. This would create a tremendous knowledge and ability gap with unpredictable effects. In the past the have-nots have always rebelled against the elite over real or imagined grievances, and it appears likely that this would happen again - but this time the elite has a real chance of being so advanced that revolt is impossible.

On the other hand, the quicker a small group advances, the more likely it is that its attitudes and vision of the world will be reinforced (one of the main drawbacks of information filters is that they allow the user to filter away information that does not fit his or her attitudes), leading to severe groupthink.

Cultural barriers can also limit the spread of the Singularity or its priming technologies. Different languages effectively slow the transmission of information between different areas, and while English seems to be the current lingua franca, it should be noted that sizeable subsets of the Internet are in other languages, which makes their information inaccessible to many people. And different cultures and value-sets will seek development in different areas, which might lead to very different kinds of technological evolution if the differences grow too large.

Finally, there is the matter of human resistance to change. Even if a new technology is significantly useful, there is often a long delay before it becomes widely used, often simply because it is not perceived as necessary or as significant as it is. It took over 140 years (!) for the fax machine to become common, and several decades for computers and television to develop to the point where they became usable. Often older, less efficient systems remain even when better alternatives are present (for example, practically everyone uses QWERTY keyboards instead of the faster and less tiring Dvorak keyboards) as technologies become entrenched and shifting to a new system appears more tiresome than using the old. And to move to a wholly new technology, often an entire infrastructure has to be built, which is both slow and expensive.

Design Ahead

Something that seems to be overlooked alarmingly often in discussions about the Singularity is the huge interdependency of technological systems. Singularitians often claim that a few key technologies (nanotechnology and AI are the most common examples) merely need to be developed, and then everything will start snowballing. But this is not true. To build a better computer network we need both better hardware and better software, and quite possibly to teach the users to use them well; it is not enough to have the technological ability in one area - other areas have to be developed far enough to lead to new breakthroughs.

Technological synergies certainly do occur, and often spawn wholly new fields, but it should be noted that they are usually based on details from apparently unrelated fields rather than on a single breakthrough. The idea that we could progress by pouring all our energy into developing a few key technologies is just wishful thinking; if we could predict what was needed beforehand, we would not have to do any research at all. Often the greatest developments stem from unexpected combinations of unrelated areas, and even more often any scientific or engineering venture runs into unexpected problems that have to be solved with new methods.

Appealing to the power of engineering AI, evolutionary machines or very fast uploads will not work either. There seems to be a persistent myth that if a very fast intelligence could be created, it would immediately start to self-evolve into superintelligence. This is unlikely; an uploaded dog running at a million times our speed would still not become more intelligent, and it appears unlikely that uploaded individuals or very fast AIs would be able to learn all the areas required for self-evolution (see above for the problems of exponentially growing knowledge) and become skilled enough to apply them efficiently to themselves. In the past individuals have been able to contribute significant results to science and technology, but as these areas grow and diversify, the major contributions come more and more from teamwork, where experts from different disciplines collaborate without having to understand every detail of the project. It seems unlikely that this will change in the future, although new tools for collaboration may make the process more streamlined.

Another problem when approaching the Singularity is that even if intelligence amplification, nanotechnology and engineering AI are used, it still takes time to build new designs in the physical world (or, for that matter, to compile and deploy them in cyberspace). A brilliant new assembler idea could be imagined in a very short time by a nanotechnological being, but the time to implement the idea is usually much longer than the time needed to come up with it: it has to be built and tested. Manipulating things in the physical world is limited by physical constraints; these constraints can often be circumvented or pushed back by new technology, but that technology has to be built first.

It should be noted that work in cyberspace (whether this is mental or a real Internet) is limited by the speed of light and the available computational resources. These resources can of course be extended, but that runs into the physical bottleneck: the nanotechnological superintelligences have to wait a subjective eternity for their assemblers to build more processors. We already have this problem today, since we fill up our exponentially growing storage resources (both RAM and secondary storage) faster than they grow, and our programs bloat to fill our growing processors. Niklaus Wirth has proposed two new Parkinson's Laws for software: "Software expands to fill the available memory" and "Software is getting slower more rapidly than hardware gets faster."

So in the period close to the Singularity, bandwidth limitations and lack of necessary processing power and storage, combined with the relative slowness of their growth, will limit the progress of transhumanity. Exactly how limiting they will be is hard to tell, but they will definitely prevent infinitely fast growth.

Physical Limitations

The physical limitations of the Singularity exist on several levels. First there are the intermediate engineering problems, such as building useful nanotechnology and making it work on the macroscale. These represent engineering challenges, but not much more. Beyond them, however, are real physical limitations on the growth of a civilisation.

The speed of light is the most obvious limitation. It prevents us from exchanging information quickly over long distances; to communicate faster we have to move closer together. This delay is irrelevant on a human level, but it is already a problem on the Internet and presumably more so for a near-singularity >Web. There is no way around a delay of 0.04 s or more around the world. And the speed of light also limits the spread of a technosphere.
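The 0.04 s figure is easy to check (a sketch of mine; real networks add routing and switching delays on top of this hard floor):

# Lower bounds on one-way signal delay, set by the speed of light.
C = 299_792_458                # speed of light in vacuum, m/s
EARTH_DIAMETER = 12_742e3      # metres, straight through the planet
HALF_CIRCUMFERENCE = 20_015e3  # metres, along the surface to the antipode

print(f"through the Earth: {EARTH_DIAMETER / C * 1000:5.1f} ms")     # ~42.5 ms
print(f"along the surface: {HALF_CIRCUMFERENCE / C * 1000:5.1f} ms") # ~66.8 ms

No engineering can push the delay below these numbers; they are properties of spacetime, not of the hardware.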

There are also limits on information density (the Bekenstein Bound) and on the speed of molecular or subatomic switches, which constrain possible processing units and memories. It suffices to say that even very advanced civilisations will be subject to the laws of physics.
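To give a feel for where the Bekenstein Bound sits, here is a small calculation (the formula is Bekenstein's standard result from the physics literature; the example numbers are my own): the information in a sphere of radius R containing energy E is at most 2*pi*R*E / (hbar * c * ln 2) bits.

import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
C = 299_792_458            # speed of light, m/s

def bekenstein_bits(radius_m: float, mass_kg: float) -> float:
    """Upper bound on the bits storable in a sphere of the given
    radius containing the given mass-energy (E = m c^2)."""
    energy = mass_kg * C**2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Example: one kilogram packed into a sphere of 10 cm radius.
print(f"{bekenstein_bits(0.1, 1.0):.2e} bits")   # ~2.6e42 bits

Enormous, but finite: even a posthuman civilisation computing at the limits of physics still has a ceiling.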

It is interesting to note that the people living through the Singularity may not experience it as a singularity until they look back at it afterwards. With intelligence amplification and other tools, subjective time will speed up until the rate of progress seems merely fast to the inhabitants of a singularity-world, not the super-fast blur an unaugmented being on the outside would perceive. The singularity never appears to those who live in it.

Summary

The Singularity is sometimes spoken of with almost religious faith among transhumanists, but it is unlikely that it will be as dramatic as some of its proponents suggest. That does not mean it will not change humanity beyond recognition (if it occurs), only that it may take some time. Even a period of maximum change lasting just a generation would be an extremely fast change, especially viewed on a large scale (100,000 years of very little action - and then a few millennia of frantic growth until...?). It is possible that superfast AI and posthuman uploads will appear and make it vingeanly fast, but they too will have to find ways to manage information overload, design ahead, and cultural, economic and social barriers.



Anders Sandberg / asa@nada.kth.se