September 18, 2008

Designing Childhood's End

BTW, Dresden Codak's "Hob" is now concluded. If I ever get to run a singularity I will ask Aaron Diaz to design it.

I'm increasingly convinced that design, in its broadest sense, is going to be crucial for the advanced technologies of the future. What is under the hood seldom matters as much for the eventual impact as how the technology can be used, whether it is cars, computers or nanomachines. This means we may need to be as careful about design as we are about core capabilities. Design is also closely related to legislation: certain design choices make some activities easy or hard to do regardless of whether they are "allowed", and also determine what kinds of rules are enforceable. Classic examples are of course the Internet and email. And as Peter S. Jenkins points out, a combination of design choices and early case law can set the legal principles underlying a whole technology.

The problem is that "get it right first" seldom works. Most complex or general-purpose technologies are developed by multiple interests, and their consequences are extremely hard to predict. In fact, most nontrivial consequences are likely impossible to predict even in principle. We can make statements like "any heat engine needs a temperature difference to do work" even about unknown future super-steam engines, but discovering even this basic constraint required the development of thermodynamics, which itself grew out of the development of steam engines. I hence conjecture that the extent to which we can predict and constrain the use of future technologies is proportional to how much experience we already have with the technology.
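The heat-engine constraint above is the Carnot bound, which caps the efficiency of any heat engine regardless of its mechanism. A minimal sketch (the function name and the example temperatures are my own, illustrative choices):

```python
# The Carnot bound: no heat engine running between a hot reservoir
# at T_hot and a cold one at T_cold (both in kelvin) can exceed
# efficiency 1 - T_cold/T_hot, whatever clever machinery it uses.

def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Upper bound on the efficiency of any heat engine."""
    if not (0.0 < t_cold < t_hot):
        raise ValueError("need 0 < t_cold < t_hot (kelvin)")
    return 1.0 - t_cold / t_hot

# A boiler at 450 K exhausting to air at 300 K: at most ~33% efficient.
print(carnot_efficiency(450.0, 300.0))  # → 0.333...
```

The point is that the bound applies to engines nobody has built yet - but we only learned it after decades of experience with actual steam engines.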

That is why saying anything useful about self-improving AI is so hard today: we have no examples at all. We can say a few things about self-replicating machines, because we have a few examples, and this leads to things like the efficient replicator conjecture or the estimated M^1/4 replication time scaling. We can say a lot more about future computers, but again many of the rules we have learned (like network externalities) do not constrain things much: we can deduce that networks will be enormously more powerful than individual units, but this is not enough to tell us what they will be used for. It does tell us, however, that if we want to keep some activity less powerful, we need to limit its networking abilities - even limited knowledge can be somewhat useful.
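To see what a weak scaling law like this buys us, here is a hedged sketch assuming replication time goes as t = c·M^(1/4) for some machine-dependent constant c (the function and constant are my own, illustrative choices):

```python
# Assumed scaling law: replication time t = c * M**(1/4).
# The constant c depends on the particular replicator, so the law
# only constrains *ratios* between replicators of different mass.

def replication_time(mass: float, c: float = 1.0) -> float:
    """Replication time under the assumed t = c * M^(1/4) law."""
    return c * mass ** 0.25

# A replicator 10,000x more massive replicates only 10x slower:
ratio = replication_time(10_000.0) / replication_time(1.0)
print(ratio)  # → 10.0
```

Like the network-externality rule, this predicts a rough shape for the technology without telling us anything about what it will be used for.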

So if we get a singularity, I think we can predict from experience that we will make irreversible mistakes. If it is not the Friendly AI that acts up, it will be the social use of nanotechnology or the emergent hyper-economy (and that leaves out the risk of something completely unexpected). I'm fairly confident that my conjecture holds for any level of intelligence (due to complexity constraints), so superintelligences are going to mess things up in very smart ways too. We had better learn how to design systems - and especially systems of systems - so that it is not the end of the world if they fail or turn out to be suboptimal.

Childhood's end is to realize that choices matter and that you will be held responsible - but you don't know what to do. And there is nobody else you can ask.

So maybe in these last years of humanity's childhood we should focus on enjoying doing science to our cookies.

Posted by Anders3 at September 18, 2008 10:55 PM