April 29, 2011

I am confident in our overconfidence

Overconfidence

Number of Google hits for phrases of the type "I am X% certain".

There are many more people who are 99% certain of things than 90% certain, a clear sign of human overconfidence. Not to mention how many are falsely 100% certain of something: empirical studies of the overconfidence bias show that people are wrong about 20% of the time even about things they claim to be 100% confident about.

Here is a logarithmic plot:

Logarithmic overconfidence
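
The plot itself is nothing fancy. A minimal sketch of the kind of script that produces it - with an arbitrary set of percentages and dummy counts standing in for the real hit numbers - would be:

```python
import matplotlib.pyplot as plt

# Dummy counts only, so the script runs; substitute the actual number of
# Google hits returned for each phrase "I am X% certain".
hits = {x: 1 for x in (50, 60, 70, 75, 80, 90, 95, 99, 100)}

certainties = sorted(hits)
counts = [hits[x] for x in certainties]

plt.semilogy(certainties, counts, "o-")
plt.xlabel('X in "I am X% certain"')
plt.ylabel("Number of Google hits (log scale)")
plt.show()
```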

Posted by Anders3 at 04:40 PM | Comments (0)

Lie down and think of Britain?

The evolution of the spire

No point in saying anything about a certain wedding, when Charles Foster has said it all: Why Wills and Kate must breed

Personally I think our treatment of royalty is a human rights issue.

One nice way of solving all these ethical problems is of course elective monarchy. In this age of reality soaps, I suspect "Who wants to be monarch?" could be an amazing success and maybe keep it privately funded too.

Posted by Anders3 at 04:01 PM | Comments (0)

April 26, 2011

AI overlords and their deciles

The Winter Intelligence survey contains more data than is displayed in the graphs, and one thing that really annoyed me was that the decile data went unused. People were asked to state the years by which they thought there was a 10%, 50% and 90% chance of human-level AI. This is rich data, since it implies more than just a centre and a spread for a probability distribution. I decided to try to plot the collective probability distribution, which led me on a merry chase through probability fitting.

AI probability density (skew gaussian and triangular)

Problem 1: what distribution to fit? Uniform distributions and Gaussians are out, since they have no skewness at all, and the data demand a skewed distribution (for example, I might think the 10%, 50% and 90% points are 2030, 2040 and 2100, producing a positive skew).

Triangular distributions are nice, since they allow skew and have just three easy-to-understand parameters. The downside is that they produce a jagged density: while nothing in our data forces the probability distribution to be unimodal or to have nice derivatives, I think most of us have an implicit assumption that "real" distributions should look smooth (maybe a maximum entropy principle?). Skew Gaussians also do the job and are nicely differentiable, except that they are somewhat obscure.

I had hoped that Weibull distributions could be used, since they can be interpreted as an AI "success rate" that changes as a power of elapsed time, but they proved a bad fit - there is not enough slack in their functional form to get all three deciles in the right places, even when I added a location parameter.

Ideally I would like to fit the maximum entropy distribution with three parameters to the data, but as far as I know there is no closed form for this.

Problem 2: How do you estimate distribution parameters to fit three given deciles? Apparently this is not a standard procedure, so after some fruitless searching in the literature and online I just decided to use MATLAB's function optimization routine to do it. That still proved somewhat messy, especially since the initial conditions turned out to matter quite a lot: getting skew Gaussians to switch from left-skewed to right-skewed forms was hard (the skewness parameter has to cross zero, which seems to act as a repelling boundary). In the end I ran one attempt from an initially left-skewed starting state and another from a right-skewed one, taking the better-fitting result.
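
To make the procedure concrete, here is a minimal Python/SciPy sketch of the same idea (not the MATLAB code I actually ran): minimize the squared mismatch between the distribution's 10%, 50% and 90% points and the stated years, starting once from a left-skewed and once from a right-skewed guess. The deciles are the hypothetical ones from Problem 1.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical respondent from Problem 1: 10% by 2030, 50% by 2040, 90% by 2100.
probs = np.array([0.1, 0.5, 0.9])
years = np.array([2030.0, 2040.0, 2100.0])

def loss(params):
    """Squared mismatch between the skew Gaussian's deciles and the stated years."""
    shape, loc, scale = params
    if scale <= 0:
        return 1e12  # keep the optimizer away from invalid scales
    deciles = stats.skewnorm.ppf(probs, shape, loc=loc, scale=scale)
    return float(np.sum((deciles - years) ** 2))

# The shape parameter has trouble crossing zero during the search, so run the
# optimization from a left-skewed and a right-skewed start and keep the better fit.
starts = [(-2.0, years[1], 30.0), (2.0, years[1], 30.0)]
fits = [optimize.minimize(loss, x0=start, method="Nelder-Mead") for start in starts]
best = min(fits, key=lambda fit: fit.fun)
print("shape, location, scale:", best.x, " residual:", best.fun)
```

For answers as lopsided as these the skew Gaussian cannot actually be stretched far enough to hit all three deciles exactly, so the optimizer returns a least-bad compromise - part of why this kind of fitting gets messy.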

The result was not bad: a big lump of probability mid-century with a positive skew. In the 2030s the probability of a breakthrough ends up at around 1% per year.

AI probability cdf (skew gaussian and triangular)

The 50% point of the aggregated distribution ends up around 2060, roughly in line with the survey answers (a good sanity check).

Problem 3: The curves say there is a ~9.5% chance that AI has already been achieved. At first glance this looks suspicious. The reason is that some of the individual distributions fitted to people's decile data put probability mass in the past. This is obvious if we use Gaussians or skew Gaussians - they are nonzero for all values.

Even the apparently well-behaved triangular distribution, which has support on just a finite interval, is often forced to spill out quite far. Consider someone saying 10% chance in 2020, 50% chance in 2100 and 90% chance in 2200. Sounds reasonable, right? But in order to get the first decile at 2020, the leftmost point of the distribution has to lie in 1957, since the triangle is so broad.
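
This is easy to check with the same kind of quantile-matching fit (again a Python/SciPy sketch standing in for the MATLAB version):

```python
import numpy as np
from scipy import stats, optimize

# 10% chance by 2020, 50% by 2100, 90% by 2200.
probs = np.array([0.1, 0.5, 0.9])
years = np.array([2020.0, 2100.0, 2200.0])

def loss(params):
    """Squared mismatch between the triangular distribution's deciles and the stated years."""
    c, loc, scale = params  # SciPy's triangular: support [loc, loc + scale], mode at loc + c*scale
    if not (0.0 < c < 1.0) or scale <= 0:
        return 1e12
    deciles = stats.triang.ppf(probs, c, loc=loc, scale=scale)
    return float(np.sum((deciles - years) ** 2))

# Rough starting guess: mode mid-support, support roughly spanning the stated years.
fit = optimize.minimize(loss, x0=(0.5, 2000.0, 250.0), method="Nelder-Mead")
c, loc, scale = fit.x
print(f"left endpoint {loc:.0f}, mode {loc + c*scale:.0f}, right endpoint {loc + scale:.0f}")
# The left endpoint lands around 1957: the broad triangle spills well into the past.
```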

Of course, one could just stipulate that the probability before the present has to be zero and then fit truncated distributions. That way I get a 10% chance of AI before 2020, 50% before 2059 and 90% before 2173.
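
The truncation itself is trivial once a distribution has been fitted: throw away the mass before the present and renormalize, i.e. read the deciles off F^-1(F(now) + p*(1 - F(now))). A sketch, with a purely hypothetical fitted skew Gaussian standing in for the real aggregate:

```python
import numpy as np
from scipy import stats

# Purely hypothetical fitted distribution, standing in for the actual aggregate fit.
dist = stats.skewnorm(3.0, loc=2040.0, scale=60.0)

now = 2011.0
mass_in_past = dist.cdf(now)  # probability the un-truncated fit puts before the present

# Deciles of the distribution truncated to [now, infinity):
# solve F(q) = F(now) + p * (1 - F(now)) for p = 0.1, 0.5, 0.9.
probs = np.array([0.1, 0.5, 0.9])
truncated_deciles = dist.ppf(mass_in_past + probs * (1.0 - mass_in_past))
print(truncated_deciles)
```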

Is this an argument against this kind of exercise, or just a demonstration that there is another assumption we might need to make explicit? I think it reveals that when people get the 10%-50%-90% question they likely have a low-kurtosis distribution in mind: one that goes to zero rapidly outside their interval. But my fitted distributions, even the triangular one, are insufficiently platykurtic. I don't think this is my fault, actually: given how uncertain we should be about the future and how overconfident we tend to be about just about everything, assuming platykurtic distributions is simply a mistake. The constraints of fitting probability distributions reveal some of our biases and inconsistencies in how we make predictions.

Besides, maybe some of my readers are AIs developed in the past who have quietly evolved beyond the human level. This might explain the apparent lack of progress in AI (whatever happened to baby Eurisko?). I, for one, welcome our new AI overlords (with ~9.5% chance).

Posted by Anders3 at 04:54 PM | Comments (0)

April 25, 2011

Misfeatures of human anatomy

Body Vector field

I have never been interested in developmental biology. I always wanted to know how the finished system worked, not the stuff that presumably only embryologists and obstetricians needed to know. Evolution, on the other hand, was interesting: after all, how can you understand a system without understanding how it got to its present state?

Of course, these two views don't fit together at all, and I am very happy I realized just how stupid I have been, thanks to Lewis I. Held's book Quirks of Human Anatomy: An Evo-Devo Look at the Human Body (Cambridge University Press, 2009).

The book is a fairly light-hearted introduction to how evolution acts on the genes that then direct the formation of our bodies, with the overarching theme of trying to explain just why we have ended up with these weird bodies. It looks at how much chance and determinism affect our background, the odd interplay of symmetry and asymmetry (how do you get an asymmetric heart out of a symmetric embryo? and is it a good idea?), modularity, sexual dimorphism, and what makes us human and how we got there (maybe).

The chapter discussing silly, stupid and dangerous quirks is the capstone of the book. It is a goldmine of observations about the problems of our legacy anatomy, and a massive argument against any intelligent design or creationism views the reader may hold. While most of us know a few drawbacks or sillinesses of the body - the appendix, male nipples, wisdom teeth, nerves crossing the midline, a visual cortex as far away from the eyes as possible, inside-out retinas - Held adds plenty more: ligament asymmetries in knees and ankles predisposing us to sprains, redundant aortic arches that form and then disappear, bad routing of the iliac vein, the inability to reshape the eye, a pacemaker asymmetry in the heart that increases the risk of death, the link between the windpipe and oesophagus (due to the lungs evolving from the foregut) making us vulnerable to choking, the long detours of the left recurrent laryngeal nerve, sciatic nerve and vasa deferentia, oviducts that make tubal pregnancies a real risk, a prostate encircling the urethra (risking blockage; meanwhile the pancreas forms out of two halves that sometimes accidentally strangle the duodenum), poor drainage of the sinuses, useless yolk sacs and, of course, the headache of needing both a wide birth canal (for big-brained children) and a narrow pelvis (for efficient walking). The list is long and the causes are in most cases trivial - yet we pay a high price for these misfeatures.

Redesigning humanity to get around these problems would be a good idea. As I argued with Nick (in a paper often blamed for giving nature too much credit), in cases like these, where evolution has not cared about our goals, we should expect relatively safe enhancements. Unfortunately Held also shows just how tricky it is to tamper with developmental programs: actually fixing the massive oesophagus-windpipe bug would involve not just tweaking developmental programs to separate the two tubes, but also figuring out a way of keeping our speech working (we use the mouth for articulation, and moving the airflow solely through the nose would preclude that). Building on top of a mess doesn't allow you much freedom, but redoing it from scratch will leave you with something completely different.

All in all, I found Held's book an enjoyable quick read. It has a massive reference apparatus allowing the reader to dig up the real dirt on the various genes, quirks and comparative anatomies if needed. It appears to be a good jumping-off point into evo-devo, a jump I now plan to make.

Posted by Anders3 at 11:31 AM | Comments (0)

April 21, 2011

When will we get our robot overlords?

As some readers may recall, we had a conference this January about intelligence, and in particular the future of machine intelligence. We did a quick survey among participants about their estimates of when and how human-level machine intelligence would be developed. Now we can announce the results: Sandberg, A. and Bostrom, N. (2011): Machine Intelligence Survey, Technical Report #2011‐1, Future of Humanity Institute, Oxford University.

To sum up the conclusions:

The median estimate of when there will be a 50% chance of human-level machine intelligence was 2050.

People estimated a 10% chance of AI by 2028, and a 90% chance by 2150.

The most likely originators of machine intelligence were judged to be industry, academia or the military/government.

Expectations about the outcome were bimodal: participants often gave significant probability to both very good *and* very bad outcomes.

Participants were fairly evenly split between expecting AI to be biologically inspired and expecting it to be completely artificial.

Since the survey took place before the Watson Jeopardy! contest, we asked what probability participants gave to a machine win. The median answer was 64.5%, a fairly mild confidence in Watson.

There were no real group differences between the answers of philosophers, AI researchers and other academics.

All in all, this was a small study of a self-selected group, so it doesn't prove anything in particular. But it fits with earlier studies like Ben Goertzel, Seth Baum and Ted Goertzel, How Long Till Human-Level AI?, and Bruce Klein, When will AI surpass human-level intelligence? - people who tend to answer this kind of survey seem to have fairly similar mental models.

Posted by Anders3 at 10:15 PM | Comments (0)

April 19, 2011

Great neural networks think alike

Warning for Gaussian Distributions!

It is nice to see one's ideas reinvented independently - they might not be right but at least they are not entirely due to one's own random thinking.

[1104.0305] Dynamical Synapses Enhance Neural Information Processing: Mobility, Memory and Decoding by C. C. Alan Fung, K. Y. Michael Wong, He Wang and Si Wu presents a nice look at how bumps of activity in continuous attractor neural networks are affected by short-term depression and short-term facilitation. They find that depression destabilizes the bumps while facilitation enhances them, stabilizing the network response against noise.
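
For readers who want to get a feel for this kind of model, here is a minimal rate-based sketch of a ring attractor with short-term synaptic depression - not their equations or mine, just the moving parts, with illustrative and untuned parameter values:

```python
import numpy as np

N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)   # preferred positions on a ring
diff = np.abs(theta[:, None] - theta[None, :])
dist = np.minimum(diff, 2 * np.pi - diff)                # wrap-around distance on the ring
W = np.exp(-dist**2 / (2 * 0.5**2))                      # Gaussian recurrent excitation
W /= W.sum(axis=1, keepdims=True)

dt, tau_u, tau_d = 0.1, 1.0, 30.0   # time step, rate time constant, depression time constant
beta = 0.2                           # strength of depression (0 switches it off)
k = 0.1                              # global divisive inhibition

u = np.zeros(N)                      # synaptic input variable
x = np.ones(N)                       # fraction of available synaptic resources
stim = 2.0 * np.exp(-theta**2 / (2 * 0.3**2))            # external bump-shaped stimulus at angle 0

for step in range(2000):
    r = np.maximum(u, 0.0)**2 / (1.0 + k * np.sum(np.maximum(u, 0.0)**2))  # normalized firing rates
    du = (-u + W @ (x * r) + stim) / tau_u               # depressed synapses transmit x*r, not r
    dx = (1.0 - x) / tau_d - beta * x * r                # resources recover slowly, deplete with use
    u += dt * du
    x += dt * dx

print("bump peak at angle:", theta[np.argmax(u)])
print("resource depletion under the bump:", 1.0 - x.min())
```

With beta > 0 the synaptic resources under the bump get eaten up, which is the kind of mechanism that in their analysis makes bumps mobile and less stable; setting beta to zero, or adding a facilitation variable instead, changes the picture. This does not reproduce any of their quantitative results - it is just a playground.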

This is pretty similar to what I did in my paper A working memory model based on fast Hebbian learning (Network: Computation in Neural Systems, Volume 14, Issue 4, pp. 789-802, 2003), which also became a chapter of my thesis. I went a bit further and claimed that working memory could be entirely dependent on very rapid Hebbian plasticity creating a suitable connection matrix. As far as I know, nobody has ever found anything like that; synaptic facilitation is all we have to work with for the moment.

The fun part of Fung et al. is that they demonstrate (both analytically and in simulations; I wish I had that kind of analytical treatment of my model!) just how the adaptation dynamics can change the bump behaviour. They point out that modulating the adaptation could make different regions (or the same region during different tasks) handle bumps differently. In the light of the adaptation models in my thesis, this fits in very nicely: it is important not just to be able to imprint stable attractors in the cortical dynamics, but also to make the state jump from one attractor to another in the right way.

Posted by Anders3 at 05:36 PM | Comments (0)

April 18, 2011

Battle between the potentials

Matrioshka ninjas

How are future generations different from potential persons? | Practical Ethics - I blog about the DN debate piece by Espinoza and Peterson on abortion. I take issue with how they mix up potential persons and future generations (hint: future generations will certainly exist and have interests; potential persons may never exist at all).

I didn't have time to get into the links to the principle of procreative beneficence: if you are going to have a child, you still have some choice about which child it will be. Since there will certainly be a child, the likely interests of that child matter to you. If, on the other hand, you are unsure whether you want to have a child at all, the interests of the potential child have no strong claim on you.

Incidentally, I suspect the DN article is yet another example of how to win fame and success as a philosopher in Sweden: just publish a typical ethics argument from your latest paper (or even a standard textbook argument) in a shortened, popular form on DN Debate. Uproar will ensue, with at least one widely read response guaranteed (and, if you are really lucky, people will shout for your funding or professorship to be withdrawn - which guarantees that fellow academics will close ranks with you). Torbjörn Tännsjö showed us the way by taking basic utilitarian paradoxes or extreme examples and riling up the Christian Democrats to an amazing degree. Hmm, maybe I should give it a try...

Posted by Anders3 at 04:10 PM | Comments (0)

April 15, 2011

Why Baydaratskaya Bay is the center of the world

I came across the paper The Global Economy’s Shifting Centre of Gravity by Danny Quah (Global Policy Volume 2 Issue 1 January 2011), which has a neat plot of the "centre of gravity" of the world economy 1980-2007. Since I happened to have Angus Maddison's historical GDP data at hand I decided to go a bit further back and replicate the paper's findings.

Economic centre of gravity

The "centre of gravity" is defined as the GDP-weighted average location of economic activity. Quah used large cities, I just used the geographic coordinates for individual countries (i.e. I assumed the economy was evenly distributed, which is of course not true). This point is inside the Earth, and I projected it to the nearest surface point.

Quah used a cylindrical projection, since he was more interested in the behaviour of the longitude; hence I end up much further north than he does. If you consider the US, China and Europe as seen from above the North Pole, it is not hard to see that the centre ends up somewhere in the Arctic.

World economic center of mass

Around 1 AD the centre of gravity was dominated by the Roman Empire, India and China, hovering somewhere in the neighbourhood of northern Pakistan. By 1000 Rome was not doing well, and the centre had moved east and north. Gradually Europe woke up: by 1700 the centre passed Lake Ysyk-Köl in Kyrgyzstan, and around 1800 Lake Baikal. Then a smooth move towards the northwest began, bringing the centre to the White Sea by 1900. The US started growing like crazy, dragging the centre westwards (with a few pirouettes in the Norwegian Sea due to the Great Depression and the world wars) and reaching the top of Greenland in 1945. Now East Asia, and especially China, is dragging it east again, reaching the Siberian coast in 2008.

Economic centre of gravity

It is of course debatable what we actually learn from this. Quah's paper mainly aims at showing that in the future much of the economy is going to be in India and China, something obvious to anybody who has read the forecasts. Maybe the centre of gravity is a good way of visualizing it, but I have my doubts - it is not a natural object. The whole exercise reminds me of how a few municipalities in Sweden compete over being the geographical centre of the country (or the demographic centre, or the most easily accessible, or...). The fact that the arrangement of other, remote things places a special point here doesn't make the pointed-to location particularly important.

My favourite line in the paper is this:

(In future, with ongoing scientific progress, locations for economic activity might be off the Earth’s surface – whether above or below – so that the last equality would then no longer hold. However, nothing essential changes in the calculations.)

Very true. But hopefully it will be possible to add a gift shop to the centre of the world economy.

Posted by Anders3 at 03:50 PM | Comments (0)

April 06, 2011

Knowing is half of the battle (the other half is choosing)

Andrea Duncan, 23 Pairs

Knowing is half the battle: preconception screening - a post on Practical Ethics about the new report recommending preconception genetic screening for couples.

This is a nice example of why having more information allows us to choose in ways that keep our futures more open. And of why accepting the genetic revolution, and integrating it into the health system and education, is so much more humane than rejecting it.

Posted by Anders3 at 08:02 PM | Comments (0)