July 24, 2008

Yaaay! We're doomed!

[Photo: James and the bomb]

For the last week or so I have been involved with workshops and a conference about Global Catastrophic Risks - disasters that threaten a sizeable fraction of humanity or the survival of the human species itself. Some risks might go even further, wiping out the biosphere or even the universe, but we mainly kept the discussion anthropocentric.

To sum up a lot of talks and discussion by lots of smart people:

In the past threats came from nature; now they are human-made. The good news is that we have reduced overall risk, but that may just have shifted the remaining risk towards rare but very large events. To some extent we are dealing with a luxury problem: when small disasters happen all the time we do not have time to plan for the big ones. But it could also be a forest fire problem: by reducing small disasters we might set ourselves up for bigger ones. In the case of forest fires, preventing small ones leads to a build-up of combustible plants and debris on the forest floor that eventually feeds a major conflagration. In the case of banking systems we might prevent banks from going bust, but that adds inefficiency and hidden, distributed risk to the whole system.
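A toy simulation makes the forest fire trade-off concrete. The sketch below is my own illustration with made-up parameters, not a model anyone presented at the conference: fuel accumulates one unit per year, ignitions arrive at random, and a policy of suppressing every small fire guarantees that the fires which do get through are large.

```python
import random

def simulate(years=10_000, p_ignite=0.1, suppress_below=None, seed=1):
    """Fuel grows by one unit per year; an ignition burns all accumulated
    fuel, unless the fire is small enough to be suppressed, in which case
    the fuel stays on the forest floor. Returns (count, mean, largest)."""
    rng = random.Random(seed)
    fuel = 0.0
    sizes = []
    for _ in range(years):
        fuel += 1.0
        if rng.random() < p_ignite:
            if suppress_below is not None and fuel < suppress_below:
                continue  # small fire put out; the fuel remains
            sizes.append(fuel)
            fuel = 0.0
    return len(sizes), sum(sizes) / len(sizes), max(sizes)

for label, threshold in [("no suppression", None), ("suppress < 30 units", 30)]:
    n, mean, largest = simulate(suppress_below=threshold)
    print(f"{label:20s} fires={n:5d}  mean size={mean:6.1f}  largest={largest:6.1f}")
```

With these parameters there are fewer fires under suppression, but every fire that escapes is at least threshold-sized by construction, and the mean fire is several times larger.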

The top existential risk concerns seemed to be bioweapons, nanoweapons, superintelligence and "other bangs". Note that these risks are low probability right now, but likely to grow in danger over time. The "other bangs" category corresponds to extinction events that we have so far not thought about. My argument for it is that if the Great Silence (the absence of alien civilizations) is due to disasters, they have to be disasters that are 1) very big, and 2) hard to predict even when you know or suspect they are there. Looking at near-term risks, nuclear war looks like it is making a comeback due to recent nuclear winter simulations (it is not the nukes that kill, but the famine after failed crops), and a terrorist nuke can ruin everyone's day despite being just a local mess. Global dictatorships should not be discounted either, especially if highly disruptive technologies crop up - or powerful content management systems are instituted to deal with disruptive technologies. Astronomical threats, on the other hand, seem to be receding (with the possible exceptions of dark comets, Eta Carinae having a spherical hypernova, or WR 104 sending a gamma-ray burst at us). Similarly, nobody took physics disasters seriously. Climate change may be messy and increases various risks, but it is far "milder" than most discussed disasters. Instead it is a slow problem that suffers more than usual from coordination problems and uncertainties. Radical climate changes cannot be discounted entirely, but do not seem terribly likely.

[Photo: Games]

Probability matters, but is hard to do right. Practically all debates where people did not agree dealt with probability of some kind. Not only are objective and subjective probabilities hard to distinguish, it matters enormously whether we think of the probability of something happening within a span of 50 years or at any point in the future. The risk that a giant meteor will strike is minuscule per year, but pretty certain if we wait long enough (and do not stop them) - but other threats are likely to occur long before mankind is wiped out by a dinosaur killer, so relative, conditional and changing probabilities also matter. Today pandemics and nuclear wars may make up a large fraction of the risk probability mass; in a few decades bioweapons and nanoweapons might be much more significant. Worse, even when we estimate probabilities sensibly we have a decent chance of being wrong anyway. Just having people estimate likelihoods, on the other hand, invites cognitive biases, politics and mixing up probability with concern. Simulations and models are often problematic, yet doubting their accuracy does not imply that one should be complacent.
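To make the horizon-dependence concrete: for an event with an independent per-year probability p, the chance of it happening at least once in N years is 1 - (1 - p)^N. A minimal sketch, where the impact rate is an assumed, illustrative figure rather than an estimate from the conference:

```python
def cumulative_risk(p_per_year: float, years: int) -> float:
    """P(event occurs at least once in `years` independent years)."""
    return 1.0 - (1.0 - p_per_year) ** years

# Assumed annual probability of a dinosaur-killer impact; illustrative only.
p = 1e-8
for horizon in (50, 10_000, 100_000_000):
    print(f"{horizon:>11,} years: {cumulative_risk(p, horizon):.3g}")
```

Over 50 years the risk is negligible; over a hundred million years it becomes substantial - which is one reason the choice of time horizon drove so many of the disagreements.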

Fat tails are dangerous. Many disasters have a "fat tail" distribution where the likelihood of very large disasters declines as a power-law of size. This means that the total number of casualties over time is dominated by the largest disaster, not the median disaster. Some disasters like earthquakes probably have a cut-off (they are local, and even a great tsunami is local to a particular ocean and its shores), but wars and democides could potentially go all the way up to the total human population. Governments may be best at managing median disasters, not the extreme ones where no old rules apply - here we need new approaches (decentralization and decoordination? in-depth resiliency?).
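The dominance of the largest event is easy to check numerically. In the sketch below (illustrative parameters, not empirical war or earthquake data), a power law with tail exponent below 1 puts most of the total in the single biggest draw, while a thin-tailed exponential never does:

```python
import random

rng = random.Random(0)
n = 100_000

# Fat tail: Pareto with exponent alpha < 1 (infinite mean).
# Thin tail comparison: exponential. Both choices are illustrative.
pareto = [rng.paretovariate(0.9) for _ in range(n)]
expo = [rng.expovariate(1.0) for _ in range(n)]

for name, xs in [("power law, alpha=0.9", pareto), ("exponential", expo)]:
    share = 100.0 * max(xs) / sum(xs)
    print(f"{name:22s} largest event = {share:5.1f}% of the total")
```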

Building refuges makes sense, but is fraught with practical and political problems. Having underground redoubts where populations are isolated for long periods of time yet have the right equipment to return after a disaster might increase species survival chances a lot. The problem is that they are going to have visible costs and mostly not do anything visibly useful. Robin Hanson suggested an interesting refuge market that allows people to vote with their wallets, giving good information about the current risks and a way of getting a semirandom sample of people into refuges. There could even be competing refuge organisations. The main problem is the expense of setting up even a small one. Perhaps it is better to ensure that nomadic people in remote corners of the earth remain self-sufficient.

Societal breakdown is bad. Many disasters kill more people through the chaos afterwards than through the disaster itself. A global disaster could kill us by disrupting the fragile global economic system, destroying economies of scale and networks of trust. Many kinds of disasters would also produce societal breakdowns, so this is a common damage mechanism that could be addressed directly. If we can increase resiliency to breakdowns, or ensure that parts do not break down, disasters can be ameliorated. Right now few people are considering how to do "civilization resiliency", but more should.

[Photo: Punting 4]

Cognitive biases make our risk thinking bad. Most of the time cognitive biases have small stakes and just cause everyday mistakes. In a disaster, small irrationalities can suddenly become deadly. We are under-investing in dealing with uncommon, "silly" or "unthinkable" disasters; we allocate resources based on biased probability and harm estimates; and we may overreact when something does happen.

Risks and risk thinking are highly cultural and sociological. Although we may attempt to consider risk rationally, most risk thinking in society is not rational at all and is influenced by other factors. James Hughes gave a talk about religious millennialism and showed how its different types recur in thinking about disasters and end-of-the-world scenarios even among secular people (as well as how religious opinion influences real risk planning). Steve Rayner described the various sociological explanations for why some people are catastrophists and others are not; his theory seems to explain why there is a link between egalitarian group values and catastrophist thinking, and between competitive individuals/markets and non-catastrophist thinking.

Decision-making about risk is tricky. Risk discussions are often actually proxies for ideological discussions, and government programs about risk often become hijacked by various interests. Rayner's theory shows how different groups "like" different levels of uncertainty and stakes: the catastrophist egalitarians prefer high-stakes or high-uncertainty issues, since these tend to require the ideological/value-based approaches they favour; technocrats like to reduce stakes and uncertainty so that their solutions work; and the market people like the intermediate realm where individual solutions can be tried.

We might be in a window of opportunity. Risks from nanotechnology, AI and biotechnology are not yet large. This means we can actually try to come up with sensible safeguards before they materialise, informed by past experience with handling other risks like terrorism. There seem to be some new nuclear disarmament approaches, and people are willing to consider various approaches to climate change (as long as they involve sackcloth and ashes, and not geoengineering or someone getting rich from them).

All in all, there is a lot to do now. We have a rough map of the field of global catastrophic risks, and some rough ideas of which levers to pull and which attractor states to understand better. Given the lack of studies at this level there is a great deal of opportunity - and we had better make use of it.

Posted by Anders3 at July 24, 2008 07:29 PM