November 28, 2010

Existential risk and rationality

[Photo: It's Christmas at ground zero]

Here is the one-page handout I did for a seminar with the Oxford transhumanist rationality group about existential risk:

Existential risk and rationality


Existential risk: a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target). Classic paper by Bostrom. Not necessarily extinction: it could involve permanent negative value, or even “axiological risks”, where things don’t turn out bad but could have been a lot better.

Examples: War, Genocide/democide, Totalitarianism, Famine, Social collapse, Pandemics, Supervolcanoes, Meteor impacts, Supernovae/gamma ray bursts, Climate changes, Resource crunches, Global computer failure, Nuclear conflicts, Bioweapons, Nanoweapons, Physical experiments, Bad superintelligence, Loss of human potential, Evolutionary risks, Unknowns

Types of risk: exogenous/endogenous/systemic, accidental/deliberate. Many are ongoing.

What is the disvalue of human extinction? Parfit’s discussion of whether 100% of humanity being wiped out is much worse than 99.999% of humanity being killed. How should we discount the future?
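
As a hedged illustration of how much the discounting question matters (all numbers below are my own placeholders, not from Parfit), a toy calculation in Python:

    # Toy illustration of discounting the far future; every number is a placeholder.
    pop_per_century = 1e10    # assumed people alive per century
    centuries = 10_000        # assumed horizon if humanity survives

    def discounted_lives(annual_rate):
        """Total future lives, discounted exponentially (applied per century)."""
        factor = (1 - annual_rate) ** 100   # per-century discount factor
        return sum(pop_per_century * factor ** t for t in range(centuries))

    for rate in (0.0, 0.001, 0.01, 0.03):
        print(f"discount rate {rate:.3f}/yr -> {discounted_lives(rate):.3e} future lives")

Even a seemingly modest 1% annual discount rate shrinks the stake by roughly four orders of magnitude in this toy model, which is why the choice of discounting tends to dominate the conclusion.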

Predictability: Past predictions have been bad. Known knowns/known unknowns/unknown unknowns/unknown knowns.

Many risks affect each other’s likelihood, either through correlation, causation (war increases epidemic risk), or shielding (a high probability of an early xrisk occurring reduces the probability of a later risk being the one that wipes us out).
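
A minimal sketch of the shielding effect, with invented probabilities: the larger the early risk, the less a fixed reduction in the later risk buys.

    # Shielding sketch: sequential, independent risks; probabilities are invented.
    def p_extinction(risks):
        """Probability that at least one of the risks, taken in order, gets us."""
        p_survive = 1.0
        for p in risks:
            p_survive *= 1 - p
        return 1 - p_survive

    late = 0.05
    for early in (0.1, 0.5):
        gain = p_extinction([early, late]) - p_extinction([early, late / 2])
        print(f"early risk {early}: halving the later risk buys {gain:.4f}")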

Can we even do probability estimates? Xrisks cannot be observed in our past (bad for frequentists). But near misses might give us information.

Partial information: power law distributed disasters, models (nuclear war near misses), experts?
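
A hedged sketch of the power-law point (the exponent and scale below are placeholders, not fitted to any data): if disaster sizes have a Pareto tail, the frequency of ordinary disasters gives some handle on the rare enormous ones.

    # Pareto (power-law) tail extrapolation; parameters are made up for illustration.
    x_min = 1e3    # assumed smallest disaster in the record, in deaths
    alpha = 0.5    # assumed tail exponent (smaller alpha = heavier tail)

    def p_exceed(x):
        """P(deaths > x) for a single event, given the Pareto tail."""
        return (x_min / x) ** alpha

    for size in (1e6, 1e8, 7e9):   # a million dead, a hundred million, everyone
        print(f"P(deaths > {size:.0e}) = {p_exceed(size):.2e} per event")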

Anthropic bias: our existence can bias our observations. See Cirkovic, Sandberg and Bostrom, “Anthropic shadows: observation selection effects and human extinction risks”. However, independent observations or facts can constrain these biases: Tegmark and Bostrom, “How unlikely is a doomsday catastrophe?”
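
A small simulation of the anthropic shadow idea (the rates are invented): observers who survive to count past catastrophes see systematically fewer of them than the true rate produces, because histories with more catastrophes are less likely to leave anyone around to look.

    # Anthropic shadow toy simulation; rate and lethality are illustrative assumptions.
    import random
    random.seed(0)

    true_rate = 0.02     # assumed per-century chance of a catastrophe
    p_kill = 0.5         # assumed chance a catastrophe removes the observers
    centuries = 100
    observed = []

    for _ in range(100_000):
        count, alive = 0, True
        for _ in range(centuries):
            if random.random() < true_rate:
                count += 1
                if random.random() < p_kill:
                    alive = False
                    break
        if alive:
            observed.append(count)

    print("true expected number of catastrophes:", true_rate * centuries)
    print("mean counted by surviving observers:", sum(observed) / len(observed))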

Cognitive biases: big problem here. Low probability, dramatic effects, high uncertainty. Yudkowsky, “Cognitive biases potentially affecting judgment of global risks”.

Risk of arguments being wrong: Ord, Hillerbrand and Sandberg, “Probing the Improbable”.
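
The paper’s core point can be put as a small calculation (the numbers are mine, purely illustrative): once an argument concludes a risk is astronomically small, the chance that the argument itself is flawed dominates the estimate.

    # "Probing the Improbable"-style bound; all three numbers are illustrative assumptions.
    p_risk_if_sound = 1e-12    # the risk the safety argument concludes
    p_flawed = 1e-3            # assumed chance the argument contains an error
    p_risk_if_flawed = 1e-4    # assumed risk if we in fact know little

    p_total = (1 - p_flawed) * p_risk_if_sound + p_flawed * p_risk_if_flawed
    print(f"overall risk estimate: {p_total:.1e}")   # ~1e-7, set by the flaw term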

How to act rationally: Xrisk likely underestimated. Must be proactive rather than reactive? Reducing risk might be worth more in terms of future lives than expanding fast (Bostrom, “Astronomical Waste: The Opportunity Cost of Delayed Technological Development”). The maxipok principle: maximize the probability of an okay outcome. Spend most on the biggest threat, the earliest threat, the easiest threat, the least known threat, or the most political threat?
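
A back-of-the-envelope version of the comparison (every quantity is a placeholder, not Bostrom’s figure): under a total view, shaving a sliver off extinction risk beats a large speed-up of development.

    # Astronomical waste toy comparison; every number is a placeholder assumption.
    future_lives = 1e35              # assumed total future lives if we make it
    future_duration_years = 1e10     # assumed duration of that future

    # Developing 10 years sooner is crudely modeled as 10 extra years of that future:
    gain_from_speedup = (10 / future_duration_years) * future_lives
    # Cutting extinction probability by 0.1 percentage points protects the whole stake:
    gain_from_safety = 0.001 * future_lives

    print(f"10-year speed-up:    {gain_from_speedup:.1e} expected lives")
    print(f"0.1% risk reduction: {gain_from_safety:.1e} expected lives")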

Also worth considering: when do we want true information to spread? Bostrom, “Information hazards”


Posted by Anders3 at November 28, 2010 03:37 PM