October 31, 2008

Probing the Improbable

My (and Toby's and Rafaela's) paper is now up on arXiv as [0810.5515] Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes. We'll see if we get our houses egged this Halloween by annoyed physicists.

Our basic argument ought to be obvious: without any data, a certain risk may lie anywhere between zero and X. An argument arrives showing that it is less than Y, so if the argument is right we should now think the risk is at most Y. But the argument itself has a certain probability P of being wrong. If that probability is large compared to Y, the remaining contribution PX will dominate, and we should estimate the risk to be about PX + (1-P)Y or less. So arguments showing a risk to be zero are less effective than they first appear at convincing rational observers to lower their risk estimates. This doesn't matter much where the risks are larger than P, which is true for most everyday risks. But it matters a lot for really unlikely but big risks, like human extinction from physics disasters.
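
To make the arithmetic concrete, here is a minimal Python sketch of the estimate above. The numbers are invented purely for illustration; X, Y and P are just the symbols from the previous paragraph:

    # Invented numbers, purely for illustration.
    X = 1e-3   # prior upper bound on the risk, before any argument
    Y = 1e-12  # upper bound claimed by the argument
    P = 1e-2   # probability that the argument itself is flawed

    estimate = P * X + (1 - P) * Y
    print(f"rational risk estimate: {estimate:.2e}")  # 1.00e-05: the P*X term dominates

Since PX = 10^-5 dwarfs Y = 10^-12, the argument's impressive bound barely matters: the error term dominates the rational estimate.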

We suggest that there are ways of fixing the problem, but they all take a bit of extra work. A simple one is to have several independent risk estimates: the chance of them all being wrong can be made very small. If each P is about one chance in a hundred, combining four independent estimates gets the probability of every argument being flawed down to 10^-8. Another approach is red team/blue team competitive evaluation, where one side tries to find faults in the other side's approach; this is likely to produce more robust estimates. Estimates that also make falsifiable predictions are especially useful.
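
Continuing the sketch above, with the same invented numbers: independence is the crucial (and strong) assumption, since N independent arguments only mislead us if every one of them is flawed.

    # Same illustrative numbers as before.
    X, Y, P = 1e-3, 1e-12, 1e-2

    for n in range(1, 5):
        p_all_flawed = P ** n  # all n independent arguments are wrong
        estimate = p_all_flawed * X + (1 - p_all_flawed) * Y
        print(f"{n} argument(s): estimate ~ {estimate:.1e}")
    # with four arguments p_all_flawed = 1e-8 and the estimate falls to ~1.1e-11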

The sad part is that, whether or not this is done, it is not going to be very effective in convincing the worried public that the risks have now been rationally bounded. Achieving that requires entirely different sociological strategies. But from a truth-seeking, rational standpoint it is at least nice to figure out how to handle the slipperiness of small probabilities.

Posted by Anders3 at 05:57 PM | Comments (0)

October 26, 2008

Placebo: makes everybody but ethicists happy

Half of Doctors Routinely Prescribe Placebos - NYTimes.com refers to the paper Tilburt, Kaptchuk, et al., Prescribing "placebo treatments": results of national survey of US internists and rheumatologists, BMJ 2008;337:a1938. It seems that about half of the responding physicians prescribe placebos regularly, with over-the-counter analgesics and vitamins as the most popular choices (just 3% used saline and 2% sugar pills), along with a worrying number of sedatives (13%) and antibiotics (13%). 62% thought the practice was ethically permissible.

Many medical ethicists are less keen on placebos, arguing that they are bad for the physician-patient relationship since they may involve deception or a lack of informed consent.

“Everyone comes out happy: the doctor is happy, the patient is happy,” said Dr. Emanuel, chairman of the bioethics department at the health institutes. “But ethical challenges remain.”

But given that placebos seem to have powerful effects, from a purely consequentialist position there might be good reasons to use them (as long as risky drugs, or drugs likely to promote antibiotic resistance, are avoided). Researching ways of making placebos work better (and finding when they are actually worse than doing nothing) might be good value for money.

I wonder how many ethicists are non-consequentialists, and how much job security plays into their position. If you are a consequentialist you will relegate most issues to empirical research and at most weigh a few risks and benefits. If you are a non-consequentialist you will always have a chance to be the quoted dissenting opinion on any "advance": you can invoke impressive ethics, defend the question from empirical study and always argue that there is a need for an ethicist on the committee. That ought to create a selection for non-consequentialist "yes, but..."-sayers in important positions even if people do not adjust their views for job security. On the other hand, consequentialists fit in nicely with the formal rationality of health bureaucracies, so their concerns are easier to implement than the more substantive concerns of the non-consequentialists.

Posted by Anders3 at 04:32 PM | Comments (0)

October 21, 2008

Holes in the brain

Practical Ethics: Finding holes in the brain: to test or not to test for Creutzfeldt-Jakob? - a small ethics blog about whether to test for vCJD or not. I'm in favour of it.

One interesting thing about diseases with such very long incubation times is that science is (hopefully) advancing much faster than the disease. Prions were isolated in 1982, just 26 years ago. Fixing the disease with gene therapy was total science fiction then, is a basic science problem today, and may be an actual therapy in another 26 years. Given a 30-50 year incubation period, that is not too long.

Except of course for those unlucky enough to die before they get their chance. Even a totally successful treatment of any lethal condition will leave regret that it did not arrive in time to save more people.

Posted by Anders3 at 07:58 PM | Comments (0)

October 17, 2008

It is hard not to write satire

I recently came across Malcolm Fairbairn and Bob McElrath, There is no explosion risk associated with superfluid Helium in the LHC cooling system, arXiv:0809.4004v1 [physics.pop-ph]. At first I thought this was a clever satire: LHC physicists getting back at their critics by writing an apparently serious rebuttal of a patently absurd new physics threat. Then I realized it was a response to real claims made in court.

Basically the paper rebuts the risk that the cooling helium could become a superfluid, undergo a "Bose-nova" collapse and detonate through cold fusion. There are solid reasons to think there is no risk, such as the long history of liquid helium use in many other places, the spin structure not allowing BECs, and the requirement that any fusion go through the three-body 3 ⁴He → ¹²C (triple alpha) process.

Even applying the kind of error argument we used in our paper on the limitations of physics risk arguments, the risk comes out extremely low. I count about 7 independent arguments, so if each had a failure rate of 10^-3 the total chance of all of them failing would be 10^-21. Even assuming a one-in-a-hundred failure rate per argument would still give a total risk of less than 10^-14.

It is things like this that make me understand why physicists are somewhat emotional about arguments like our paper above (or my Bayesian argument). While we are emphatically not against the LHC or any high-energy research, our argument is yet another "well, but..." thrown in the way of people trying to do some lovely science. The difference, we hope, is that our caveats could actually be helpful in the future for making better risk assessments and hence building trust. Worrying about Bose-novas seems to be just rationalizing an irrational fear of accelerators.

Which brings up a curious problem: how to tell serious arguments from non-serious ones. The classic nuclear ignition of the atmosphere scenario was a serious argument, and was also dealt with seriously (it seems to me that it would apply to an accelerator beam hitting a target too). The vacuum decay argument was serious (if unlikely) and solved with the cosmic ray argument. The strangelet argument was perhaps not taken seriously enough soon enough, which led to a proliferation of worry, but the combined cosmic ray/supernova/anthropic principle argument finally took it seriously and dealt with it. Similarly, I think Mangano fixed the black hole argument. These arguments all seem serious enough. But what about the cooling system exploding (the current paper), catalytic conversion of matter into dark matter, or God punishing humans for prying into Her secrets? At what point do we find concerns irrelevant enough that physicists do not have to write papers debunking them?

Basically we have to judge them on their prior probability of being sensible. An argument that the LHC might create killer tomatoes gets an extremely low prior, while black holes get a much higher (albeit still very low) prior because they actually fit some physical theories not yet ruled out. A coherent risk scenario gets a much higher prior than an incoherent one, and one with some evidence (even if it is "evidence" only in the sense that other more or less accepted theories allow it to happen) also has priority.

But what about the divine punishment scenario? A lot of people think supernatural forces do intervene in the world, and many of them probably think there is something impious going on with all those hadrons. I might not believe in that kind of thing, but I find it hard to claim that I believe the probability of vengeful divinities existing is so much smaller than the probability that extra-dimension theories are true that we need to investigate risks from the latter but not the former [*]. And given that a sizeable fraction of humanity holds the probability of divinities to be many orders of magnitude higher, it would almost seem that the correct priority would be to first hire a bunch of theologians, and only once they had concluded their risk assessment start looking at strangelets and black holes.

[*]: However, given that in my view a couple of billion believing people are wrong (despite often having very strong convictions) and I am right, I actually seem to be claiming rather extreme certainty. My main reason for not becoming a mealy-mouthed agnostic is that I know the believers do not have uncorrelated beliefs, so their sheer number has a much smaller impact on my probability estimate than my prior and other evidence.

From a practical standpoint there is a limit to how much risk assessment we can do. It is rational to reduce the uncertainties as much as possible, which suggests that we should attack the biggest risks, or those most amenable to reduction, first. By that standard the helium issue should actually have been dealt with first, followed by the "classic" physics risks. And killer tomatoes and gods would have been skipped, since there is no real way of assessing them (hmm, have I managed to annoy the theology faculty too now?).

The problem with this "look under the streetlamp" method is that there might be sizeable but hard-to-estimate risks. The Asilomar process for genetic modification had the sensible idea of focusing attention on developing knowledge about relevant but at that point unestimable risks. This works when some risks have apparently high priors and there is some plausible way of understanding them better. But it doesn't solve the problem of the myriad tiny possibilities that just might be real, or of potentially big risks where there is no way of investigating.

Obviously social capital plays a great role in who gets their risks investigated. If the Catholic Church were concerned enough about divine interventions in the LHC, there would be a report about it. If a nobody writes a physics preprint with a potential disaster scenario, it will likely not be investigated as thoroughly as its contents rationally might demand. In many situations this social factor is rational: in a well-functioning scientific community (or society), people with good ideas and accurate predictions should drift towards prominent positions while bad predictors become obscure. It is just that the process is noisy enough that the correlation between prominence and correct relevance guesses might be too weak for this to be very effective - and there might be systematic biases, such as people making wilder predictions before becoming prominent. Improving the estimation, for example through information markets, would be a good thing.

In most situations the stakes are low, and we can safely ignore not just most unlikely possibilities but even a few likely ones, as long as the total probability or hazard missed is much smaller than the probability and hazard we actually assess. The real mess happens for global catastrophic risks, since here the stakes are so high that even ridiculously low probabilities matter for real-world decisions. We cannot ignore the burgeoning set of barely-possible risks - and since this set is so large and fuzzy, we no longer have a rational way of telling what we can safely ignore.

So there you have it: we are most likely irrational about the truly globally important threats (especially the ones we will discover actually have a pretty high chance of occurring). At least we can be pretty certain that the LHC cooling system is not going to blow up.

Posted by Anders3 at 04:54 PM | Comments (0)

October 14, 2008

The Economy of Fun

On Practical Ethics in the News I blog about Protectionist deities vs. the economy of fun: ownership of virtual possessions. Do we have a moral right to our virtual possessions in online games? As I see it, of course we have such rights, both from a classical liberal and from a leftist perspective. It is all about ensuring the maximum production of fun.

The interesting thing about this moral case is that it does not imply that ownership has to follow the usual rules: there is potential for very different in-game ownership systems, as long as they contribute to the game experience. It is simplest to use rules similar to those of the outside society, since we are familiar with them, but they could work in very different ways - something which will make taxation of online property interesting: "sorry, but that mountain is the joint property of our dwarven line-marriage".

Posted by Anders3 at 10:14 PM | Comments (0)

October 11, 2008

Putting out fires

Bill McGuire reviews Global Catastrophic Risks (eds. Nick Bostrom and Milan Cirkovic) in Nature (full disclosure: Nick is my boss and I’m good friends with Milan, but I did not contribute to the book). McGuire takes the volume to task for not taking climate change seriously enough:

“Any ranking exercise of global threats must put contemporary climate change and peak oil firmly in the first two places. Yet the latter is not even mentioned in Global Catastrophic Risks”

Not that the book actually tries to rank the threats; it represents a first stab at putting all global nastiness into a common framework. Much discussion during the GCR conference this summer was about how ranking could actually be done sensibly. It is not clear to me that peak oil would achieve sufficient nastiness to be worth putting on the same list as nuclear war, grey goo or global oppression. It might certainly be trans-generational, but it is definitely endurable. Maybe if one subscribes to the Olduvai theory it would be a GCR (but that theory is about as plausible as the Club of Rome predictions – a typical “let’s ignore technology and economics” doom theory).

“If we are to evaluate future global threats sensibly, we must distinguish between real and projected risks. We should consider separately events that are happening now, such as anthropogenic climate change and the imminent peak in the oil supply, other events that we know with certainty will occur in the longer term, notably asteroid and comet impacts and volcanic super-eruptions, and extrapolated risks, such as those associated with developments in artificial intelligence and nanotechnology, which are largely imagined.”

This is where things get interesting, and McGuire is clearly right about the distinction. But we could suffer a 1918-style pandemic today, and an IL-4-armed smallpox virus could be made and distributed by a number of actors, likely killing tens of millions if not billions.

“A mushroom cloud may hang over the distant horizon and nano-goo may be oozing in our direction, but we still need to douse the flames wrought by climate change and peak oil if we are to retain for ourselves a base from which to tackle such menaces, when and if required.”

His view seems to be that we should focus entirely on the near-term risks in order to deal with future risks later on. Again, this sounds rational. But the implications are likely not ones he would approve of. Given the current credit crisis, wouldn’t this be a valid argument for putting attempts to fix climate change on hold while the financial markets get fixed? Obviously, if there is no money for the environment in the future due to a global recession, less will actually be done. Given that climate change is going to be a slow process stretching over decade-to-century timescales, it makes sense to prioritize threats that are closer and can be more easily dealt with. One can quibble over what discounting rate to use (choose one too high and you will ignore anything beyond the near present), but as the Copenhagen Consensus has shown, there are some pretty sizeable problems that can be dealt with in the near future.

What is the rational strategy for prioritizing GCRs? What we want to maximize is (most likely) our long-term survival chances, individually and as a species. This involves reducing the total probability per year that we get killed by one GCR or another. Some probabilities are cheap and easy to reduce (get rid of bioweapon stockpiles, fund an asteroid survey), some are harder (wipe out infectious disease, build secure shelters) and some are likely extremely costly (space colonization to reduce risks). If we have a set of risk probabilities, a marginal cost of risk reduction ($ per probability unit reduced) for each risk, and a fixed budget, and we are trying to reduce the sum of the risks, then the rational strategy at each moment is to spend money on the risk that currently has the lowest cost per unit of reduced risk. This continues until that cost rises to the level of the next “cheapest” risk, at which point we divide resources between the two, and eventually we get to the third, fourth and so on.
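
For concreteness, a toy Python version of that greedy rule. All the numbers are invented, and the "each purchase raises the marginal cost" model is just a convenient assumption to create diminishing returns, not anything from the book:

    STEP = 1e8  # dollars allocated per iteration

    # name: (initial $ per unit of annual risk reduced, cost growth per purchase)
    risks = {
        "bioweapon stockpiles": (2e9, 1.5),
        "asteroid survey": (5e9, 1.2),
        "secure shelters": (4e10, 1.1),
    }

    def allocate(budget):
        cost = {name: c for name, (c, g) in risks.items()}
        spent = {name: 0.0 for name in risks}
        reduced = {name: 0.0 for name in risks}
        while budget >= STEP:
            # always buy where risk reduction is currently cheapest
            cheapest = min(cost, key=cost.get)
            reduced[cheapest] += STEP / cost[cheapest]
            spent[cheapest] += STEP
            cost[cheapest] *= risks[cheapest][1]  # diminishing returns
            budget -= STEP
        return spent, reduced

    spent, reduced = allocate(2e10)
    for name in risks:
        print(f"{name}: ${spent[name]:.1e} spent, {reduced[name]:.2e} risk reduced")

As the budget grows, spending automatically spills over to the second and third risks once the first one's marginal cost has climbed to their level - exactly the equalization described above.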

In order to do this we need to know a lot: what risks there are, how deadly they could be, and their “fixing elasticity” - how much we can reduce them per billion thrown at the problem. We don’t have that information for most risks in the book or beyond; we are just getting started here. This is what annoys me about McGuire’s criticism: he assumes he already knows the relevant information. I am pretty convinced he is wrong about that.

A lot of effort, money, research and books are already devoted to climate change and oil. The total amount of effort spent on acquiring the information described above has so far been minuscule, and it seems frankly irrational to think we know enough. If we go back in time and look at the Big Concerns of the past, they have often been deeply mistaken (dysgenics is no longer the big challenge for western civilization, and the population bomb was defused by the demographic transition). Hence we should be skeptical about overly certain claims about what we need to prioritize.

I’m also not entirely convinced that my risk-reduction strategy above is the best. If we actually try to minimize the expected total risk (with some discounting) over the whole future, the optimal strategy might look different (I’d love to do some math on this). In particular, we have to take the uncertainty in our estimates into account and model the fact that some risks change over time (by 2100 supervolcanism will be at its current level and peak oil will be resolved one way or another, but unfriendly AI will likely be a much larger threat than today). We should also consider the effect of growing technological abilities and wealth (SARS was dealt with in a way not possible 50 years ago). Some problems should be given to our children, because they will be better than us at solving them. But that requires a sense of how technology is likely to develop, how tractable some problems will be in the future, and whether problems compound over time or not. Nuclear waste may be a problem that our children will find easy, but they will berate us for spending so much effort on it while we crashed the oceans by overfishing.
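
To show what the multi-period version might look like, a rough sketch of the discounted objective. The risk levels, trends, costs and the linear model are all invented; this is a toy to show the structure, not the actual math I would want to do:

    GAMMA = 0.98  # discount factor per year

    # name: (annual probability now, trend per year, $ per unit of risk reduced)
    risks = {
        "supervolcanism": (1e-6, 0.0, 1e13),
        "unfriendly AI": (1e-8, 1e-7, 1e11),
    }

    def discounted_risk(plan, years=100, budget=1e10):
        """Total discounted catastrophe probability if plan[name] is the
        fraction of each year's budget spent on that risk."""
        total = 0.0
        for t in range(years):
            for name, (p0, trend, cost) in risks.items():
                p = max(p0 + trend * t - plan.get(name, 0.0) * budget / cost, 0.0)
                total += GAMMA ** t * p
        return total

    # compare putting the whole budget on one risk or the other;
    # with these invented numbers the growing risk is the better target
    print(discounted_risk({"supervolcanism": 1.0}))
    print(discounted_risk({"unfriendly AI": 1.0}))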

There are a lot of things to do if we are serious about reducing global risks. One of the most obvious is to actually figure out how to get better at it. When the house is on fire it is stupid not to investigate (quickly!) where it is most important to put out the fire. Otherwise you might be saving the living room as the gas tubes in the garage blow up.

Posted by Anders3 at 10:48 PM | Comments (0)

October 08, 2008

Cut the sweetie

The Daily Mail reports Talking to old people like children cuts eight years off their lives, says Yale study. Previous reports have shown that infantilized speech directed at the elderly makes them resistive to care (!), and that elderly people with positive self-perceptions of ageing lived 7.5 years longer than those with less positive perceptions. I guess the Mail story is based on a continuation of this longitudinal study.

If these findings are true, speaking down to people is more dangerous than smoking.

Now, given current attitudes to smoking, this suggests that speaking down to people or otherwise annoying them must be regarded as a public health hazard. Maybe we should first ban it in public buildings. We should tax people who cannot resist saying "sweetie" (or arrest them for assault?). And nobody is allowed to use babyspeak to anybody below 18.

Posted by Anders3 at 06:51 PM | Comments (0)

Evolution is Dead, Long Live Evolution (again)!

Practical Ethics: If evolution grinds to a halt, we move on - my blog post about Steve Jones' claim that we are not evolving any more. It is roughly the same point I made earlier: that autoevolution is better than evolution in any case.

Ah, I just noticed that I forgot to mention the evidence of recent rapid evolution, which further undercuts Jones' claim. Even if that selection has stopped, we are still a long way from equilibrium.

He also seems to claim that the current developed-world environment is as close to utopia as we could come, at least evolutionarily speaking. I doubt that. We still have more than 50% embryo mortality and a finite reproductive lifespan. And from a human perspective we could clearly do with more freedom, intelligence, creativity, beauty, happiness and material wealth. Even if the marginal returns of wealth in terms of happiness are small, wealth is very helpful against disasters and for bringing about desired things (like going into space, spreading life to other planets or just making really nice personal handmade birthday presents). And then there is the possibility of posthuman sources of value, which could be arbitrarily greater. I think our current "utopia" is going to look pretty quaint very soon.

Posted by Anders3 at 06:37 PM | Comments (0)

October 06, 2008

Does SF Predict or Inspire the Future?

On the 24th of September I lectured as part of the Royal Academy of Fine Arts exhibition Beyond Future. My subject was whether science fiction teaches us something important about the future, or whether it is "just" entertainment and art.

The strongest claim is that sf predicts the future. Plenty of things mentioned in sf have come true, even when the predictions seemed amazing at the time: space travel, computers behaving in semi-intelligent ways, biotechnology. But this could be a selection effect where we remember the successes and forget all the rayguns, flying cars, moving sidewalks and other failed predictions. As an exercise I checked the latest CNN and BBC technology headlines to see whether they had been foreshadowed in science fiction. Overall the performance was not that good; besides some areas where sf has long been active (like space and biotech), the genre had completely missed the social/economic internet that plays such a key part in current events.

A weaker possibility is that sf inspires technologists to 1) become technologists and 2) make the story ideas real. I think the evidence for this is stronger: sf is widely read among the digerati, and we have clear links between at least some sf and some technologies (such as cyberpunk stimulating VR).

Another possibility is that sf allows society to come to terms with new technologies. It stretches our minds and makes us see the unusual and arbitrary in everyday "it has always been this way" thinking. I think there is some truth to this. Unfortunately we are currently suffering from a "science fact" problem where people uncritically mix up reality with speculation about what is possible. The stretching of minds also did not seem to make people more open to biotechnology - quite the reverse. Maybe the stretching is selective and patchy like the previous cases, in that some people indeed become open-minded and future-aware, while others think Fringe depicts reality or that the main intended use of cloning is dictator-multiplying.

I did a survey of the cognition enhancement sf literature, and was actually surprised by how little insight it gave. Most stories were not very plausible from a medical or technical standpoint, but more surprising was how little was actually done with the concept and its social implications. Maybe cognition enhancement is unusually hard to write well about compared to aliens, but in terms of enhancement the literature has little to offer. There are the usual concerns about equality and about who has power over new invasive technologies, some questioning of motives and priorities (would greater minds really be useful?) and the totally obvious safety and trade-off issues. One of the few important concepts to emerge from it is the Singularity, and that is still a fairly narrow concern among us futurists. The main thing to take away from cognition enhancement sf may just be good ideas for enhancements - extra perceptual cortices for information as in The Risen Empire, multitasking à la Aristoi, and of course the eagerly awaited neurointerface.

But our experience with the Internet, VR and GPS shows that when these technologies arrive they will be buggy and cause plenty of interesting kinds of trouble. Nobody wants to write stories about spam, headset cleaning and misnavigation, though, so I think it is unlikely we will be prepared for cortical confusion, intrapersonal deadlocks and brain firewall compatibility problems. So I confidently predict that whatever the biggest problems with these technologies turn out to be, we will not be prepared for them.

Posted by Anders3 at 09:31 PM | Comments (0)

Pro moron

This made my day: Ben Goldacre linked to my post on the Durham study in his miniblog, writing "A pro weighs in on the morons." Of course, it could mean that I'm a professional moron (something I want to be open-minded about... sometimes I wonder).

Apropos idiots, I have been admiring the many kinds of idiot-proofing in computer components this weekend while building my new system. To me every connector that absolutely refuses to be inserted the wrong way is an engineering marvel we should admire: imagine how much grief they have saved! From the asymmetries of processor packages and bus connectors to the SATA ports. The only break in this idiot-proofing I saw was the identical USB and FireWire connectors on the motherboard, where the case leads actually could be mis-connected with potentially bad results. Poka-yoke is something I try to apply more and more in my life - given existing research I know just how unreliable my (and others') memory and cognition is, so it would be irrational not to add simple behaviour-shapers to improve the correctness of my behaviour. So these days I put my charging cellphone in my shoe, so that I will bring it with me when I leave.

Apropos the human error website, it refers to a study that probably demonstrates the lower bound on how error-free any voluntary human activity can be. Rabbit [1990] flashed one of two letters on a display screen, and the subject hit one of two keys in response. Even after correction, the error rate per choice was 0.6%. There are a few slightly lower error rates for other simple tasks, but personally I think 0.5% is a good rule of thumb for the lowest possible human error rate. Any task requiring fewer errors must incorporate error detection (which, when done by humans, seems to catch about 80% of errors per pass) and error correction.
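
Plugging in those numbers, a quick sketch of how the residual error rate falls with checking passes - assuming, optimistically, that the passes really are independent:

    base_error = 0.006  # errors per choice, the Rabbit [1990] figure
    catch_rate = 0.8    # fraction of remaining errors caught per human pass

    for passes in range(4):
        residual = base_error * (1 - catch_rate) ** passes
        print(f"{passes} checking pass(es): ~{residual:.1e} errors per action")
    # 0: 6.0e-03, 1: 1.2e-03, 2: 2.4e-04, 3: 4.8e-05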

If you really need to avoid errors, use several independent passes and checks, ideally with poka-yoke. But a smarter approach may be to recognize that errors will occur anyway (since even the defensive design is done by error-prone and finite humans) and to ensure that when they do happen, the system fails securely.

Posted by Anders3 at 04:58 PM | Comments (0)

October 03, 2008

Financial Placebo

Practical Ethics: Fishing outside the reef: the illusion of control and finance - I blog about the nice little Science paper about how the desire for control in chaotic situations makes people see patterns that aren't there in order to regain an illusion of control. Looking at current financial events, it seems that not just many financial but also many political decisions right now are being taken for exactly this reason.

Posted by Anders3 at 06:55 PM | Comments (0)