October 17, 2008

It is hard not to write satire

I recently came across Malcolm Fairbairn and Bob McElrath, "There is no explosion risk associated with superfluid Helium in the LHC cooling system", arXiv:0809.4004v1 [physics.pop-ph]. At first I thought this was a clever satire: LHC physicists getting back at their critics by writing an apparently serious rebuttal of a patently absurd new physics threat. Then I realized it was a response to real claims made in court.

Basically the paper rebuts the claim that the cooling helium could become a superfluid, undergo a "Bose-nova" collapse and detonate due to cold fusion. There are solid reasons to think there is no such risk: the long track record of liquid helium in many other facilities, the spin structure not allowing BECs of the kind seen in Bose-nova experiments, and the requirement that any fusion proceed through the three-body 3 × 4He → 12C process.

Even applying the kind of error argument we have used in our paper on the limitations of physics risk arguments, the risk comes out extremely low. I count about 7 independent arguments, so if each had a failure rate of 10^-3 the total chance of failure would be 10^-21. Even assuming a one in a hundred failure rate for each would still give a total risk of less than 10^-14.
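
The arithmetic here is just exponentiation of independent failure rates; a minimal sketch (my own illustration, assuming the seven arguments really do fail independently):

```python
# Back-of-the-envelope check of the failure-rate arithmetic above
# (my own illustration; assumes the seven arguments fail independently).

def combined_failure_probability(p: float, n: int) -> float:
    """Probability that n independent arguments, each wrong with probability p, all fail."""
    return p ** n

print(combined_failure_probability(1e-3, 7))  # ~1e-21
print(combined_failure_probability(1e-2, 7))  # ~1e-14
```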

It is things like this that make me understand why physicists are somewhat emotional about arguments like our paper above (or my Bayesian argument). While we are emphatically not against the LHC or any high-energy research, our argument is yet another "Well, but..." thrown in the way of people trying to do some lovely science. The difference, we hope, is of course that our caveats could actually be helpful in the future for making better risk assessments and hence building trust. Worrying about Bose-novas seems to be just rationalizing an irrational fear of accelerators.

Which brings up a curious problem: how to tell serious arguments from non-serious ones. The classic nuclear ignition of the atmosphere scenario was a serious argument, and was also dealt with seriously (it seems to me that it would apply to an accelerator beam hitting a target too). The vacuum decay argument was serious (if unlikely) and settled with the cosmic ray argument. The strangelet argument was perhaps not taken seriously enough soon enough, which led to a proliferation of worry, but the combined cosmic ray/supernova/anthropic principle analysis finally treated it seriously and dealt with it. Similarly I think Mangano fixed the black hole argument. These arguments all seem serious enough. But what about the cooling system exploding (the current paper), catalytic conversion of matter into dark matter, or God punishing humans for prying into Her secrets? At what point do we find concerns irrelevant enough that physicists do not have to write papers debunking them?

Basically we have to judge them on their prior probability of being sensible. An argument that the LHC might cause killer tomatoes gets an extremely low prior, while we would assign black holes a much higher (albeit still very low) prior because they actually fit some physical theories not yet ruled out. A coherent risk scenario gets a much higher prior than an incoherent one, and one with some evidence (even if it is "evidence" only in the sense that other more or less accepted theories allow it to happen) also gets priority.

But what about the divine punishment scenario? A lot of people think supernatural forces do intervene in the world, and many of them probably think there is something impious going on with all those hadrons. I might not believe in that kind of intervention, but I find it hard to claim that the probability of vengeful divinities existing is so much less than the probability that extra dimension theories are true that we need to investigate risks from the latter but not the former [*]. And given that a sizeable fraction of humans hold the probability of divinities to be many orders of magnitude higher, it would almost seem that the correct priority would be to hire a bunch of theologians first, and only once they had concluded their risk assessment start looking at strangelets and black holes.

[*]: However, given that my view is that a couple of billion believing people are wrong (despite often having very strong convictions) and I'm right, I actually seem to claim a rather extreme certainty. My main reason for not becoming a mealy-mouthed agnostic is that I know the believers do not have uncorrelated beliefs, so the sheer number of them has a much smaller impact on my probability estimate than my prior and other evidence.
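
A toy way of seeing the correlation point (my own sketch with made-up numbers, not anything in the argument above): testimony from a billion believers only swamps a sceptical prior if each testimony is treated as independent evidence; if the beliefs largely trace back to a handful of shared traditions, the effective update is small.

```python
import math

# Toy Bayesian sketch of why correlated testimony counts for less.
# All numbers are hypothetical and purely illustrative.

def log10_posterior_odds(log10_prior_odds: float, log10_likelihood_ratio: float,
                         effective_independent_sources: float) -> float:
    """Posterior odds (in log10) after updating on testimony that is
    equivalent to a given number of independent sources."""
    return log10_prior_odds + effective_independent_sources * log10_likelihood_ratio

log_prior = -6.0          # hypothetical sceptical prior odds of 10^-6
log_lr = math.log10(1.5)  # hypothetical weight of one independent testimony

# Two billion believers treated as fully independent would swamp any prior...
print(log10_posterior_odds(log_prior, log_lr, 2e9))  # ~3.5e8, astronomical odds
# ...but if their beliefs reduce to ~10 effectively independent traditions,
# the prior still dominates the estimate.
print(log10_posterior_odds(log_prior, log_lr, 10))   # ~ -4.2
```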

From a practical standpoint there is a limit to just how much risk assessment we can do. It is rational to reduce the uncertainties as much as possible, which suggests that we should attack the biggest ones, or those most amenable to reduction, first. By that standard the helium issue should actually have been dealt with first, followed by the "classic" physics risks. And killer tomatoes and gods would have been skipped, since there is no real way of assessing them (hmm, have I managed to annoy the theology faculty too now?).

The problem with this "look under the streetlamp" method is that there might be sizeable but hard-to-estimate risks. The Asilomar process for genetic modification had the sensible idea of focusing attention on developing knowledge about risks that were relevant but at that point unestimable. This works when some risks have apparently high priors and some plausible way of understanding them better exists. But it doesn't solve the problem of the myriad tiny possibilities that just might be real, or of potentially big risks where there is no way of investigating them.

Obviously social capital plays a large role in whose risks get investigated. If the Catholic Church were concerned enough about divine intervention in the LHC, there would be a report about it. If a nobody writes a physics preprint with a potential disaster scenario, it will likely not be investigated as thoroughly as its contents rationally might demand. In many situations this social factor is rational: in a well-functioning scientific community (or society) people with good ideas and accurate predictions should drift towards prominent positions, while bad predictors should become obscure. It is just that the process is noisy enough that the correlation between prominence and correct relevance guesses might be too weak for this to be very effective - and there might be systematic biases, such as people making wilder predictions before becoming prominent. Improving the estimation, for example through information markets, would be a good thing.

In most situations the stakes are low, and we can safely ignore not just most unlikely possibilities but even a few likely ones, as long as the total amount of probability or hazard missed is much smaller than the probability and hazard we actually assess. The real mess happens for global catastrophic risks, since here the stakes are so high that even ridiculously low probabilities actually matter for real-world decisions. Because small probabilities still matter, we cannot ignore the burgeoning set of barely-possible risks - and since this set is so large and fuzzy we no longer have a rational way of telling what we can safely ignore.
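
To make the point about tiny probabilities and huge stakes concrete, a rough expected-loss sketch (my own illustrative numbers, not an estimate of any actual hazard):

```python
# Expected-loss sketch for a global catastrophic risk.
# The numbers are purely illustrative, not estimates of anything real.

world_population = 7e9  # rough order-of-magnitude figure

for p in (1e-9, 1e-12, 1e-15):
    expected_deaths = p * world_population
    print(f"P = {p:g}: expected deaths = {expected_deaths:g}")

# P = 1e-09: expected deaths = 7
# P = 1e-12: expected deaths = 0.007
# P = 1e-15: expected deaths = 7e-06
# Even "ridiculously low" probabilities translate into expected losses that
# ordinary risk management would not dismiss out of hand.
```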

So there you have it: we are most likely irrational about the truly globally important threats (especially the ones we will discover actually have a pretty high chance of occurring). At least we can be pretty certain that the LHC cooling system is not going to blow up.

Posted by Anders3 at October 17, 2008 04:54 PM