I gush about 2001 in an interview: The Spark: 2001, artificial intelligence and the future of humanity
The interview reminded me of just how awesome the movie is. It is in many ways *the* movie to watch to understand what our institute tries to deal with.
Gerard 't Hooft has written an excellent set of pages on How to become a GOOD Theoretical Physicist - essentially his sketch of what a person ought to know before they can do any relevant work in theoretical physics. Yes, you might have a good idea, but unless you actually know what (say) a Lagrangian is, it is pretty unlikely that you will answer any real question. And to get to Lagrangians you need a certain amount of math and mechanics.
I think the site needs a clearer dependency graph, but it is quite neat. Also, we need similar sites for other disciplines.
A while ago I participated in a discussion about cloning at the Hay-on-Wye philosophy festival. Now the footage is online: Planet of the Clones: The future of human cloning, with Hilary Rose, Anders Sandberg and Ian Wilmut, hosted by Barnaby Martin.
We had fun, agreeing and disagreeing quite amiably. Nobody thought reproductive cloning today was a good idea, but on whether it was inherently bad, and what it implied, we were all over the place.
Earlier this week I attended an excellent talk by Simon Wain-Hobson from the Pasteur Institute about gain of function (GOF) experiments on flu viruses. They are controversial because they involve making viruses more pathogenic or more transmissible; I have blogged about it before.
Simon argued that GOF research is frighteningly out of touch. The scientific benefit is debatable: the research tends to use old strains rather than currently emerging ones (so whatever is learned may not apply to our present situation), there is a bias towards more spectacular and lethal virulence because that gets published and funded, there are statistical issues with sample sizes, and there is no reason to think evolution will move the same way (it is highly contingent, so what is learned may not help make vaccines or drugs). The key experiment (doing it with humans) is unethical, and hence the hypotheses are unfalsifiable. The ethics are also deeply problematic, since the rate of lab releases is not negligible and a flu outbreak can easily kill people - outbreak sizes follow a skewed distribution with a heavy tail. A normal flu season kills hundreds of thousands of people; a release would make somebody morally responsible for part of that toll.
He pointed out several worrying GOF experiments. Ostrich H7N1 was lethal when inoculated into ferrets and did not lose that lethality when adapted to aerosol transmission: 3/5 of the animals died in both cases. More recently the Kawaoka lab assembled a 1918 flu "look-alike" from the 8 closest-matching genes found in wild strains, and engineered it to be airborne (and yes, it was able to kill ferrets). So now we have a 1918-style flu that could in theory spread.
Simon also mentioned that Kawaoka had engineered 2009 H1N1 to escape convalescent sera - that is, to make something that escapes vaccine coverage and is definitely transmissible between humans. This is a human pandemic virus. When Simon gave the talk this was just a report from the conference, but now Kawaoka has apparently published the work. The Independent writes: Controversial US scientist creates deadly new flu strain for pandemic research. The responses from the scientific community were shocked and outraged; some were downright rude.
The real scandal is that this was done in a BSL-2 lab rather than a BSL-3 lab.
Fellow flu researcher Professor Wendy Barclay at Imperial College, on the other hand, saw nothing wrong with doing the research in a BSL-2 lab: “In nature there is no containment. He’s only doing what happens in nature every day.” Which is true for Ebola too.
I think it is this blithe assumption that nothing can go wrong that will cause the real disaster. The double Pirbright foot-and-mouth disease outbreaks ought to give pause. Many organisations also seem to overlook insider threats. Simon made the back-of-the-envelope estimate that the liabilities from an outbreak traceable to a university lab could lead to class action suits big enough to ruin even the largest endowment, Harvard's.
And from an ethics standpoint it is clear that adding risk without a good reason is not right. The flu researchers seem to think they have a good enough reason, but nearly everybody else - especially fellow virologists - disagrees. Let's hope the flu researchers can be convinced by arguments (as I have been; I have definitely shifted to a more restrictive position thanks to the arguments I have read) rather than by the aftermath of an accident that leaves people dead.