April 30, 2006

Today and Tomorrow

todaytomorrow.jpg

A cute little advertising spot that could just as well be an advertisement for transhumanism as for insurance.

In general, transhumanist themes seem to be expressed quite often in advertisements. It is probably the combination of popular surrealism (Magritte-goes-CGI), the fact that advertisements, unlike most modern art, seek to lift people up, and of course the posthuman mystique of elite athletes. The same goes for many music videos.

While on the subject of online videos, Pleix deserves a mention. Besides the fun militarism of "Kish Kasch" and the satire of "Plaiditsu", "Beauty Kit" and "E-Baby", "Net-Lag" is downright brilliant: it is the world as seen from inside the net.

Posted by Anders3 at 03:45 PM | Comments (0)

April 14, 2006

Climbing to the Best Cocktail

John H. Miller, Ralph Zinner, and Brittany Barrett, Directed discovery of novel drug cocktails, SFI working paper 05-07-031 (2005)

A fun little paper showing how one can use nonlinear search algorithms (in this case an evolutionary algorithm with a bit of hill-climbing) to come up with mixtures of drugs that are more effective than their individual components. Drug cocktails are a nice way of avoiding the long development pipeline for new drugs, but they have the disadvantage that the number of possible mixtures grows exponentially with the number of component drugs. Nonlinear search might be a way to find maxima in this complex search space without having to test every combination.

The authors used cultures of lung cancer cells as the target. A population of mixtures was tested on the cells, and each mixture was scored on how well it killed the cancer cells, minus a penalty for the number of component drugs (the clearing efficiency had to be 10% greater to justify adding one new drug). After each generation, 15 mixtures were selected randomly, weighted by their scores, and copied into the next generation, together with 15 slightly mutated versions of previous cocktails.
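The loop above can be sketched in a few lines. This is a minimal illustration, not the authors' actual protocol: each cocktail is modeled as a subset of K candidate drugs, and a made-up surrogate function stands in for the cell-culture assay that supplied the real fitness values.

```python
import random

# Sketch of the search described above. Assumptions: K = 19 candidate drugs,
# population of 30 (15 score-weighted survivors + 15 mutants per generation),
# and a 10% per-drug penalty. The surrogate_kill landscape is invented.
K = 19
POP = 30
PENALTY = 0.10

rng = random.Random(42)

def surrogate_kill(cocktail):
    """Stand-in for the lab assay: random but repeatable per cocktail."""
    local = random.Random(hash(cocktail) & 0xFFFFFFFF)
    return local.random() if cocktail else 0.0

def fitness(cocktail):
    # Clearing efficiency minus a penalty per component drug.
    return surrogate_kill(cocktail) - PENALTY * len(cocktail)

def mutate(cocktail):
    # Flip one randomly chosen drug in or out of the mixture.
    return cocktail ^ frozenset({rng.randrange(K)})

# Start from random small cocktails.
population = [frozenset(rng.sample(range(K), rng.randint(1, 5)))
              for _ in range(POP)]

for generation in range(9):  # 9 generations, as in the experiment
    # Score-weighted random selection of 15 survivors
    # (scores shifted to be positive so they can act as weights).
    scores = [fitness(c) for c in population]
    shift = min(scores)
    weights = [s - shift + 1e-6 for s in scores]
    survivors = rng.choices(population, weights=weights, k=POP // 2)
    mutants = [mutate(c) for c in survivors]
    population = survivors + mutants

best = max(population, key=fitness)
```

In the real experiment every fitness evaluation is a 44-hour assay, which is why the population and generation counts are so small.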

The result was that while the average performance didn't climb much (since the new mixtures often were bad), the best performance managed to climb around 1-2 standard deviations above the initial random population. The researchers then added a few generations of hill-climbing: presumably there might be some good cocktails close to the best individual, so they tested every variant made by adding or removing one drug from the mixture, picked the best of those, and continued until no further improvement occurred. The result was a cocktail 4.19 standard deviations better than the initial mixture.
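The add-or-remove-one-drug neighborhood search can be sketched as follows. The fitness function here is a toy stand-in (a hypothetical landscape where three drugs synergize), not anything from the paper:

```python
# Greedy hill climb over the one-drug neighborhood described above.
K = 19  # number of candidate drugs

def fitness(cocktail):
    # Toy landscape: drugs 2, 5 and 11 synergize; each drug costs 0.1.
    bonus = 1.0 if {2, 5, 11} <= cocktail else 0.0
    return 0.2 * len(cocktail & {2, 5, 11}) + bonus - 0.1 * len(cocktail)

def hill_climb(start):
    """Move to the best add-one/remove-one neighbor until none improves."""
    current = frozenset(start)
    while True:
        neighbors = [current ^ {d} for d in range(K)]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(current):
            return current
        current = best

result = hill_climb({2, 7})  # converges to {2, 5, 11} on this toy landscape
```

Each hill-climbing step costs K assays (one per neighbor), so it is only cheap relative to the exponential number of possible cocktails.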

The approach looks promising but has a few problems. The authors were mainly concerned with statistical noise and the difficulty of distinguishing the best drugs, since measurements of cell density had a cut-off. Then there is the issue of new drug interactions: the cocktail still has to be tested in animals and humans once it has proven itself in vitro. Automated animal testing might be possible but would be horrendously expensive; on the other hand, side effects could then be included in the fitness function, enabling a search not only for an efficacious cocktail but for a safe one. Also, nonlinear in vivo effects, not nonlinear in vitro effects, might be the truly useful target for cocktails.

My guess is that this approach could really come into its own for automated attacks on drug-resistant bacterial strains. Maybe one could also use it in a huge egg-handling influenza vaccine/treatment testing system to deal with new strains.

A problem might be the time it takes to do the experiment. If fully automated, a population of N individuals running for G generations taking time T each can search (optimistically assuming an exponential search ability due to the schema theorem) c·2^(NG) combinations in time GT, where c is a small constant. So if there are K drugs and hence 2^K possible cocktails, we should expect a "full search" in around (K - log2(c))/N generations. Sounds nice at first. But plugging in K=19, N=30, G=9 as in this case gives an estimate of c around 2^(-251). Tiny indeed, and it probably explains why the hill-climbing was so useful.

So we may want broader parallelism, and maybe more local hill-climbing. The number of generations is what really takes time, since T was 44 hours in this case. Doubling N ought to halve the number of generations needed, and hence the total time, to a first approximation (assuming the previous wild handwaving isn't just wrong, which is very likely). But doubling N increases the cost of the system. If the increase is sublinear everything is fine, but my guess is that off-the-shelf lab equipment increases in price rather superlinearly as you scale it up.
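The arithmetic behind those estimates is short enough to check directly, under the same (admittedly handwavy) assumption that the algorithm effectively searches c·2^(NG) of the 2^K possible cocktails:

```python
# Back-of-envelope check of the estimates above.
K, N, G, T = 19, 30, 9, 44  # drugs, population, generations, hours/generation

# Solve c * 2**(N*G) = 2**K for c: log2(c) = K - N*G.
log2_c = K - N * G
print(log2_c)        # -251, i.e. c is astronomically small

# Wall-clock time of the whole experiment: G generations at T hours each.
total_hours = G * T
print(total_hours)   # 396 hours, roughly 16.5 days
```

The 2^(-251) constant is the formal way of saying that nine generations of thirty cocktails barely scratch the 2^19 = 524288 possible mixtures.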

Posted by Anders3 at 04:21 PM | Comments (67910)