July 30, 2004

Heavenly Earth

Conquering Heaven on Earth by Lene Johansen brings up an interesting issue: as our economy advances, what will we use it for, and what will become the new status objects?

The basic argument could perhaps be described as "divergent Maslowian": as we make the bare necessities cheap enough, people start spending on the next step in the ladder of needs. But each level offers far more ways of satisfying the need than the one below. We can only get oxygen by breathing, but we can vary food quite a bit. In fact, once the basic nutrition demand is satisfied we will add taste, culture, ethics, political and status signals to it (a bit of spicy freedom fries with the venison, anybody?). And the ways we signal socially are even more diverse, since they are so unconstrained by reality.

As we become better at producing goods to satisfy desires we run into problems. Hardwired desires for high-calorie food to ensure survival clash with ready availability and produce obesity. It is interesting to note that over-drinking is not the same problem as over-eating, since the threshold where the drive shuts off appears to be much sharper. Since we cannot retain water efficiently there is no point in drinking too much, and evolution has not favored overdrinking. It is the drives that do not saturate fast that lead to trouble.

Will we get similar problems when we get good at producing services for every desire? My guess is that eventually most sexual needs will be more efficiently satisfied using artificial systems ("technogamy") than other humans, which is probably just as well. But until the artificial systems around us also produce the same emotional and social interactions as the humans we have evolved to co-live with, people will still have unmet emotional needs for each other.

[ I don't see any reason why we wouldn't reach the point where ultrasocial AI also would fulfill our emotional needs, but at that point the assumptions of the scenario also imply the existence of human-level, emotionally humanoid AI, and we end up with a mixed human/AI society. With AIs as potential consumers, of course; they will have needs too. ]

So my guess is that in a world with effective service production the big unsaturated drives will be mostly social and self-actualizing. Which doesn't seem as risky as overeating from our current perspective. That may well turn out to be very wrong; one could imagine a material paradise where people spend most of their time engaged in social competition and climbing. Hmm, sounds like quite a few reality soaps, in fact.

But such drives are less constrained, and hence can go in many more directions than just "more". The one exception might be social rank, which in humans seems to be pretty scalar. But being recognized by others can be achieved by being extroverted, funny or good-looking, by having achieved something or trying to achieve something, or through a thousand other possibilities. This means there is competition only for attention and social time. The interesting thing is that such unconstrained happiness-seeking seems to be never-ending and fundamentally creative.

In the end the difference might be that some forms of happiness are consummatory: we consume something, and then we are satisfied. Others are productive: we create or do something, and we are happy when we do it. Compare it with ordinary pleasure and eudaimonic flow.

So maybe we are moving towards heaven on earth (thanks to free markets, free competition and free creativity), but it seems plausible that the immensely happy people of this world will consider themselves engaged in fierce competition and hard work. Just as we think we live hard in this modern world.

Posted by Anders at 10:22 PM | Comments (0)

Small Solar Systems

While passing by the Smithsonian Aerospace Museum yesterday, I noticed the solar system scale model along the sidewalk of Jefferson Avenue. A gilded sphere about 15 cm across was the sun, and along the next 600 meters plates (on Star Trek-esque stands) marked the planets and gave information about them. Each planet was a tiny dot on an oval glass screen.

Such scale models are fairly common. One of the largest is the Swedish Solar System scale model (once officially completed, it will likely become the largest). Here the spherical Globen arena in Stockholm is used as the sun, with more human-scale planet sculptures placed across a good part of Sweden. Pluto ends up in Delsbo (about 300 kilometers away), and the Sun's termination shock in Kiruna (950 km). The Venus model is near me at the Royal Institute, and it is pretty neat: a meter-sized sphere with inscriptions recording its properties (such as H2SO4 for the sulphuric acid). Several of the sculptures of the other planets seem to be rather nice too, like the tomb-like monument around the Pluto model or the little gold model of Eros.

Which system model is the best? The Smithsonian one is sized to give an impression of the width of the system. One can walk along it with reasonable exertion. But the planets are disappointingly tiny dots. The Swedish one has SCALE - knowing the sheer size of everything involved really brings home how large the system is. But one can never see the other planets; there will not even be a line of sight between the Moon and Earth models.

In a sense all attempts at showing the scale fail because there are two very different size scales one wishes to show: the scale of the orbits and the entire system, and the scale and features of the planets themselves. But they are so different that they cannot be shown at the same time: either the system becomes graspable but the planets become meaningless dots, or the planets become plausible worlds but the system becomes unmanageably large.
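
To make the dilemma concrete, here is a back-of-the-envelope calculation in Python (the 15 cm sun is the Smithsonian one from above; the astronomical values are the standard ones):

    # Toy scale-model calculator: pick a model sun, see what follows.
    SUN_D = 1.392e9  # solar diameter in metres
    AU = 1.496e11    # astronomical unit in metres

    bodies = {  # name: (diameter in metres, orbital radius in AU)
        "Earth": (1.274e7, 1.0),
        "Jupiter": (1.398e8, 5.2),
        "Pluto": (2.376e6, 39.5),
    }

    model_sun = 0.15           # a 15 cm gilded sphere
    scale = model_sun / SUN_D  # about 1:9.3 billion

    for name, (diameter, orbit) in bodies.items():
        print(f"{name}: ball {diameter * scale * 1000:.2f} mm "
              f"at {orbit * AU * scale:.0f} m from the sun")
    # Earth: a 1.4 mm ball 16 m away; Pluto: a 0.26 mm speck 637 m away.
    # A graspable walk, but the planets are nearly invisible dots.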

It is of course possible to cheat by using different scales for orbits and planets, or a logarithmic approach, but that is misleading or confusing. Far too many bad science illustrations and works of science fiction give the impression that space is crammed with planets. Similarly, the vast difference in size between Jupiter and Earth is often hidden. These misconceptions should not be perpetuated.

In the end it might be that the only good way of getting a simultaneous feeling for the planets themselves and their distances is 3D astronomy programs like Celestia that allow you to fly between the planets. Simulations instead of models, removing us from some of the limitations of street space. But until the simulations give us a strong sense of presence, the solidity of these walkable models still has something important to contribute.

Posted by Anders2 at 05:12 PM | Comments (1)

July 28, 2004

Computational Neuroscience 2004

If the SFN Neuroscience Conference is a big city, the CNS conference is a small village where everybody knows everybody else. Although computational neuroscience is maturing as a field and becoming increasingly part of normal neuroscience, it still retains that small-scale feeling. Part of it may be due to the deliberate culture creation of the EU Advanced Neuroscience Course and the Woods Hole courses, part might be that the PIs know each other from when the field was even smaller, but to a large degree it is due to the CNS conferences. A sizeable percentage of the researchers in the field go there every year and it is a good way of seeing what is going on.

Here are some of my notes about interesting talks and presentations at the conference.

Mary Kennedy opened with a talk about simulation of the biochemical signalling in synaptic spines. We are now close to characterising most major proteins involved in the signalling cascade in the spine that underlies the induction of synaptic change and hence much of learning. The downside is that even in what looks like fairly straightforward cascades many surprising interactions can occur. She brought up one of my favourites: calmodulin binds up to four calcium ions, but it seems that it has different effects depending on the number of ions (1-2 or 3-4), since it will activate different parts of the cascade due to various subtle cooperativities and competitive processes. Even worse, all the usual assumptions behind Michaelis-Menten dynamics (everything is well mixed, there is a large number of molecules and substrate molecules far outnumber enzymes) are likely violated in the spine. However, there was one poster by someone who had done Monte Carlo simulations of the spine, apparently reaching the conclusion that MM is still a good approximation. Let's hope that remains true. In any case, a good talk that gave me the feeling that we might be reaching the inflexion point in the knowledge sigmoid about the synaptic spine.
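
To see why the assumptions are suspect, note that at a typical 1 µM concentration a 0.1 µm³ spine head holds only about 60 molecules. Here is a minimal sketch of the standard way to test MM against such numbers, a Gillespie simulation of E + S <-> ES -> E + P (rate constants and counts are invented for illustration; this is not the poster's actual model):

    import numpy as np

    rng = np.random.default_rng(0)
    k1, k2, k3 = 0.01, 1.0, 0.5  # stochastic rate constants (illustrative)

    def gillespie(E=10, S=60, ES=0, P=0, t_end=20.0):
        """Exact stochastic simulation with discrete molecule counts."""
        t, traj = 0.0, [(0.0, P)]
        while t < t_end:
            a = np.array([k1 * E * S, k2 * ES, k3 * ES])  # propensities
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)
            r = rng.choice(3, p=a / a0)
            if r == 0:
                E, S, ES = E - 1, S - 1, ES + 1    # binding
            elif r == 1:
                E, S, ES = E + 1, S + 1, ES - 1    # unbinding
            else:
                ES, E, P = ES - 1, E + 1, P + 1    # catalysis
            traj.append((t, P))
        return traj

    # Deterministic MM prediction for comparison: dP/dt = Vmax*S/(Km+S)
    # with Vmax = k3*E_total and Km = (k2+k3)/k1. Averaging many runs
    # against that curve is exactly the kind of check the poster reported.
    print(gillespie()[-1])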

Beth L. Chen gave a wonderful talk titled "Why are most neurons in the head?". She had explored what happens if neurons are placed so as to minimise the cost of the axonal connections between them and the sensory organs and muscles they subserve. The sensors and actuators set the boundary conditions, and the cost in her model was quadratic (essentially it was a spring layout algorithm, without any node repulsion). The model system was C. elegans, and her model did a surprisingly good job at predicting where neurons should go. She then went on to study the outliers, the neurons for which her model predicted a different location than the real one. It turned out that in all cases there was a clear explanation: either they were pioneer neurons that helped others grow in the right direction during development, or command neurons for forward/backward movement. Very nice.
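
The quadratic cost makes the optimisation almost trivial to compute: setting the gradient to zero says that each free neuron should sit at the mean of its neighbours, which is a linear system. A toy 1-D sketch with an invented mini-network (not her C. elegans data):

    import numpy as np

    edges = [(0, 2), (1, 2), (2, 3), (3, 4), (3, 5)]  # invented connectivity
    fixed = {0: 0.0, 1: 0.1, 4: 0.8, 5: 1.0}          # sensors/muscles, body axis
    n = 6

    L = np.zeros((n, n))  # graph Laplacian of the wiring diagram
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1

    # Minimise the sum of (x_i - x_j)^2 over edges with clamped boundary
    # nodes: solve L_ff x_f = -L_fc x_c for the free neuron positions.
    free = [i for i in range(n) if i not in fixed]
    clamped = sorted(fixed)
    xc = np.array([fixed[i] for i in clamped])
    xf = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, clamped)] @ xc)
    print(dict(zip(free, xf.round(3))))  # optimal positions of the free neurons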

Her advisor Dmitri B. Chklovskii gave another good talk, about synaptic connectivity and neuron morphology. He started out with the wiring problem: how to wire a fixed number of neurons to each other using as little wiring as possible. Using direct point-to-point connectivity (one axon per pair), the total axon volume ended up 30,000 times larger than reality. With branching axons the volume was about 100 times too large; with branching axons and dendrites, just twice as large; and with synaptic spines the total volume ended up 60% lower, which also leaves room for glia and blood vessels. Spines appear to be rather dynamic compared to axons and dendrites, appearing and disappearing at their near-crossings.

Ruggero Scorcioni described reconstruction of axon arbors from scans; most of the neurons reconstructed up until today have been mainly soma and dendrites. Adding the axons makes them far more impressive, and hints at tricky interconnections. Overall the work in Giorgio Ascoli's group at the Krasnow Institute at GMU on reconstructing neurons and then processing their morphology is producing some interesting results. Alexei Samsonovitch demonstrated very plausible models of granule and pyramidal cells based on hidden Markov models. They could also show statistically that dendrites grow away as if repelled by the cell body. This is rather tricky to explain biologically (if it were a chemical repellent the dendrites would be affected by all other cells too; if it were cytoskeletal rigidity the dendrites would not be able to bend and then regain their original direction as they do). They suggest that one way of achieving it could be that cell spiking produces external pH gradients that guide the dendrites, which would be a rather impressive phenomenon.

Yael Niv presented some simulations relevant to the debate about what the dopamine signal represents in the brain. According to the results of Schultz et al. the signal encodes the temporal difference between reward and expected reward, just as in TD-learning: an unexpected reward creates a rise, the absence of an expected reward a decrease. But more recent results (Schultz et al. 2003) have suggested that it could be just a signal of uncertainty. Yael presented simulations showing how those results could be reproduced in the normal TD-learning case; it is all about distinguishing inter- and intra-trial phenomena.
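
The standard TD story is compact enough to reproduce in a few lines; a minimal tabular sketch (of the textbook model, not Yael's actual simulations):

    import numpy as np

    # Tabular TD(0): a cue at t=0 predicts a reward at the end of the trial.
    T, alpha, trials = 10, 0.1, 500
    V = np.zeros(T + 1)  # learned value of each time step within the trial

    for _ in range(trials):
        for t in range(T):
            r = 1.0 if t == T - 1 else 0.0   # reward arrives at the last step
            delta = r + V[t + 1] - V[t]      # TD error ~ phasic dopamine
            V[t] += alpha * delta

    # After learning, the prediction error has moved to the cue:
    # at cue onset delta = V[0] - 0 (transition from the zero-value
    # intertrial state), while the predicted reward itself evokes nothing.
    print("delta at cue onset:", round(V[0], 2))               # ~1
    print("delta at predicted reward:", round(1 - V[T - 1], 2))  # ~0
    print("delta if reward omitted:", round(0 - V[T - 1], 2))    # ~-1, a dip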

Erik Fransén showed how co-release (some synapses release two neurotransmitters, and sometimes the postsynaptic neuron signals back!) and conditioning enable some synapses to switch from inhibitory to excitatory under certain conditions. A very odd phenomenon that could be a curiosity or very deep and important.

Terry Sejnowski gave an overview of his greatest hits, or perhaps rather the greatest spike train hits. Neurons really seem to have a high degree of fidelity: give them the same input signals, and they respond in the same way. Even when they miss certain spikes they leave them out in neat patterns. The precise patterns seem to repeat even across individuals (Reinagel & Reid 2000)! As he remarked, this is not neural coding but anti-coding: different codes for the same stimulus, very similar but leaving out different parts. He suggested that what we are seeing are temporal attractors, trajectories that are fairly stable against disruptions. One function could be to handle noisy and uncertain situations, where the temporal structure matters, while clear-cut and strong stimuli are just rate coded.

Alexander Grushkin presented a model of the emergence of hemispheric asymmetry, where a genetic algorithm got to evolve connection weights under different fitness functions. It appears that lateralisation occurs when one needs to make trade-offs between accuracy, response time and energy consumption.

Eugene Izhikevich gave a great talk where he compared different neuron models. He plotted computational complexity vs. number of features (like being able to burst, show calcium spikes etc.) and did a kind of consumer report. He especially picked apart why the integrate-and-fire neurons are bad, and then went on to show how in most neuron models the important thing is to get the vicinity of the fixed points corresponding to the resting state and the spiking threshold right. He then showed a general form that could reproduce all the ~30 features he had listed while keeping the computational demands extremely low. The audience could not tell his simulated neurons apart from real neurons. Matlab files can be found at www.izhikevich.com.
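
The model is compact enough to quote; here is a sketch following the published simple model (the parameters below give regular spiking; other choices of a, b, c and d give bursting, chattering and the rest of his feature list):

    # Izhikevich's simple model:
    #   v' = 0.04 v^2 + 5 v + 140 - u + I
    #   u' = a (b v - u);  if v >= 30 mV then v <- c, u <- u + d
    a, b, c, d = 0.02, 0.2, -65.0, 8.0  # regular spiking parameters
    v, u = -65.0, b * -65.0             # initial conditions
    I, dt = 10.0, 0.5                   # input current, time step (ms)

    spikes = []
    for step in range(int(1000 / dt)):  # one second of simulated time
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    print(len(spikes), "spikes; first at", spikes[0], "ms")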

As a finale, Miguel Nicolelis presented his work on neural recording from monkeys and using the recordings to control robot arms. Doing up to 300 simultaneous recordings is now feasible, and one can do fascinating data mining: this is really large-scale neuroscience. One application was tracking the appearance of Parkinson symptoms in DAT knockout (DAT-KO) mice over the span of a few hours and during their recovery when given l-dopa – a very good look into a complex dynamical process. Another application is real-time neurophysiology, where the signals can be used as inputs to computational models whose output can then be compared with brain activity and behaviour. One result of his monkey work seems to be that the brain changes to incorporate any tool we use to manipulate the environment into our body representation (as he put it, to Pelé the football was an extension of the foot). Another wild and stimulating claim was that there is no neural code: we may be constantly changing our internal representations. During the workshop on internal representations this was further debated, both from a semantic perspective (what we call a neural 'code' is often just a correlation between a signal and a behaviour/stimulus) and from practical perspectives. Regardless of the representation issue it is clear (as Krishna Shenoy showed in his lecture) that we are getting better and better at building decoders that translate neural data into real-world action that fits reasonably well with the desired result. Maybe we will figure out the representations long after the neural prosthetics and implants have become commonplace.
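
The decoding side is conceptually simple, whatever the representation turns out to be. A toy sketch of the generic linear approach (simulated tuned neurons decoded by least squares; not Nicolelis' or Shenoy's actual pipelines):

    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons, n_samples = 50, 2000

    pref = rng.normal(size=(n_neurons, 2))            # preferred directions
    vel = rng.normal(size=(n_samples, 2))             # "true" hand velocities
    rates = vel @ pref.T + 5.0                        # linear, cosine-like tuning
    rates += rng.normal(scale=2.0, size=rates.shape)  # recording noise

    # Fit a linear decoder (with intercept) on half the data, test on the rest.
    X = np.hstack([rates, np.ones((n_samples, 1))])
    W, *_ = np.linalg.lstsq(X[:1000], vel[:1000], rcond=None)
    pred = X[1000:] @ W
    r = np.corrcoef(pred[:, 0], vel[1000:, 0])[0, 1]
    print(f"decoded vs. true velocity correlation: {r:.2f}")  # ~0.9+ in this toy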

Among the posters I found a few titbits:

Andreas Knoblauch had done a bit of statistical analysis of the weight distribution in Hebbian learning, naming the particularly nasty non-monotonic weight distribution one gets with a relatively sparse network the 'Willshaw distribution'. Doing a bit of statistics with this enabled him to construct a very neat spiking Willshaw-like network with an almost suspiciously high capacity and noise tolerance. Perhaps not very biologically plausible, but good for hardware implementation.
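
For comparison, the classic non-spiking Willshaw net fits on one screen; a minimal sketch with made-up sizes (Knoblauch's contribution was the weight statistics and the spiking version, not this baseline):

    import numpy as np

    rng = np.random.default_rng(2)
    n, k, n_pat = 1000, 20, 100   # units, active units per pattern, patterns

    pats = np.zeros((n_pat, n), dtype=bool)
    for p in pats:
        p[rng.choice(n, k, replace=False)] = True

    W = np.zeros((n, n), dtype=bool)  # clipped Hebbian weights:
    for p in pats:                    # W_ij = OR over patterns of (p_i AND p_j)
        W |= np.outer(p, p)

    # Recall from a degraded cue: keep the k units with the largest input sums.
    cue = pats[0].copy()
    cue[np.flatnonzero(cue)[:5]] = False   # delete a quarter of the pattern
    sums = W.astype(int) @ cue.astype(int)
    out = np.zeros(n, dtype=bool)
    out[np.argsort(sums)[-k:]] = True
    print("overlap with the stored pattern:", (out & pats[0]).sum(), "/", k)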

A poster by Roberto Fernández Galán demonstrated the formation of what looked like Hebbian cell assemblies in the honeybee olfactory system: cells that became correlated by the smell remained correlated afterwards, as did the cells that became anti-correlated with each other. Nice to know that the same dynamics seems to recur from insects to mammals, when one compares with the McNaughton and Buzsáki experiments in rats and monkeys.

I presented a poster about STDP together with Erik Fransén. Our main question was how to achieve a high degree of temporal precision (milliseconds) in the STDP curve when the molecules involved in the process have reaction time constants on the order of hundreds of milliseconds. Our approach was to use autocatalytic amplification to scale up the minute differences caused by different calcium traces: after a few hundred milliseconds they were big enough to start irreversible synaptic changes, and yet the process reliably detected the very fast transitions.
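
For readers unfamiliar with the phenomenon, the target is the standard pairing window: potentiation when the presynaptic spike precedes the postsynaptic one by a few milliseconds, depression the other way around. A phenomenological sketch of that curve (illustrative constants, not our biochemical model):

    import numpy as np

    A_plus, A_minus = 0.01, 0.012      # LTP/LTD amplitudes (illustrative)
    tau_plus, tau_minus = 20.0, 20.0   # decay time constants (ms)

    def stdp(dt):
        """Weight change for one pairing; dt = t_post - t_pre in ms."""
        if dt > 0:   # pre before post: potentiation
            return A_plus * np.exp(-dt / tau_plus)
        return -A_minus * np.exp(dt / tau_minus)  # post before pre: depression

    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt:+3d} ms -> dw = {stdp(dt):+.4f}")
    # The puzzle: this curve flips sign over a millisecond or two, while the
    # kinases and phosphatases implementing it act over hundreds of ms.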

This model remains rather conceptual, so it was fun to compare with the talk given by Mary Kennedy and try to map the known chemistry onto our variables. In our case we get the prediction that there should be some autocatalytic enhancement of the calcineurin system. Richard Gerkin also presented a conceptual STDP model that did rather well; their addition of a "veto variable" to prevent overshoot fits with some of the ideas Erik and I have discussed. In our model we just assume that the synapse knows how much calcium comes from the presynaptic spike and how much from the backpropagating action potential. While this could be done using proteins located near different ion channels, a more plausible approach is to have proteins with different affinities that activate the different autocatalytic chains. But then one needs to prevent an erroneous LTD when the calcium concentration declines after the spikes, and here some form of deactivation is needed.

Finally, Anatoli Gorchetnikov demonstrated that a simple rule of multiplying the membrane current and potential could produce workable STDP in a cell model. So regardless of what one thinks STDP is good for (there were lots of posters about that), it seems that we are starting to figure out its implementation, from the phenomenology down to the biochemistry.

All in all, a very stimulating conference. The only letdown was the lack of the Rock’n-roll jam session from previous years. The field may mature, but it would be boring if the researchers did.

Posted by Anders2 at 05:42 PM | Comments (2)

Worldbuilding Light

A mini-review of M. John Harrison's novel Light.

Most magic tricks are uninteresting in themselves. Seeing a handkerchief disappear is in itself not very interesting; it is the framing that makes it so: the build-up of suspense, the ritual of the magic trick, the intention to surprise. In the same way it is quite possible to have a story that is all suspense and style – a quite successful story – and yet the core events are not in themselves very interesting. M. John Harrison's Light thrives on weirdness, vivid scenery, odd characters and throwaway details that hint at a greater world. But the core story is in the end not very impressive: it doesn't need to be, it is the handkerchief magically disappearing and re-appearing.

The setting is partly present-day England, where a disturbed physicist flounders through life, partly in a future where mankind has spread through the galaxy towards the Kefahuchi Tract. The Tract is an immense astrophysical anomaly, something so different from everything else that countless alien species have gone there to study it over the last billennia. They left behind artefacts and megascale engineering the humans could mine, producing a dynamic and somewhat violent economy. In this setting the book follows a confused man who has spent several years in virtual reality and a renegade starship pilot fused with her ship. Needless to say, their apparently random paths through the world of entradistas, reality circuses, romantically packed incomprehensible alien artefacts, murder and queer physics are actually entangled.

I prefer to read books for ideas and worldbuilding. And there are plenty of them here, from the shadow operators, AIs that behave like caring but neurotic elderly ladies, through the New Men who conquered the Earth and then mismanaged it in their friendly, ineffectual way until they ended up in ghettos themselves, to the possibility that every species invents its own hyperdrive based on incompatible physics. But by moving into a universe where literally anything seems to be possible the author gains too much freedom: things do not have to make sense, there is no need to be consistent. It is not just a matter of advanced technology appearing magical; there seem to be quite literally infinite possibilities here. Light survives this as a novel by concentrating on the characters and their personalities, but it makes it rather weak in the worldbuilding department – many great concepts, but merely sketches of how the entire world hangs together. That probably doesn't matter to most readers, but for me it feels like cheating. I want to know how the economics and society of Radio Bay hang together, and exactly what kind of bizarre orbits the artificial planets move along.

In the end Light gave me the same feeling of inventive energy as an Iain M. Banks novel, but I'm one of those people who think it is far more interesting to learn how a magic trick is performed than to just see it.

Posted by Anders2 at 05:37 PM | Comments (0)

July 16, 2004

Bedtime Stories for Nanotechnology

Cyborg democracy brings up Lawrence Lessig's criticism of how the nanotechnology establishment has grown negative towards molecular manufacturing.

Lessig's critique largely mirrors my own criticism (cf. Nanotechnology: Losing the Revolution and Smurfy Nanoethics). In order to establish itself as a serious, important and fundable field, the researchers flocking to nanotechnology wanted to tone down anything that seemed too alarmist, too much science fiction. The result is a field that is profoundly uneasy with discussions about long-range future goals and about how it actually came into existence (everybody mentions Saint Feynman ritually, almost as if to ward off spooky Drexler).

Cyborg Democracy bemoans Lessig's bemoaning that the political process is plagued by irrationality, and correctly points out that it is the one we have and it is better to have a funding-savvy advocate than none. But I think it is wrong to assume that we should be complacent about fields getting twisted to fit what funding agencies would like.

Technological lock-in is a real problem: if all nanotech funding is emphatically going to non-molecular structures, then progress, investment, manufacturing procedures and eventually products will be based on that. It has a chance of becoming a technological paraprax (see section V) that limits development in other areas of nanotechnology, rather than being a stepping stone towards the full potential of the field. Compare with how semiconductor technology has grown to become the dominant computing technology; optronics could likely do the same or even better but will only be developed for niche applications or when Moore's law hits the wall. It is not necessarily bad that nearly all effort in a field goes in a certain direction, but the direction should ideally be chosen because it is the most workable or has the greatest potential, not because it is the least upsetting to politicians. Imagine that stem cell research became focused solely on adult stem cells, spending enormous time, energy and money on overcoming problems that embryonal stem cells do not have. In the end we might still end up with workable stem-cell technology, but the path would not have been chosen for its efficiency, and many valuable research contributions (e.g. in embryonal differentiation and gene imprinting) would be foregone.

I think molecular nanotechnology will come to pass, but it will not be called nanotechnology but biotechnology. For a few years at the end of the 90's I shifted from my old opinion that wet nano was the likely way to reach true nanotechnology, impressed by the advances made in the surface physics and fullerene camp. But now these political developments are returning me to the old conviction. Biotechnology is a burgeoning field able to stand on its own. It can easily encompass the idea of molecular machines, the redesign of biological components for industrial processes and the use of complex systems as tools. It will likely absorb many molecular nanotechnology refugees from chemistry and physics, and the first unequivocal molecular nanomachines will be biological.

Second, allowing fears of being seen as too radical or potentially risky to control how a technology is developed runs the risk of backfiring. Nanotechnology opponents are making radical claims, and they won't go away just because Smalley calls them tales to scare children. It is the tales that convince us as humans, not the rational arguments. Biotechnology made that mistake and has paid for it dearly. When the researcher cannot show a vision of what a technology is intended to be used for, in what framework and according to what ethical principles, the winner in a confrontation will be the opponent who tells the story about the potential risks, how it fits in with some evil order (capitalism, world government, what have you) and how it is basically unethical.

And who does the politician listen to? Once the technocrats had the ears of the political class (and they largely overlapped, at least here in Sweden), but no more. Today the politician gets his worldview and much of his analysis from the media and less from the science advisor. In addition, more and more funding is politicized. The politician will act according both to his desire to be re-elected and to whether he is convinced a certain technology is promising or not. It is in his interest to channel the fears of the population into calming pronouncements about starting new oversight, new safety studies or even stopping potentially dangerous research.

The audience might not agree with the visions presented by the researcher, but they will have to judge the visions rather than be affected by the sole vision presented by the opponent. And that may enable a far more intelligent debate about nanotechnology funding, usage and risks.

The example of how nanotechnology has developed should be examined closely. It is an enormously promising technology even in its weaker forms, and its brief and well-documented history from bright idea through popularisation, rejection as science fiction and acceptance to its current transformation into surface and materials science is instructive if we want to keep other technologies from being foreclosed into something far more limited than they could be. Do we wish to see cognitive enhancement turned into mere help for the learning disabled, or interactive telecommunications into yet another broadcast medium with heavy digital rights control? The way to allow the fields to fulfill their potential is to support a broad, free exploration of them. And that requires not just a close watch on the funding agencies and the rhetoric of how the fields are 'sold', but stimulating visionary debate, low thresholds to entry and exploration, and alternative funding structures.

Posted by Anders at 02:32 PM | Comments (0)

July 15, 2004

Too Simple to be Safe

The Singularity Institute has started a site exploring the problems with Isaac Asimov's three laws of robotics, 3 Laws Unsafe.

That the three laws are insufficient to guarantee robot behavior should be obvious to anybody who has read Asimov's stories. Usually the main plot is about misbehaving robots, and the mystery is why - rather than being "whodunnits" they are "howthinkits". But how complex do the rules of robot behavior have to be before we can consider them safe?

Is the 3 Laws Unsafe site necessary? To some extent it is just well-timed advertising, with the film based on I, Robot arriving. The real goal is not to push the thesis that the 3 laws are bad, but to draw a wider public into discussions of AI ethics. That is very laudable in itself. But I think one should not underestimate the misconceptions about AI programming, and pointing out the complex problems of simple solutions may be necessary.

To many people "computers just do what they are told" is an article of faith. As personal computers become more widespread the fallacy of this statement becomes apparent: computers crash, refuse to print documents, update their software and occasionally exhibit remarkably strange behavior. Often these actions are the result of complex interactions between pre-existing software modules, where human ingenuity couldn't predict the outcome of the particular combination in a particular computer at a given time. But despite this, people often seem to think that creating artificial intelligence would produce something that one could give a set of rules and have it follow slavishly.

In Asimov's stories this is the case, and the result is chaos anyway. In reality things will be even worse: besides ambiguities in the rules and how they are to be applied, there will be errors in cognition, perception and execution of actions. And of course low-level crosstalk and software bugs too. And this is just the case of an ordinarily intelligent machine. The self-enhancing AIs envisioned by the Singularity Institute will have far more degrees of freedom to train against reality (and hence more ways to get things wrong), and the number of potential non-obvious interactions increases exponentially.

So what to do about it? The idea I really like about friendly AI is the attempt to formulate a goal architecture that is robust. If something goes wrong the system tries to adjust itself to make things better. It is no guarantee that it works, but experience shows that some systems are far less brittle than others.

Real intelligence exhibits several important traits: it interacts with its world, it is able to learn new behaviors (or unlearn old) and it can solve new problems using earlier information, heuristics and strategies. The learning aspect enables us to speak about the ethics of an AI program: how does it live up to its own goals, the goals of others and perhaps universal virtues. Asimovian AI was limited to interaction and problem-solving in most situations involving the three laws. It was in a very strong sense amoral: it could not act "immorally" and was hence no better than the protagonist of A Clockwork Orange after being treated. A learning agent on the other hand might have less strong barriers against dangerous behaviors, but would be able to learn to act well (under the right circumstances) and generalize these experiences anew.

If the AIs communicate with each other they might even transfer these moral experiences, enabling AIs not exposed to the critical situations to handle them as they arise. We humans do it all the time through our books, films and stories: I may not have encountered a situation where I discover that my government is acting immorally and I have to choose between remaining comfortably silent or taking possibly illegal action to change things, but I have read numerous fictional and real versions of the scenario that have given me at least some crude moral training.

Learning is also the key to robustness. Software that adapts to an uncertain outer and inner environment is more likely to keep functioning when an error actually occurs (as witnessed by the resiliency of neural network architectures) than software built on fixed rules. To some extent this is again the difference between laws (fixed rules) and moral principles (goals).

But learning never occurs in a vacuum. The Bias-Variance Dilemma shows that any learning system has a tradeoff between being general (no bias) and requiring as little training as possible (low variance). A "pure AI" that has no preconceptions about anything would require a tremendous amount of training examples (upbringing) to become able to think usefully. A heavily biased AI with many built-in assumptions (reality is 3+1 dimensional, gravity exists, it is bad to bump into things and especially humans...) would need far less upbringing but would likely exhibit many strange or inflexible behaviors when the biases interacted. In many ways Asimovian AI is a pure AI with heavy "moral" biases (which is why learning/adaptation is so irrelevant to the intended use of the three laws).
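
The tradeoff is easy to demonstrate numerically; a toy sketch, fitting the same noisy data with a heavily biased model and a nearly bias-free one:

    import numpy as np

    rng = np.random.default_rng(3)

    def sample(n):
        """Noisy observations of an underlying sine-shaped 'world'."""
        x = rng.uniform(0, 1, n)
        return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)

    x_tr, y_tr = sample(15)    # a short "upbringing": few training examples
    x_te, y_te = sample(500)

    for deg in (1, 3, 12):
        coef = np.polyfit(x_tr, y_tr, deg)
        err = np.mean((np.polyval(coef, x_te) - y_te) ** 2)
        print(f"degree {deg:2d}: test error {err:.3f}")
    # Degree 1 (heavy bias) underfits; degree 12 (little bias, high variance)
    # fits the 15 examples but fails on new ones; degree 3 balances the two.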

Living beings have solved the bias-variance dilemma by cheating: we get a lot of pre-packaged biases that are the result of evolutionary learning. When the baby cries when it is hungry, it automatically signals the mother to come rather than having to learn what actions would produce relief from hunger. When the baby wrinkles its face at bitter tastes and enjoys sweetness, it uses a bias laid down by countless generations encountering often poisonous bitter alkaloids and energy-rich (and hence fitness-enhancing) sweet fruits. We benefit from the price paid by trillions of creatures that were selected away by evolution's ruthless hand.

A robot will likely benefit a bit from this too, as we humans try to act as its evolutionary past and throw in useful biases. But balancing the prior information with the ability to re-learn as conditions change is a challenge. It requires different levels of flexibility in different environments, and meta-flexibility to detect what kind of new environment one has entered and how to change the flexibility. It seems likely that it is not possible to find an optimal level of flexibility in general (as a proof sketch, consider that the environment might contain undecidable aspects that determine how fast it will change).

We humans have a range of flexibility both as individuals and as a species; we benefit from having at least some people more adaptable than others when things change. It might be the same among AIs: rather than seeking an optimal design and then copying it endlessly, we create a good design and a large number of slightly different variants. The next generation of AIs would be based on the most successful variants, as well as on the knowledge gained by the experiences of the AIs themselves. This approach enables AIs to develop along divergent tracks in different circumstances - the kind of intelligence and personality useful for programming is different from what is useful for an entertainer or diplomat.

But what about the guarantees of keeping these devices moral? The three laws promise guarantees but at most produce safety railings (which is nothing to sneeze at; even the above flexible AIs will likely have a few built-in limiters and biases of a similar kind - the fact that most humans are emotionally unable to kill other humans has not prevented some from doing it or training others to do it, but the overall effect is quite positive). Setting up a single master goal that is strongly linked to the core value system of the AI might be more robust to experience, reprogramming and accidents. But it would still be subject to the bias-variance dilemma, and the complexities of interpreting that goal might make it rather unstable in individual AIs. Having a surrounding "AI community" and a shared AI-human society moderates these instabilities - moral experiences and values are shared, webs of trust and trade integrate different kinds of agents, and a multitude of goals and styles of thinking co-exist. Rogue agents can be inhibited both by the interactions with the society and, in extreme cases, by the combined resources and coercive power of the others. While morality in the end resides in the individual actions of agents, it can be sustained by collective interaction.

This is the multi-layered approach to creating "safe" AI (and humans). At the bottom level some built-in biases and inhibitions. Above it goals and motivational structures that are basically "good" (it is an interesting subject for another essay to analyse how different motivation architectures affect ethics; c.f. Aristotle's ethics, the effect of temporal-difference learning in dopamine signals and naturalistic decision-making for some ideas). Above these goals are the experiences and schemas built by the agent, as well as what it has learned from others. Surrounding the agent is a social situation, further affecting its behavior even when it is rationally selfish by giving incentives and disincentives to certain actions. And finally there are society-level precautions against misbehavior.

This is far from the neatness of the three laws. It is a complex mess, with no guarantees on any level. But it is also a very resilient yet flexible mess: it won't break down if there is a problem on one level, and multi-level problems are less likely. If the situations change the participants can change.

But to most people this complexity is unappealing: give us the apparent certainty of the three laws! There is a strong tendency to distrust complex spontaneous orders (despite our own bodies and minds being examples!) and to prefer apparent simplicity. This is where I think the 3 Laws Unsafe site is necessary: to remind people that simplicity isn't to be trusted unconditionally, and to show the fascinating array of possibilities AI ethics can offer.

Posted by Anders at 03:51 PM | Comments (2)

July 13, 2004

Lilliputians, Microputians, Nanoputians, ...

Synthesis of Anthropomorphic Molecules: The NanoPutians (Stephanie H. Chanteau and James M. Tour, J. Org. Chem., 68 (23), 8750-8766, 2003)

Probably one of the cutest and most useless chemical syntheses I have ever encountered: the construction of human-like molecules. On the other hand, it is a good example of how the toolchest of organic chemistry allows the design of desired molecules with atomic precision (although in the bulk processes used in the paper, there is of course a lot of waste).

The synthesis builds the "body" from simpler chemicals, finally adding a "head" with a distinctive structure to make "nanoprofessionals" such as the nanochef or nanoscholar. Other additions enable the formation of chains holding hands, nanoballetdancers and binding to gold surfaces.

One of the authors has started an educational website, Nanokids, about chemistry and nanotechnology. Some interesting resources, but too much cuteness, flash and raytracing for my taste (still a good idea, though). Apropos other fun chemistry sites, I discovered the nanoputians via Molecules with Silly or Unusual Names - which is, just as it is labelled, mostly silly, but has some interesting random nuggets of information about everything from draculin and rudolphomycin to anammox biochemistry.

The paper claims "Beyond the molecular-sized domain there is no conceivable entity upon which to tailor architectures that could have programmed cohesive interactions between the individual building blocks". That is of course a gauntlet thrown at the atomic physicists to make an even smaller picoputian. Let's see if they can sculpt a character out of a Rydberg state, or make the strong nuclear force count for something!

Posted by Anders at 07:30 PM | Comments (18)

July 08, 2004

A Steak and Ancestor, Please

CNN.com - Restaurant offers DNA test for link to Genghis Khan - Jul 7, 2004

Egan's Mitochondrial Eve, anyone?

A wonderful example of how genetic testing is becoming commonplace and used for entirely different things than medicine.

Posted by Anders at 03:37 AM | Comments (9)

A Golden Dark Age of Vaccines

I blogged about Vaccines at the CNEhealth.org Blog.

The upshot of the story was the collision between reading this CNN story and reviews like this - the contrast is stark. On one hand we are seeing declines in vaccination not due to economics or lack of technology but to "well-disinformed parents". On the other hand it seems like vaccines are ready to take off and become a far wider range of therapeutic tools than just prophylaxis against epidemic diseases.


To extend my reasoning about the distrust of vaccines: The original fears about vaccination were partially about safety, but also included ethical opposition on the grounds that the process was inherently immoral, mixed species and changed the structure of creation. In many ways they mirror current fears of xenotransplantation, where the safety concerns are fuelled by deeper, less often expressed "philosophical unease" (or gut reactions) about the implications.

This is a common pattern in the slow transformation of biotechnology from perversion to practice (and maybe later religion); we see it in the current biotechnology debates, where the narrow discussions about safety, equity and ethics are fuelled by a deeper and broader conflict between differing views of nature, humanity and what they should become.

Later opposition was more about the right of an individual to choose whether to vaccinate or not, with a debate about the ethics of public health. Here the debate seems to have ended with a fairly broad consensus that the collective benefits of vaccination outweigh the individual's right to bodily self-determination. Most collectivist, utilitarian or duty-based moral systems were in agreement, with some libertarian opposition. But even from a libertarian standpoint it is not trivial to determine whether the expected harm to others due to not vaccinating oneself outweighs the small risks/costs/compulsion to oneself due to vaccination. Being myself of the opinion that an act of omission of help to others is not necessarily immoral and that the right to one's body ranks very high in the rights hierarchy, I think it would not be immoral to abstain from vaccination. After all, we do not forcibly confine people carrying colds to their homes, despite the fact that they do infect others and the infections contribute to overall mortality (see also this analysis). Vaccination might still be morally commendable and the rational thing to do out of self-interest. It is also potentially something that could be seen as part of a social contract: just as we relinquish certain rights to a government even in the minarchist case (i.e. the legal use of violence) in order to gain a useful social order, we could have a disease control clause in the social contract. But this presupposes (morally) explicit contracts that can be chosen or not, not the current coercive form.

Leaving people to decide whether to vaccinate clearly can lead to under- or un-vaccination. This can occur even in the completely rational and fully informed case; see Chris T. Bauch, Alison P. Galvani & David J. D. Earn, Group interest versus self-interest in smallpox vaccination policy, PNAS, September 2, 2003, 100(18), 10564-10567 for a game-theoretic analysis of voluntary vaccination reaching the conclusion that it would not reach the optimal level.
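
A toy version of that logic (an illustrative sketch, not the actual model in the paper): individuals keep vaccinating only while their residual infection risk exceeds the vaccine risk, so coverage stalls below the herd immunity threshold.

    # Infection risk falls linearly with coverage c, vanishing at herd immunity.
    c_crit = 0.9        # coverage needed for herd immunity (illustrative)
    risk_inf = 1e-3     # expected harm from infection at zero coverage
    risk_vac = 1e-4     # expected harm from the vaccine itself

    def infection_risk(c):
        return risk_inf * max(0.0, 1.0 - c / c_crit)

    # Self-interested equilibrium: vaccination stops where the residual
    # infection risk has dropped to the vaccine risk.
    c_eq = c_crit * (1.0 - risk_vac / risk_inf)
    assert abs(infection_risk(c_eq) - risk_vac) < 1e-12
    print(f"self-interested coverage {c_eq:.2f} vs. group optimum {c_crit:.2f}")
    # -> 0.81 vs. 0.90: fully rational self-interest still under-vaccinates.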

But I find the trend of skipping vaccinations due to vaccine scepticism more worrying. Rational non-vaccination can be handled through rational discussion, the construction of suitable institutions or incentives (what about my favorites, the insurance companies? one could price the risk from non-vaccination and have defectors pay it, while using insurance to compensate for the vaccine risks). Lack of information and misunderstandings can be handled through education. But resistance due to risk complacency brought about by a safe environment, risk aversion towards mediagenic risks and selective information gathering, powered by traces of an anti-technological, romantic perspective, cannot be handled this way. It would be tragic if the only way to reverse the trend were a widely publicized epidemic killing children (but it would likely work; maybe this is good material for a TV miniseries? For once a scare story could be based on something real). I think the only way of really getting anywhere with this is to re-establish a sense of "scientific belonging": to get people to really know how vaccines work, their real pros and cons, how their efficacy and safety are tested, how the current controversies are handled, and especially to show how we belong in the scientific universe. An enormously tall order, as usual (I had better go back to asking for Dyson Spheres instead). But helping establish at least some scientific literacy is a good start to getting people engaged in making actual risk assessments (e.g. "even if the sceptics are right and the MMR vaccine increases the risk of autism, is that risk increase worse than the risk increase to my child (and others) from measles, mumps and rubella?").

As always with me, the "answers" seem to be 1) we need to create flexible, free institutions that help us sustain the benefits of our civilisation without dangerous or immoral concentrations of power, and 2) we need to become more aware of how the world around us works. But even if this is just me hitting all problems that look like nails with my solution hammer, I doubt these solutions have any serious side-effects.

Posted by Anders at 01:26 AM | Comments (3)

July 02, 2004

Gotta Catch Them All

(From Waldemar Ingdahl): CDC disease trading cards. Why trade movie stars or butterflies when you can trade anthrax and cryptosporidiosis? Fun and education for the whole family!

Actually, this is probably a good idea. The explanations are simple and educational, giving straightforward tips on prevention and treatment. And if more kids learned just why they have to get their shots we could avoid risky silliness such as the MMR affair in Britain.

But why not go the whole way and make it a collectible card game à la Magic: The Gathering? Have cards with different levels of rarity, deadliness and infectiousness (maybe for game balance we need to reduce the power of the flu). Have the kids learn the difference between a well-balanced waterborne infection deck and decks built around AIDS-related immunocompromise. Is Ebola really that tough? Can you amass enough drug resistance cards to beat your opponent's wide-spectrum antibiotics - or will he play the rare and experimental DNA vaccine card?

Posted by Anders at 05:32 PM | Comments (9)

July 01, 2004

Neuroethics Stocks Are Up!

I blogged about the neuroethics boom at CNE Health.

In many ways I think the neuroethics debate is a step forward. People are starting to realize how profoundly neuroscience matters to many areas, and that things are happening at a fast pace. If one considers the enormous influence of Freud on the 20th century (for good and ill) it is clear that radical new perspectives on how we function, who we are and what we can do matter. Neuroethics also brings up the enhancement debate, which is great: it needs to be analysed, debated, extended and slowly, slowly turned from transhumanist speculation into mainstream policy. But we who consider many enhancements to be ethically acceptable had better participate at an early stage, lest Leon Kass and the others take over.

At the same time I worry that every new field is going to be saddled with its own -ethics: immunoethics, wireless ethics, space ethics, quantum ethics, you name it. Acting morally is necessary in all areas, but not all areas require their own ethics. There is a very real risk that we splinter the field of ethics into a myriad of subdisciplines with little contact with each other, each engaged in discovering or at least debating ethics as it relates to a particular subject but without any overall consistency. That many modern ethicists are far more interested in discussing than in taking a normative stand or suggesting general frameworks contributes to this.

What we need is a general perspective (or several) on what we want to achieve and become, and what we do not want to achieve and become. From such a perspective one can always go down and look at how it could be implemented within quantum ethics or the ethics of soil mechanics. One cannot start from them and go the other way around. Such general perspectives are not necessarily ethical systems per se, but rather goals and visions of what kind of future, humans, society and individual lives we want. It is up to us all to formulate them and try to see what fits together.

Posted by Anders at 01:12 PM | Comments (19)

Freeing the Mind

Neuroscience: Change of mind (Nature 430, 14 (01 July 2004)). There are also some links at Dr. Lythgoe's site.

A neuropsychology sunshine story about how a cerebral haemorrhage turned an ex-convict into an artist/poet. Goes to show that not all brain damage is entirely negative.

I'm reminded of the "gourmand syndrome", where a stroke or trauma in the right frontal region turns people into food hedonists. It seems similar to the above case in involving the frontal regions. The article describes how the artistic patient has trouble switching between mental tracks, a typical frontal symptom. There also seems to be an almost compulsive component, both in the artistic creation and in the enjoyment of food.

Maybe it is all just about lifting the prefrontal inhibition of behavior: we all have a potential artist or gourmet inside ourselves, but learned behavior patterns ("don't play! don't eat too much!") inhibit us. This suggests interesting drug targets. Rather than just trying to generally disinhibit the brain with alcohol, what about temporarily inhibiting the connections between the frontal lobes and (guessing wildly here) the basal ganglia-hypothalamic complex? It would likely have to be fairly localized to just one "drive" such as artistic impulses or food, which might be tricky to achieve chemically. But the rewards would likely be high: showing people what potential worlds and persons they carry around within themselves.

Posted by Anders at 12:55 PM | Comments (9)