Very cute. Outsourcing of officially unethical research? (some papers from the Brundin lab)
Of course, the Pentagon and other US agencies have always funded some research outside the US, so this project is not truly unique. But the controversial nature of the research raises some interesting issues.
The fear/hope that bans on research would make researchers move to less restrictive countries has in most cases not been borne out. Usually this is because less restrictive countries are less appealing places to do research: Haiti probably has very loose laws on biomedical experimentation, but the lab facilities and general environment do not attract many people. Even fairly well developed places like Russia have a hard time attracting researchers and research capital. It is very much in line with Richard Florida's model of creative clusters, with the added effect that moving a research career is hard and expensive.
But stem cell research is interesting in that the regulatory climate is so clearly different in different nations, making it possible to find research clusters that are able to do the research both practically and legally.
My prediction is that as more emerging economies outside "the old west" appear, we are going to see less and less of a bioethical consensus. This is going to produce more cases of this kind of outsourcing. The key is the existence of independent clusters with talent, technology and tolerance (for that particular issue, at least).
This is one reason why I hope we never get a more homogeneous world culture: there will always be a place for the heretics (and hereticorps) to go. With faster global communications, we better start producing more divergent culture too.
New Scientist recently reported that giving pregnant rats choline enhanced the memory performance of their offspring.
This is not new; studies have shown the beneficial effects of prenatal choline supplements before. The news is that the effect was apparently due to bigger and more active hippocampal cells.
The study is Dietary Prenatal Choline Supplementation Alters Postnatal Hippocampal Structure and Function (Qiang Li, Shirley Guo-Ross, Darrell V. Lewis, Dennis Turner, Aaron M. White, Wilkie A. Wilson and H. Scott Swartzwelder, J Neurophysiol 91: 1545-1555, 2004).
That issue also has one of the funniest editorial titles I've seen in a while: Homeland Defense Begins in Precentral Cortex, about sensorimotor integration for defensive behavior.
Earlier work on prenatal choline was mainly due to Warren H. Meck and his team:
Meck, W. H., Smith, R. A., Williams, C. L., Organizational Changes in Cholinergic Activity and Enhanced Visuospatial Memory as a Function of Choline Administered Prenatally or Postnatally or Both, Behavioral Neuroscience, 1989, 103:6, 1234--1241
Meck, W. H., Smith, R. A., Williams, C. L., Pre- and Postnatal Choline Supplementation Produces Long-Term Facilitation of Spatial Memory, Developmental Psychobiology, 1987, 21:4, 339--353
Garner, S. C., Mar, M.-H., Zeisel, S. H., Choline Distribution and Metabolism in Pregnant Rats and Fetuses Are Influenced by the Choline Content of the Maternal Diet, J. Nutrition, 1995, 125:11, 2851--2858
Meck, W. H., Williams, C. L., Perinatal choline supplementation increases the threshold for chunking in spatial memory, Neuroreport, 1997, 8:14, 3053--3059
Meck, W. H., Williams, C. L., Characterization of the facilitative effects of perinatal choline supplementation on timing and temporal memory, Neuroreport, 1997, 8:13, 2831--2835
It seems plausible this works in humans too. The interesting question this raises is of course whether having mothers eat "healthy" choline-rich food is ethically the same as using biotechnology to achieve bigger-brained children. Just as with playing Mozart to the foetus, it seems that the low-tech aspects of the treatment will make people accept it. But the arguments given by Fukuyama et al. against reproductive medicine, that it reduces human self-determination by manipulating one's ontogenesis, seem to hold here too. The child would be "doomed" to have better memory regardless of how much he or she subscribed to the idea that ignorance is bliss.
Of course, one might argue that this simply enhances a natural biological health mechanism, but if one accepts that, then one ought to accept activating the same mechanism through any other means, as well as activating every other such health mechanism, which would presumably produce a child far healthier than normal children would ever be by chance.
I think perinatal functional food might become a big thing. Adding the right fatty acids to breast milk substitutes has been shown to enhance cognitive function in infants, and people are exploring probiotics for infants and various supplements for the mother to inhibit developmental disorders. In many ways this might be the soft sell of human enhancement. It is natural enough to quiet even the harshest critic of human enhancement, while being both simple and under the control of parents.
Peter Sheridan Dodds, Duncan J. Watts and Charles F. Sabel, Information exchange and the robustness of organizational networks, PNAS October 14, 2003 vol. 100 no. 21 12516-12521
A fun paper on how to create robust networks, with many applications for organisations. The question is how to organise links in the network so that it can both withstand the loss of key nodes/people and keep them from getting overloaded. The best practical solution appears to be multiscale networks that mix hierarchy with cross-division interconnections.
How should an organisation be organised in order to work at its best? Usually communication is the bottleneck. If everybody communicates with everybody else, the time spent on communicating soon overtakes the time spent on working (since the number of pairs of people grows quadratically while the amount of potential work only grows linearly with the number of members). If an executive manages the communication it becomes more efficient (he relays information to and from everybody else, who can spend more time working). But now the executive is a bottleneck: if there are too many people under him he will be overwhelmed by the information flow. Adding higher levels partially solves the problem, allowing huge organisations. But there is a loss of robustness too: if the boss is sick or away, communications between different parts of the organisation break down. In the everybody-talks-to-everybody case there is no such risk.
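The scaling argument can be seen in a back-of-the-envelope Python sketch (the headcounts are just illustrative numbers):

```python
# Communication pairs in a flat all-to-all organisation grow
# quadratically with headcount, while routing everything through one
# executive keeps the number of links linear (at the cost of a bottleneck).
def pairs_all_to_all(n: int) -> int:
    return n * (n - 1) // 2   # n choose 2: every pair must talk

def links_via_executive(n: int) -> int:
    return n - 1              # the executive talks to everyone else

for n in (10, 100, 1000):
    print(f"{n:5d} members: {pairs_all_to_all(n):7d} pairs vs "
          f"{links_via_executive(n):4d} links")
# 100 members already need 4950 pairwise channels, but only 99 links
# through an executive.
```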
The paper builds a model of networks by starting with a hierarchical network and adding extra links. The probability of an extra link between two nodes depends on their organisational distance (how far apart they are on the hierarchical org chart) and their depth in the organisation. The results range (for different parameters) from random networks, local teams (people talk within the team), random interdivisional networks (people have extra links across divisions but few within them) and core-periphery networks (the top level has all the extra connections, while the rest is strictly hierarchical) to a multiscale mixture for intermediate parameters.
Then they tested the probability of overloading: information is sent between random pairs of nodes along the network and the total traffic across the network is calculated. The congestion centrality is the amount of traffic at the most overworked node in the network. It turns out that there is a very narrow minimum corresponding to a kind of core-periphery structure with the best performance (the higher-ups are good at distributing information among themselves, making them unlikely to be overloaded) and a broad minimum for the multiscale network. So while the organisation might be efficient in the core-periphery architecture, random changes in connectivity wreck it easily. Hence it is not very robust.
Even more importantly, if highly connected nodes are deleted, the core-periphery and local-team networks fare very poorly. Random and interdivisional networks survive well, and the multiscale networks almost equally well. So the multiscale networks can survive both loss of nodes and overloading.
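For the curious, here is a rough toy version of the setup in Python. The tree size, the link-decay rule and the message counts are my own illustrative choices, not the parameters from the Dodds, Watts and Sabel paper:

```python
import random
from collections import defaultdict, deque

random.seed(0)
BRANCHING, DEPTH = 3, 4  # a 121-node strict hierarchy

def parent(i):
    # complete ternary tree labeled level by level: parent of node i
    return (i - 1) // BRANCHING

def build_hierarchy():
    n = sum(BRANCHING ** d for d in range(DEPTH + 1))
    return list(range(n)), {(parent(i), i) for i in range(1, n)}

def org_distance(a, b):
    # steps up the org chart until the two chains of command meet
    d = 0
    while a != b:
        if a > b:
            a = parent(a)
        else:
            b = parent(b)
        d += 1
    return d

def add_cross_links(nodes, edges, n_extra=60, decay=0.5):
    # extra links are more likely between organisationally close nodes
    edges = set(edges)
    while n_extra > 0:
        a, b = random.sample(nodes, 2)
        e = (min(a, b), max(a, b))
        if e not in edges and random.random() < decay ** org_distance(a, b):
            edges.add(e)
            n_extra -= 1
    return edges

def adjacency(edges):
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def max_load(nodes, adj, n_messages=1000):
    # congestion centrality: traffic at the busiest node when messages
    # travel between random pairs along BFS shortest paths
    load = defaultdict(int)
    for _ in range(n_messages):
        s, t = random.sample(nodes, 2)
        prev, q = {s: None}, deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v in adj[u]:
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        u = t
        while u is not None:      # walk the path back, counting traffic
            load[u] += 1
            u = prev[u]
    return max(load.values())

def largest_component(nodes, adj, removed):
    # size of the biggest surviving cluster after deleting one node
    seen, best = {removed}, 0
    for start in nodes:
        if start in seen:
            continue
        comp, q = {start}, deque([start])
        while q:
            for v in adj[q.popleft()]:
                if v not in seen and v not in comp:
                    comp.add(v)
                    q.append(v)
        seen |= comp
        best = max(best, len(comp))
    return best

nodes, tree = build_hierarchy()
multi = add_cross_links(nodes, tree)

print("busiest node:", max_load(nodes, adjacency(tree)), "(pure hierarchy) vs",
      max_load(nodes, adjacency(multi)), "(with cross-links)")
print("after removing the boss:", largest_component(nodes, adjacency(tree), 0),
      "vs", largest_component(nodes, adjacency(multi), 0), "nodes still connected")
```

In the pure hierarchy the top node carries most of the traffic, and removing it shatters the organisation into three equal subtrees of 40 nodes; with the cross-links both the congestion and the fragmentation are much milder, which is the multiscale effect in miniature.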
The moral seems to be that highly optimized networks are not very trustworthy in a messy reality where people meeting in the smoking room form important informal information conduits, executives get random colds and section 9 grows a bit beyond what is optimal for the overworked administrator far up in the hierarchy. Having interconnections across divisions, between levels and within teams becomes important in the core part of the organisation and as it grows they need to grow with it.
It is interesting to consider the similarities to other networks, like the brain. Here the peripheral systems that bring information in and out, like the primary cortices, are less densely interconnected than the higher association cortices. See for example
From sensation to cognition by M. M. Mesulam (Brain, Vol. 121, Issue 6, 1013-1052). Of course, in the brain everything is connected to everything else, but it is not implausible to think that the strength of the interconnections follows a pattern reminiscent of the paper by Dodds, Watts and Sabel.
I just realized I spent over two hours watching an interface (eMule, if anybody is interested).
Did I need to? No, the program did what it was supposed to. There would have been no need for me to intervene directly even if something had gone wrong. Did it tell me something new? Not really; most of the information was only about a current state that would soon vanish. Was it interesting? Not really. And yet I sat there watching numbers update themselves, progress bars advancing, twiddling with ways of ordering the displays and so on. I might be an information addict, but I think this is a general problem: many interfaces eat our attention.
The brain has an orientation system that directs our gaze and attention at things that are happening. This system is not under voluntary control (unless you are a Zen master), making anything that moves in the periphery of our vision or stands out from the surroundings a strong attractor... and hence a distractor if you were trying to do something else.
Interfaces with moving information (like progress bars, animated icons, numbers that update, lists where items change position or animated music visualisations) direct our attention to the changes. Try to do something else on the desktop and attention will be pulled back to the program even by movement in a partially uncovered window. This is nothing truly new; the plague of animated gifs on webpages demonstrates the problem well.
What I think is worrying is that as our software becomes more autonomous it is going to be doing a lot more in realtime, and the designers assume the users want this information. This is often true, since we humans dislike devices we cannot control, or at least cannot get a sense of control over by watching them act. So we get displays of information that update incessantly, eating our precious attention. Moving attention away from them is hard since it is constantly re-attracted, especially if there are things to "do" with the display. Rather than create something, or even consume high-quality information, we sit there consuming junk information.
What about turning them off or iconifying them? In some cases it is impossible (or at least not the default), like the little animated net traffic graph from my firewall in the system tray. But even when a program is hidden we are aware of its existence and of the constant information we are now not seeing. Maybe I'm too easily distracted, but the idea of something showing information that I am not seeing makes me want to occasionally seek it out and check on what is going on... and then I get stuck watching it.
P2P programs might be even more dangerous: they reward us by occasionally giving us files to open, presumably with contents we like. So constant attention is rewarded by increased expectations ("only 10 seconds left... 9... 8....") that are released by the arrival of the file and then a quick perusal. This is a recipe for addiction.
So, what to do? In many ways windowing systems are bad for attention since they promote multitasking between different contexts - each switch between mental tasks or contexts costs us attention (and risks us being distracted into a third task), just as task-switching is expensive for operating systems. Maybe we should have a grand iconoclasm and defenestration and go back to the black and white purity of the shell. But even there the aptly named curses package and other animated progress information lurk. Besides, there are plenty of necessary uses of graphical user interfaces (like in computer graphics). Babies and bathwater.
A simpler approach is just to promote awareness of the problem of information eaters. Just as all good web-design courses tell people about the problem of animated gifs, all interface design courses ought to make people aware of the problem of attention-eating interfaces. Most progress bars are unnecessary or take too much space; a tiny indicator that things haven't locked up is enough (is it possible to make such an indicator non-animated? Maybe a slowly changing colour?). Interfaces shouldn't show too much junk information by default - let the user request it.
We also need to figure out how to create more discreet technology. When everything is autonomous and contains state information, we need to deliberately make it discreet in order not to overwhelm ourselves. In many ways the sound and vibrations of a hard drive are ideal: usually too low to be troublesome, but giving information about system state that we tend to pick up on. The same ought to go for our applications, so that we can usually ignore them but also notice when something unusual is going on at our firewall or that our web trader is getting frustrated.
In the end, it is all about the attention economy. Attention is valuable. If you squander it as a person you lose out on life and life quality. If you squander it for others through spam or bad interfaces, then you are a kind of vandal.
Just a family of fractals I spent the week playing around with. Why settle for polynomial iteration when rational functions are more meromorphic?
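For anyone who wants to play along, something like the following escape-time sketch works. The particular rational family z -> z^2 + c + b/z^2 is just one arbitrary choice for illustration, not necessarily the one behind these images:

```python
# Escape-time iteration for a rational map instead of the usual
# polynomial z -> z^2 + c. The perturbation b/z^2 adds a pole at the
# origin, which gives the fractal its extra structure.
def escape_time(z, c, b=0.001, max_iter=40, bailout=100.0):
    for i in range(max_iter):
        if abs(z) > bailout:
            return i            # escaped after i iterations
        if abs(z) < 1e-12:      # too close to the pole at z = 0
            return max_iter
        z = z * z + c + b / (z * z)
    return max_iter

# Coarse ASCII render of the c-plane, Mandelbrot-style; we start the
# iteration slightly off the origin since z = 0 is a pole of the map.
chars = " .:-=+*#%@"
for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(72):
        x = -2.2 + col * 0.05
        t = escape_time(complex(0.1, 0.0), complex(x, y))
        line += chars[min(t * len(chars) // 41, len(chars) - 1)]
    print(line)
```

Swapping in other rational functions (Newton's method applied to a polynomial is a classic choice) gives quite different families.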
Cardiovascular fitness, cortical plasticity, and aging by Stanley J. Colcombe, Arthur F. Kramer, Kirk I. Erickson, Paige Scalf, Edward McAuley, Neal J. Cohen, Andrew Webb, Gerry J. Jerome, David X. Marquez, and Steriani Elavsky
That ill health is bad for cognition is fairly obvious, but does good health help thinking? The answer seems to be a fairly strong 'yes'. Animal experiments demonstrate that aerobic training improves blood supply to the brain, synaptic growth and even the emergence of new neurons. Trained animals have more adaptable brains that learn better, especially in old age. This paper demonstrates that the same is true for humans.
The researchers first tested older people with a reaction-time/attention task, and found that people with better cardiovascular fitness did better on the task. Then they divided the test subjects into two groups, one that got aerobic exercise and one that just got stretching and toning. It turned out that the exercised people did 11% better on the task after six months, compared to just 2% better for the stretching people.
In addition, there were some interesting changes in brain activation. Fit or trained people showed higher activations in the prefrontal areas responsible for preventing distraction and in the parietal regions involved in hand-eye-space integration. There were also effects in the anterior cingulate cortex, the link between prefrontal thinking and the attentional-emotional systems of the brain.
Most likely these effects are mediated through growth factors (and perhaps a bit more blood and oxygen). Insulin-like growth factor (IGF-1) is one such potential factor. It has neuroprotective effects, promotes the growth of new cells and appears to reduce the amyloid-beta protein involved in Alzheimer's disease. It has also been successfully inserted into mouse muscles using gene therapy to prevent age-related muscle loss and improve healing. Maybe it is also a good idea against age-related mental loss.