I think Florida's analysis is convincing (and deeply worrying). I only have to observe the slow shift in my own postdoc plans to be convinced that the US is making itself less appealing for creatives to immigrate to. But he also claims Scandinavia is getting ahead in creativity. Is this true? Some thoughts on why similar but different problems might limit Sweden.
Florida's core idea is that economic development is increasingly driven by "the creative class", people whose work is based on their own creativity, initiative and skill. Places that attract them will grow and prosper, and in turn become even more attractive. There are of course plenty of spill-over effects on the rest of the economy and other locations, but the movement of the relatively mobile creatives is a key social force. The key according to Florida is the combination of talent, tolerance and technology. Sweden clearly has all three. But why isn't it booming? Sure, it is not doing badly, but it is hardly a place like Austin or Amsterdam. Here are some of the likely causes; in short, it is all about structure. Without the three 'T's it is hard to get the creative feedback going, but the feedback can also be inhibited by other factors.
Stockholm is an obvious creative cluster in Sweden, the only real metropolitan area (OK, I'm a Stockholmer and biased). There are some other areas too (like the Öresund region in southern Sweden), but given the small population of Sweden it is hard to get enough people to set off the creative feedback loop. You cannot spread them too widely, so they tend to end up in the few centers. So a simple cause is just lack of people.
However, it is notable that people are relatively unwilling to move around in Sweden. While creatives are likely more mobile, I have a distinct impression that quite a few of them do not leave their home regions. This means that they do not contribute to, or gain from, the creative feedback of a cluster. Quite a few resources are wasted here. Politicians of course want to keep voters in their districts happy, so making sure nobody needs to leave to find work has been a powerful political force over the last decades. This of course compounds the problem, as the atmosphere of "jag flytt' int" ("won't move") is reflected back as policy making it possible to stay even when economics says differently, and produces a lot of local projects that tie up the creatives in less-than-optimal structures. Tax transfers from the apparently undeserving metropolitan creative regions to the rest of the country add another burden (as well as a disincentive to go to Stockholm).
Even worse, horizontal career mobility in Sweden is also relatively weak. Creatives are often stuck within the same few major companies. This suggests that just as there are underutilized creatives in the non-metropolitan regions, there are also plenty of underutilized creatives in the Swedish companies. This is made worse by a government policy that benefits big companies (low taxes, a government that likes you) but blocks small companies and entrepreneurs (much administrative overhead, being treated by the government as at best an "alternate lifestyle" and more often as potential economic criminals). The creatives can start their own firms, but it is much hard (creativity-sapping) work, and the high wages even for non-creative support jobs like secretaries make it expensive to hire people to deal with the overhead.
Start-ups doing something tangible and old-industrial get far better support than these new, strange, unclassifiable ideas. Swedish industry and government understand steelworks, machine workshops, call centers and mobile phones. They do not understand think tanks, lifestyle medicine or the Internet. There is a multitude of programs to help people start their own companies, but these are of course aimed at old-style companies - and especially in the rural regions.
Florida is keen on universities, and lauds the idea of opening more universities in the less creative areas. From his perspective Sweden would be marvellous, since the government has started many new universities (and turned university colleges into formal universities) over the last decade. More people than ever attend tertiary education.
But there is a problem here. Florida sees universities as places where the creative feedback process starts, where people gain the values of the creative class that enable them to link with others later on, and where new creative clusters can emerge. But just as he describes with Pittsburgh in his book, it is not enough to drop even a good university into a small town and expect it to blossom. Many of these new universities are just as ineffectual in turning their regions around to the creative economy as Carnegie Mellon apparently has been in Pittsburgh. There are plenty of company clusters around Swedish universities, but far too many are dependent on continuing outside support rather than making themselves profitable.
But worse, the move towards mass tertiary education may be diluting the creativity-fostering effect of higher education. Quite a few of the people attending these days are there simply so they won't show up in unemployment statistics. As long as they study they get education grants, and the government can boast about a high degree of education and low unemployment. But university funding (which is, surprise, from the government) is dependent on the number of students who pass exams. Having lots of students who all pass is far better than having fewer students of whom some fail. While people at universities are not willingly or deliberately lowering standards, the funding system rewards them for doing so. I have seen it for myself, and it is not good for one's sense of academic honesty.
And there is the final problem with this system. I have a strong impression that quite a few of the students go through the system without acquiring creative values. They have the skills, but they do not see themselves as creative agents able and willing to shape their own future. They become well adjusted for a job in a big company, but not likely to create something new. This weakening of the enculturation may actually undermine the role of universities in the long run, as they become less and less creative cores.
One can of course blame many factors here, but I think the criticism Florida makes of the US government and political parties is (in a slightly updated form) relevant for Sweden too. The political class does not understand the creative class and does not promote it. The division between high- and low-creative areas leads to political decisions that further limit the creatives.
Thankfully we can all go to Lausanne, Amsterdam or Wellington.
A very nice biotech application: a plant that changes color in the presence of nitrogen dioxide, marking where mines are buried in the soil. The plant, the beloved Arabidopsis thaliana, has been modified by Aresa Biodetection. Since the plant can be made male-sterile, or its reproduction limited to the presence of a growth hormone, concerns about spreading can be ameliorated. But is that the right solution? Maybe we should allow it to spread wildly instead.
The careful approach of first clearing the land, then sowing the plant, waiting, and then removing the mines and planting something else, might work where the mine density is fairly high and doing this kind of clearing has few other effects. But in many places clearing the land would cause severe erosion, and if the mine density is low it would be a very expensive way of finding them (although likely better than plenty of other high-tech solutions, and of course safer than having people poke with sticks). The method is not presented as a panacea, and it isn't.
But what if modified Arabidopsis (that is also clearly visible as modified, e.g. by leaf shape) is simply spread and allowed to grow freely? That would be an extremely cost-effective way of finding and marking those truly unexpected mines.
The ecological risk of the change appears low. Most likely the normal strain has an advantage over the modified strain, since it adapts to stress by changing color (an evolved response that presumably is an advantage) while the modified strain won't do so except near mines. And if other species were to pick up the mine-detecting effect, it would actually extend the benefit. Anthocyanins are even antioxidants, so it might be a good thing if they get into food :-)
Of course, the political climate in the West is likely mostly against this. But if the choice is between a potential, vague and likely very small ecological risk and the real and serious effect of landmines, the only thing the precautionary principle tells us is to add safeguards to the modified plant, not to avoid spreading it (making the plant extra sensitive to herbicides might be prudent). Hypothetical risks cannot trump real risks. Even if the plant got out of hand and spread wildly, it would be very unlikely to cause more damage than 26,000 people killed or injured per year, the cruel cost of mines. Those holding the bioconservative view that nature should not be tampered with under any circumstance need to explain how the tampering done by slowly decaying landmines (not to mention their human cost) is less than a change in the coloration behavior of a plant. Again, are these spiritual or aesthetic costs greater than very real deaths and dismemberments?
There are many more likely practical showstoppers - can the seeds be produced cheaply, will the plant thrive in affected areas, can people reliably use it to find mines and so on. And in many situations other methods are still superior. In fact, I wouldn't be surprised if mine-detecting plants will never be used, yet another charming idea left on the drawing board (or rather, in the seed bank). But I think we should carefully consider one day releasing this kind of safeguard plants deliberately into the environment. If our environment could clearly signal pollution or danger it would be far easier to protect - and it would protect us better too.
Fluctuations in Network Dynamics, M. Argollo de Menezes and A.-L. Barabási, Phys. Rev. Lett. 92, 028701 (2004)
Traffic across a complex network, be it packets across the Internet, signals in a microchip or water through a river network, fluctuates. This paper shows that there is an apparently general pattern to the size of fluctuations compared to the normal flow across a place in the network. Either the fluctuations scale proportionally to the normal flow (examples: the WWW, rivers and highway traffic), or they scale as the square root of the normal flow (Internet traffic, microchip signals).
The paper makes a nice argument for why this is to be expected. In short, networks where the activity fluctuations are driven by internal randomness have fluctuations that are square-root sized. Networks where external factors are important (people rushing to news websites when a disaster happens, rivers receiving rain) have fluctuations proportional to the normal flow. In principle there could be intermediate forms too, especially for smaller networks.
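The two scaling regimes are easy to reproduce in a toy model. Below is a minimal Python sketch (my own illustration, not code from the paper): the "internal" flux through a node is a sum of many independent unit events, while the "external" flux is a whole flow modulated by a global demand factor.

```python
import random
import statistics

random.seed(0)

def internal_flux(mean_flow):
    # Internal randomness: the flux is a sum of many independent unit
    # "packets", so its standard deviation grows like sqrt(mean_flow).
    n = 5 * mean_flow
    return sum(random.random() < mean_flow / n for _ in range(n))

def external_flux(mean_flow):
    # External driving: a global demand factor multiplies the whole flow,
    # so the standard deviation grows proportionally to mean_flow.
    return mean_flow * random.uniform(0.5, 1.5)

def sigma(flux, mean_flow, weeks=1500):
    # Standard deviation of the weekly flux through one node
    return statistics.pstdev([flux(mean_flow) for _ in range(weeks)])

for f in (25, 100, 400):
    print(f, round(sigma(internal_flux, f), 1), round(sigma(external_flux, f), 1))
```

Going from mean flow 25 to 400 (a factor 16), the internal fluctuations grow roughly 4-fold while the external ones grow roughly 16-fold, matching the two classes in the paper.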
If the argument holds, it has some interesting consequences. Quite often bad things happen when a fluctuation occurs that is large compared to the normal flow: Internet routers clog, rivers flood, traffic jams form. The max capacity of nodes is often proportional to their normal capacity.
Networks with square root fluctuations have the nice property that the fluctuations get smaller compared to the normal flow (and capacity) for more loaded nodes. Hence it is unlikely that they get overloaded. Good news for the Internet. But networks with fluctuations proportional to normal flow have a fixed risk (if capacity is proportional to the normal flow) per node of failing. Hence we should expect overloading in the WWW and traffic jams here and there.
The risk of failure can be decreased by adding more capacity, but that costs. Cost concerns will tend to reduce capacity until the cost of failures equals the marginal cost of capacity (and in the real world, when a major disaster has not happened yet, usually below this). I wouldn't be surprised if this led directly to a power-law distribution of failures.
My guess is that some networks can be expected to show unexpected fluctuations that cause trouble, even when they function optimally. The systems to look out for are those linked to a changeable, random environment. That is, practically all important and interesting systems.
I have followed the Israeli ambassador art vandalism affair with some interest. The affair has many similarities with conceptual art, and might be a better artwork than the installation that caused the furor.
Some years ago the celebrated Swedish artist Dan Wolgers stole two benches from the Liljevalchs art museum. He had been asked to participate in an exhibition in 1993, intended to have artists depict their view of human nature. He stole two benches from the exhibition hall and sold them - that was his artwork. It was probably the most debated, most significant artwork in Sweden of the 90's. Everybody was debating whether it was art (and good or bad art). His earlier tricks of hiring an ad agency to make one of his exhibitions, or putting his own phone number as an artwork in the Swedish phonebook, had earned some remarks, but this made the recondite intellectual debate on what should be called art and the limits of conceptual art spread everywhere in society. In the end he was fined (he also sold the fine at a profit - who said crime, and art, doesn't pay?).
I get the same feelings from the "Snow White affair". At first I thought the ambassador had just drunk too much at the reception and become sincerely angry - an amusing faux pas, but little more. But then he claimed that he knew what he was doing, and senior Israeli government officials came out in support. Now things began to turn interesting.
The artwork itself is, as concept art goes, not bad from what I have seen. A basin of red water, a floating sailing boat with the picture of the suicide bomber. Some Bach music, and (this is important) a text that links together the story of Snow White with the terrorist. Compared to much other concept art it is quite aesthetic in the contrast of red and white, the fixed water and the floating boat (although the recent cold has apparently frozen it in place, maybe another metaphor for the Mid-East conflict?). And compared to nearly all other political art it is subtle. Hardly a call for blowing somebody up (especially given the text), unless it is because "the red is beautiful against the white". And as antisemitism goes, a boat with a picture of prime minister Ariel Sharon sailing on a sea of blood would be far more likely to be interpreted as negative against Israel or Jews.
But just like those stolen benches, the real interest isn't in the object, it is in the happening that is occurring right now. People are rushing to the museum to see the artwork, be interviewed in front of it, criticise it, vandalize it (one threw a picture of Mijailo Mijailovic into it) or even attack the curator. And of course to see all of the above. But that is just the local part of this active artwork. The real beauty occurs in the diplomatic sphere.
Dagens Nyheter suggested an analysis that made the political pieces fall into place for me. Why would an ambassador attack art? Due to uncontrollable rage? Given that Zvi Mazel has been ambassador in Egypt, he is used to far more critical (not to say hostile) treatments of Israel and Jews. But right now Israel is under growing international criticism, and Europe has become interested in negotiating in the Mid-East situation. This is not something Israel wants (or rather, not something the Likud government wants). It is responding by counterattacking western Europe for not doing anything about the growing antisemitism, hoping to reduce the credibility of all EU governments in dealing with the Middle East. But Sweden has been unusually pro-Israel due to the interest Göran Persson has taken in various holocaust memorial and prevention projects (despite the otherwise strong Swedish left support for the Palestinians). An art scandal is perfect: not something that will cause lasting problems, but just the right amount of bad feeling at the right moment to keep another part of the EU from meddling. This makes a kind of sense, although it is the kind of sense one expects in an Illuminati game.
The part that actually makes this artwork interesting and interactive is the media. I watched how the news climbed on the news sites in Sweden across the night and Saturday. Then it began to pop up in Google News, climbing in the World section until it got to the first page and finally climbed to top news a few times. It was like watching a brave snail racing to the top. And the real fun was of course to read the different takes on the story.
At first, everybody uses the same Reuters and AP sources and prints identical articles. But then things diverge, and everybody starts to give the story their own slant.
The Israeli press had a field day with the perfidious Swedes and all their antisemitic plotting. One paper reported that the Israeli embassy was being forced out (it is located in the middle of an otherwise normal affluent mixed office/apartment building, and presumably the landlord might be a bit nervous and might not want to renew the contract).
Then there is of course the Tensta angle. According to one Israeli newspaper, those antisemitic Swedish authorities had banned another Israeli artist, Amit Goren, from participating in the Making Differences exhibition and were now relenting under international pressure. The real reason his artwork was not originally shown was a guest performance by the board of the Tensta art hall where he was intended to be shown. Tensta is a less fashionable suburb of Stockholm, but had a local art hall led by the ambitious Gregor Wroblewski, who actually managed to put it on the map. Unfortunately the board wasn't quite as daring and had problems with him, so they forced him to quit. Much chaos ensued, with Wroblewski occupying the hall, refusing to give up the keys and denouncing the board as antisemites, local people organizing themselves in defense of their art hall against the art council, and politicians making confused statements. I must admit I had a hard time keeping track of this little affair, but now it has joined the big one. Points for style.
The BBC had a British deadpan "You are ruining our party, Ambassador" headline. One of my favorites was the obviously twice (or more) translated Turkish paper that claimed a painting had been attacked.
But the final piece, the one that turned this from merely diplomatic irony into true satire, was that Dror Feiler has been accused by a music company of copyright infringement. Apparently he used a Bach recording (downloaded from the net) that may have been copyrighted. And here the artist immediately relented and switched to another recording - angering Israel is safe, but woe to him who breaks copyright law.
In the end, what do we have here? The stolen benches were on the first level a statement about the greed of humans, on the second level a media circus (and a play on media circuses) and on the third level a continuation of the "what is art" discourse in public, linking it to the goal of getting the public involved with art.
But that was a partially deliberate structure. The Snow White piece was on the first level an attempt to understand what drives a suicide bomber. On the second, a happening arranged by the ambassador (as somebody at the reception remarked, "first I thought the demolition was part of the piece"), but actually on the third level a political/diplomatic game. But it also has a fourth level of media coverage, where the original story explodes outwards, mutating to fit whatever agendas are around, to return as a chaotic urge to see, destroy (=participate) and debate. There is a beautiful feedback here, with the contrasts of blood/purity, serious diplomacy/rage, illuminated plotting and ridiculous maneuvering as driving symbols.
In the end, the art attack might be a cynical attack on free expression for murky political goals. But it might ironically be good art itself. In many ways it is far more interesting than the conference being held right now, a conference on preventing genocide where no genocide in which the major participating nations were involved (like Armenia, Chechnya, the Mid-East situation etc.) is to be discussed. The conference is a planned event with no unexpected moves. It could probably be criticised as "bourgeois art". And while the art vandalism scandal certainly had elements of planning, the different plans collided in an interesting and creative explosion.
I had the chance to become better acquainted with rhinoviruses and streptococci this week, i.e. I got a cold. Very annoying as it kept me from important work by making me more distractible than usual. Which includes analysing the game theory of the common cold, work and epidemics. In fact, I ended up doing a bit of evolutionary game theory while my immune system was doing its duty in its evolutionary game. Are discounts and fees the cure for the common cold?
In an ideal world everybody who had a cold would stay at home, keeping away from everybody else. This would reduce the spread of infection and make people in general healthier. In reality many people go to work anyway, work at somewhat reduced capacity (but earn money, keep up contacts etc) and increase the risk for everybody else.
This can be analysed as a mathematical game (for those of the readers who can't stand math, jump directly to policy implications).
The simplest game might be between me and 'everyone else'. We can both choose between staying at home when sick or going to work. Let's say the basic probability of getting a cold is P (about 3/52 per week for adults), and this is increased by alpha (a measure of infectiousness) if the other player goes to work sick. Each week of healthy work gives utility 1 (wages, companionship etc), a sick week at work produces half of that (due to unavoidable absence when too sick etc) minus a disutility epsilon due to the sheer nastiness of being sick. If I stay at home when sick I get just -epsilon.
The utilities for me look like this:

|      |Others stay|Others go|
|I stay|(1-P) - epsilon*P|(1-P-alpha) - epsilon*(P+alpha)|
|I go|(1-P) + (0.5-epsilon)*P|(1-P-alpha) + (0.5-epsilon)*(P+alpha)|
For the other side it is identical, with the roles reversed. So if we say that epsilon is 1 (having a cold is bad) and alpha is 10% of P (=3/52), we get the following numbers:

|      |Others stay|Others go|
|I stay|0.885|0.873|
|I go|0.913|0.905|
So the smartest move for me is to go to work when I have a cold, since even the worst outcome of going is better than the loss of income from staying at home. That others also think like this will lead to the bottom right corner, where we all get sicker but are still better off than in the altruist upper left corner! In this case, for these parameters, it is acceptable to give each other the cold.
What if we increase epsilon? For larger values of epsilon it becomes worse to be sick, and the extra utility from working does not compensate for it. But it is also a Prisoners' Dilemma situation: if I know the others stay home, I'm better off working. Which they will also think, producing a situation where we all make each other worse off.
With, for example, epsilon = 10 the table becomes:

|      |Others stay|Others go|
|I stay|0.365|0.302|
|I go|0.394|0.334|
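For readers who want to experiment, here is the payoff calculation as a small Python sketch of the model above (my own code; the parameter values are the ones used in the text):

```python
P = 3 / 52       # weekly probability of catching a cold
ALPHA = 0.1 * P  # extra infection risk when the other side works while sick

def payoff(i_go, others_go, epsilon):
    """My expected weekly utility: healthy work pays 1, a sick week at
    work pays 0.5 minus epsilon, a sick week at home costs epsilon."""
    p = P + (ALPHA if others_go else 0.0)            # my probability of a cold
    sick_value = (0.5 - epsilon) if i_go else -epsilon
    return (1 - p) * 1.0 + p * sick_value

for eps in (1, 10):
    for i_go in (False, True):
        row = [round(payoff(i_go, o, eps), 3) for o in (False, True)]
        print(f"epsilon={eps} {'go' if i_go else 'stay'}: {row}")
```

For epsilon = 1 going dominates and the all-go corner still beats the all-stay corner; for a large epsilon going still dominates each row, but everybody going is worse than everybody staying - the Prisoners' Dilemma.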
This is OK for a simple analysis of what I guess most people's intuitions tell them. But what about an entire population? And in reality sometimes we go, sometimes we don't: we use mixed strategies. I did some simulations of that for different values of alpha and epsilon (details and code at the end). Imagine a number of agents (who could be populations of people thinking identically), where each agent has an individual probability of staying home when sick. Each week agents become sick with probability P + alpha*(number of agents who went to work sick the week before). Each agent gets a score for that week equal to 1 if healthy and working, -epsilon if sick and at home, and 0.5-epsilon if sick and working. After 52 weeks the 20% of agents with the lowest scores rethink their strategy and change it randomly, and the whole process is repeated.
So, what happens? First we see individual cold cases that tend to occur together in waves - epidemics. Agents that stay at home get lower scores than the workers, so they change their views and there is a trend towards a higher likelihood of working. The epidemics become more and more common, making it even worse to be a stay-at-home person. In the end nearly everybody works despite getting colds all the time, and the total utility has decreased to a low value. A real tragedy of the commons.
This is the case if alpha is too large. For lower alphas the tendency is markedly weaker. Going to work doesn't change as much, and the epidemics that really hit the stay-at-homes hard don't occur.
One can also vary the value of the work done while sick at work. Assuming a very low value of course makes it less enticing to go there, producing stay-at-homes. If it is closer to the normal utility then the opposite is true, and we get many who work and cause epidemics.
So, what are the policy implications? Sometimes it is rational to risk increased infection rates for oneself and others, especially when the disease is not very distressing, the increase is small or the loss of pay from being sick is large. In other situations people would be better off if they stayed at home, but the competition from others causes them to go. How can this be avoided? One can try to lower alpha (like the Japanese with their masks), but this is likely hard. Changing epsilon is likely impossible (better anti-cold medications?). But the loss of pay or utility from staying at home (or the pay while working sick) is amenable to change. If the loss of pay is low, the incentive to work sick vanishes and the equilibrium is healthy stay-at-homes. Similarly, if the reward for coming to work sick is not near the ordinary reward, people also stay home.
If you get paid sick leave, this model suggests a higher likelihood of staying at home. But adding days without sick pay (5 "karensdagar" in Sweden) will promote going to work for some - and increase the risk of epidemics etc. Maybe a good solution for companies and organisations would be to fine people who arrive sick - something that sounds quite cruel and would likely be unpopular, but it might actually make everybody better off. Maybe these fines could be redistributed as a bonus for people who stayed at home when sick, reducing the bad effect of unpaid days. Still, it is a rather intrusive measure and I wonder if it could ever be implemented (without some draconian Singapore-style health dictatorship).
Being a libertarian I believe that one can have contracts about nearly everything, and I can certainly see and accept employment contracts that contain things like this. People who infect me should really pay for the losses they cause me. Since it is usually impossible to trace them, it actually makes a lot of sense to make a redistribution as above.
Maybe the same system can be used at the really infectious places in society, the place where the epidemics really start: at kindergartens and schools. Parents get discounts depending on whether they keep their kids at home when sick. Given how kids act as social network bridges that transmit disease across society, this might be a higher priority than fixing companies and other adult organisations.
I used Matlab to run the simulations - my typical quick and dirty simulations.
N=100;                    % Number of agents
xgo=rand(1,N);            % Likelihood of going to work when sick
basesick=1/52;            % Base prob of being sick
psick=ones(1,N)*basesick; % Likelihood of being sick next week
sick=zeros(1,N);          % Who are sick
sickwork=0.5;             % Pay for working while sick
epsilon=1;                % How bad is it to be sick?
alpha=1.5*1/100;          % How much infection does one sick worker cause?
meanutility=[];           % Mean utility per generation
meango=[];                % Mean likelihood of going per generation
ff=[];                    % Map of cases

for generation=1:100
  utility=zeros(1,N);
  for week=1:52
    sick=rand(1,N)<psick;                    % Who gets sick this week?
    sickgoers=sick & (rand(1,N)<xgo);        % Sick agents going to work anyway
    gowork=(sick==0)+sickwork*sickgoers;     % They get paid
    utility=utility-epsilon*sick+1*gowork;   % And we subtract the illness
    psick=ones(1,N)*(basesick+alpha*sum(sickgoers)); % Infection
    ff=[ff; sick];                           % Map of cases
  end
  [dummy,order]=sort(utility);               % Sort the agents after score
  rep=order(1:round(N/5));                   % The worst 20%...
  xgo(rep)=rand(1,size(rep,2));              % ...rethink their strategy
  meanutility=[meanutility mean(utility)];
  meango=[meango mean(xgo)];
end
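For readers without Matlab, here is a rough Python re-implementation of the same scheme (a sketch under the assumptions described above; the generation count and the replacement rule for the worst 20% are my own fill-ins where the fragment is incomplete):

```python
import random

random.seed(2)

N = 100            # number of agents
BASESICK = 1 / 52  # base weekly probability of a cold
ALPHA = 1.5 / 100  # extra infection probability per sick agent at work
EPSILON = 1.0      # how bad it is to be sick
SICKWORK = 0.5     # pay for working while sick

# Each agent's probability of going to work when sick
xgo = [random.random() for _ in range(N)]

def run_year(xgo):
    """One 52-week season; returns each agent's accumulated utility."""
    utility = [0.0] * N
    psick = BASESICK
    for week in range(52):
        sick = [random.random() < psick for _ in range(N)]
        goes = [sick[i] and random.random() < xgo[i] for i in range(N)]
        for i in range(N):
            if not sick[i]:
                utility[i] += 1.0                  # healthy work
            elif goes[i]:
                utility[i] += SICKWORK - EPSILON   # sick but at work
            else:
                utility[i] -= EPSILON              # sick at home
        psick = BASESICK + ALPHA * sum(goes)       # next week's infection risk
    return utility

for generation in range(50):
    utility = run_year(xgo)
    worst = sorted(range(N), key=lambda i: utility[i])[: N // 5]
    for i in worst:            # the worst 20% rethink their strategy
        xgo[i] = random.random()

print(round(sum(xgo) / N, 2))  # population mean tendency to work while sick
```

With this alpha the mean tendency to work while sick tends to drift upwards over the generations, the tragedy-of-the-commons dynamic described above.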
It was of course only a matter of time: ABCNEWS.com : Suit Seeks to Block Sales of GloFish
The Center for Food Safety's complaint against the FDA has some truly hilarious parts. Among others:
Additionally, the imminent release of genetically engineered ornamental fish into the environment and the consumption of them by other carnivorous fish as part of the foodchain means that such carnivorous fish will be caught or purchased and consumed by CTA Board Members. Such results compel their involuntary consumption of genetically engineered ornamental fish that have not been approved as safe for use as human or animal food.
Just think about it. Out there there are fishes eating stuff that has not been approved as safe. Yucky stuff, dangerous stuff. Unregulated stuff. Just feel the anarchy accumulate in the fat tissue beside the mercury!
Of course, as pointed out in section 9, the deep reason is aesthetics. CTA doesn't like nature with modified creatures in it. So they are injured by them. Being an atheist 1/8th troll, I could probably claim being aesthetically and ethnically injured by church steeples had I lived in the US.
More seriously, this actually is the core of the issue. The practical risks are negligible (besides the potential for spread in the Gulf of Mexico ecosystem, but that is true for the original zebrafish too), the antibiotic resistance genes are nothing compared to the plasmids already actively used by bacteria, and of course eating something that has eaten a fish that had a fluorescent protein is not much more dangerous than eating a fish that had eaten a jellyfish with the same protein. It is all about what kind of nature one wants. A nature defined by not having been affected or changed by humanity, or a nature where humanity is a participant in evolution. A glowing fish has increased diversity, something many view as desirable. The big question is of course whether it is the FDA or some other agency (the EPA?) that gets to define that nature, especially since it is both local and global. Just as decency standards and aesthetics vary, so do bioaesthetics and philosophical views on nature.
I don't know the likelihood of the FDA banning the fish (the complaint seems somewhat arbitrary to me, but I know little about US legal tactics, and some agency might want to extend its boundaries and funding a bit), but given that it is up for sale and easily bred, this might be the start of a real biotech underground. In many ways it might be worse if fishes are spread in secret between individuals who are more likely to be disrespectful of the law and perhaps of traditionalist ecology than if they were just mildly regulated and debated by aquarists.
Hopefully we can move from a debate where the issue is that a species is genetically modified to one where the issue is whether this particular modification is bad, risky or doesn't fit our aesthetics.
I encountered this discussion about whether life extension will produce less ambitious people (originating in another relevant discussion, on whether longer lives will make people more risk averse) shortly after discussing in another medium the personality traits that promote longevity.
It seems from my perspective that personality might be the key factor determining long-term longevity of people even given radical medical treatments (let's say full nanomedicine).
First, let's explore the current impact of personality on longevity.
It is known that positive perceptions of ageing promote longevity. People with more positive self-perceptions of aging, measured up to 23 years earlier, lived 7.5 years longer than those with less positive self-perceptions of aging (Becca R. Levy, Martin D. Slade, Suzanne R. Kunkel, Stanislav V. Kasl, Longevity Increased by Positive Self-Perceptions of Aging, Journal of Personality and Social Psychology, August 2002); they appear to have a strong "will to live" factor that decreases mortality.
Helping others might also be beneficial (Brown SL, Nesse RM, Vinokur AD, Smith DM, Providing social support may be more beneficial than receiving it: results from a prospective study of mortality, Psychol Sci. 2003 Jul;14(4):320-7): mortality was halved among caregivers. Besides the altruist feel-good factor, there might also be a responsibility factor. In a classic study nursing home residents were either given a talk on personal responsibility and a plant to care for, or a talk about how the staff would care for them and a plant the staff would care for. The result was a 50% reduction in mortality in the responsible group, as well as increases in alertness, active participation, and a general sense of well-being (this might be relevant not just to nursing homes and life extension, but to societies too!).
The "Big Five" personality factors have been studied in relation to longevity (and everything else, for that matter). According to Eamonn Ferguson, high conscientiousness is related to better health and longevity, whereas low agreeableness and high neuroticism seem to be health risk factors (perhaps due to stress). The size of the positive effect of conscientiousness on longevity is equivalent to that of blood pressure and cholesterol. He suggests risk avoidance and health-promoting behaviours as a main cause - conscientious people keep healthy, eat good food and don't drive drunk.
Of course, some of this might be due to the lack of longitudinal studies. What if personality changes over a lifetime? Traditionally personality is assumed to be fairly fixed, but that might just be folk psychology and people explaining away gradual shifts of personality in themselves and others, away from their templates of "who they really are". A study (Sanjay Srivastava, Oliver P. John, Samuel D. Gosling & Jeff Potter, Development of Personality in Early and Middle Adulthood: Set Like Plaster or Persistent Change?, Journal of Personality and Social Psychology, Vol. 84, No. 5) shows quite a bit of drift of personality over time: conscientiousness and agreeableness increase, while female neuroticism decreases (eventually reaching the male mean).
But this was not a longitudinal study. So maybe we are just seeing the survivors? But the increase in conscientiousness is positive already below 30, and agreeableness starts to grow after 30, at ages where few have had time to die of old age. While there might be some tricky selection effect, it seems reasonable to conclude that personality does change over the lifespan. As we grow older we on average grow up: we become more conscientious and agreeable (at least up to 55+, where agreeableness starts to dip - old and grumpy), and women learn to relax.
So, to sum things up: there are perceptions, behaviors and personality traits that promote longevity and reduce mortality. Quite a few of them are individually changeable too, but they usually change slowly unless external conditions force change. The "ideal" long-lived person would be somebody looking forward to ageing, who takes responsibility for their own life and their environment, and who acts conscientiously and in a positive way without getting stressed.
Of course, these might just be the personality traits that promote longevity today, and with advanced technology one's state of mind will matter less than one's medical treatment. But personality still determines what treatments we seek. This can make a huge difference. There are notable differences in health and longevity between social classes in most societies, even when material affluence is high enough that everyone could get about the same kind of treatment. While there are many factors, personality and learned behavior appear to play a key role: the higher the class, the more future-oriented people are, the more disposed to care for themselves and to believe they can achieve whatever they set out to do (a positive outlook that seems to fit well with longevity in itself). And this helps them care for their health and make sure they get good treatments at the doctor.
So it seems reasonable that highly conscientious, dynamic people would get themselves life-extending lifestyles and treatments. There is going to be a kind of filter effect against people with learned helplessness when significant life extension treatments arrive: even if they can afford them, they will not go for them as strongly. This would likely act as a filter on "the last mortal generation": the survivors will not be similar to the average. The generation after that is likely to survive regardless of personality (and class; it seems unlikely that such a killer app as radical life extension would remain costly and elitist unless the price could be artificially inflated through extreme regulations and certifications, or the need for much skilled manpower).
Now we have a population of long-lived or emortal people. The main causes of death will be accidents or suicide.
Suicide will correspond to people lacking the will to live but with sufficient volition (and cultural support) to end their lives. If we assume that psychological treatments of depression are good, it seems likely that few will commit suicide for other reasons, and people without the will to live but with insufficient motivation to end it (the classic ennui case) might be helped out of either problem.
Accidents would be people taking risks. Clearly, adventure-seeking people will be at a higher risk (especially if they are driven not just to have adventures but to seek the special thrill of real danger - there is a difference between an exhilarating experience that is actually very safe thanks to technology, even if it once was risky, and an experience that is still risky). Survival curves are exponential in this emortal situation, with a time constant set by the expected time to a fatal accident. This means there will be a constant filtering out of the truly risk-taking individuals, and if we just look at a single generation it will eventually consist mostly of non-risk-takers. It should be noted that new risk-takers are born, so there will be a steady trickle of accidents waiting to happen (unless mental technology or cultural influence is able to reduce risk-taking behavior).
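The exponential survival curve can be checked with a small simulation (a sketch; the `survival_fraction` helper, the cohort size and the 1000-year accident timescale are all illustrative assumptions, not figures from any study):

```python
import math
import random

def survival_fraction(tau, t, n=100_000, seed=1):
    """Monte Carlo sketch: each emortal dies only by accident, modeled
    as a constant hazard of 1/tau per year. Returns the fraction of the
    cohort still alive after t years."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.expovariate(1.0 / tau) > t) / n

# With a constant hazard the survival curve is exponential, S(t) = exp(-t/tau),
# so half the cohort is gone after tau * ln(2) years.
tau = 1000.0  # assumed mean time to a fatal accident, in years
print(survival_fraction(tau, tau * math.log(2)))  # close to 0.5
```

The filtering effect follows directly: risk-takers have a smaller tau, so their exponential decays faster and the surviving population becomes increasingly dominated by the cautious.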
This sounds like a very stable and staid future. Maybe not extremely risk averse, but not a particularly wild place. On the other hand, this assumes that the conscientious people remain conscientious all the time. Most reasoning about the psychology of immortals assumes that personality remains fixed or slowly evolves due to surrounding constraints. But in a very safe, relatively stable society there may be evolutionary drift: as the years pass and nothing dangerous happens, the level of acceptable risk may change. One pathway might be perceived risk. If I have done something a hundred times I think I know it and can control the possible variations of the situation, and I will tend to reduce my risk estimates. This would make people grow slightly less cautious, especially if their bodies and minds were otherwise energetic and healthy. Boredom can drive people to the most surprising changes. But even long time spans and random drift of personality could cause the same changes. In a large population there would always be some risk-takers.
It should be noted that if the society is rapidly changing, then being risk-averse could be detrimental as new possibilities open up: those will be grabbed by the more ambitious and risk-taking individuals. On the other hand, if the change is more like chaos (i.e. more new bad possibilities), the selection will go the other way.
This drift possibility will be countered if certain aspects of personality are self-reinforcing, especially given mental technology that enables desired changes. An obvious case would be ambition: an ambitious person is motivated to keep that ambition (limited only by realism). A more subtle case would be lack of ambition and motivation: this inertia would of course not cause any change by itself, but other factors need to be taken into account to see whether it is actually self-reinforcing (if social standing is negatively affected by the inertia, this might cause self-reinforcing learned helplessness). A risk-averse person would not necessarily wish to remain risk-averse, although he might choose a less radical increase of risk acceptance than a risk-accepting person would. Again, other factors are likely needed to cause self-reinforcement (such as the rewards of change).
It is interesting to look at the curves of the Srivastava et al. study. Conscientiousness could be self-reinforcing: being orderly and organized could make a person place himself in more orderly situations, where orderliness is more appreciated and rewarded, and so on. But again, it could just be a self-limiting process of learning.
It is notable that the variance of personality and cognitive measures increases as we age - ageing is highly individual (which makes sense if it is due to lack of selection in old age). This might imply that the personalities of people would also diverge more and more. Life extension might fix the purely biological aspects, but it seems likely that at least some of this variance increase is due to self-reinforcement or drift.
In the end, the answer to the question "do we become risk averse or less ambitious as immortals" still remains unclear. But it seems likely that we will become more risk averse, though perhaps not in the fearful manner often imagined but rather in a more conscientious and rational manner. Extroversion is more linked with risk-taking, and it did not show any strong change over the lifespan. And given that ambition correlates with conscientiousness, maybe we will become more ambitious as we age with constant energy.
Imagine the combination of life experience, skills, ambition and the energy of having a 25-year-old body! The problem might not be cautious elders living in ennui, but too many wise doers (like Lindsay in Sterling's Schismatrix) reshaping the world.
Integrated 1000-year planning by Bruce E. Tonn, Futures, Volume 36, Issue 1, February 2004, Pages 91-108.
A paper describing integrated 1000-year planning as a tool for dealing with global issues such as the survival of humanity. While it has a can-do attitude that I admire, it seems utterly unrealistic. But still, there are some interesting long-range visions there if they could only be released from the prison of planning.
The basic problem is of course that planning only makes sense when you have enough data to predict the effects of your plan with some reasonable accuracy. That makes it hard enough over short timescales.
The real drive for a 1000-year scale is to broaden thinking, to actually challenge the assumptions of the present. But how well does the paper do it? Of the 11 elements mentioned (energy, land use, carbon management, oceans, biodiversity, nuclear and hazardous waste, water, human settlements, near-earth objects, space exploration and integration) most are very traditional, close to current discussions about sustainability. While space exploration merits its own element, the possibility that a sizeable part or most of humanity will be living offworld within a few centuries is not discussed - a possibility that would totally transform the meaning of the other elements. Similarly, changes in the human condition are not assumed, despite these changing the qualitative and quantitative situation enormously: life extension, modifications of human drives, human-AI symbiosis, uploading and new reproductive technologies are not mentioned.
This is natural, given the relatively small role given to technology in the treatment:
"Advances in technology should be considered in 1000-year plans. However, following the precautionary principle, plans should not be based on technologies that do not exist or are uncertain to come into existence within the time frame under consideration."
While this may avoid making plans that likely fail due to the assumption of certain technologies, the result is instead plans that make the certain mistake of ignoring transforming technologies. That we do not yet know these technologies is no reason not to consider them, just as the rest of the 1000-year plan idea does not abstain from trying to consider the sociology of the far future.
The problem here is of course the love of planning. It is mentioned that goal-directed R&D is behind the success of computing and biotechnology. But that is likely not true; both fields have rather demonstrated the strength of diverse and individual approaches working in parallel. Goal-directed R&D became possible only after the fundamental science and technology had been developed, and it still draws from this diverse bed. But from the perspective of someone at the School of Planning, of course planning works and is a wonderful tool (just as I, a computational neuroscientist, see simulation and learning as the everything-is-a-nail hammer).
Perhaps the problem is that everything is phrased as planning and not vision. 'Vision' is often used too loosely, as merely a pleasant image of a desired future with no way of getting there, but it can also be something closer to the plans discussed in the text. Surprises will happen, and compromises between different parts will have to be renegotiated. In that case the traditional plan breaks down, while a more flexible vision remains, adjusting itself to the new reality.
One area where I think the article has notably good points is the discussion of global risks and risk assessment. Here it would be interesting to combine it with the studies of existential risks.
[astro-ph/0310571] A Map of the Universe by J. Richard Gott III, Mario Juric, David Schlegel, Fiona Hoyle, Michael Vogeley, Max Tegmark, Neta Bahcall and Jon Brinkmann. Extra material at http://www.astro.princeton.edu/~mjuric/universe/.
A map showing a geocentric perspective of the entire universe. The trick is to make one direction logarithmic, which makes it possible to depict everything (more or less) from the Earth's core out to the Big Bang. The other direction represents declination, making this a slice across the universe along the ecliptic.
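The logarithmic trick can be sketched in a few lines (a toy illustration; the `map_coordinate` name and the choice of the Earth's radius as reference scale are my assumptions, not necessarily the paper's exact construction):

```python
import math

def map_coordinate(r_meters, r0=6.371e6):
    """Logarithmic radial coordinate: y = log10(r / r0), with r0 chosen
    as the Earth's radius so the surface maps to y = 0."""
    return math.log10(r_meters / r0)

# One unit of map height per decade of distance lets the Moon, the Sun,
# the nearest star and the cosmic horizon share a single page.
for name, r in [("Earth's surface", 6.371e6),
                ("Moon", 3.84e8),
                ("Sun", 1.496e11),
                ("Proxima Centauri", 4.0e16),
                ("cosmic horizon", 4.4e26)]:
    print(f"{name:18s} y = {map_coordinate(r):5.1f}")
```

Twenty decades of distance thus fit in twenty map-units of height, which is what lets a single slice run from the Earth's core to the Big Bang.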
If I have inherited anything from my father, it is his love of maps. A good map gives a sense of a place and its context. It should have as much information as possible, but "quiet" information: information that doesn't distract the viewer looking at the map in general but is directly available once you look for it. And the more complete the map is, the better.
By these standards this map is very good. It gives a sense of the stuff we find around us. The authenticity created by plotting the positions of satellites, minor bodies and planets at a particular moment in time is reassuring and reveals many interesting patterns (for example, look at the asteroid belt and how it is affected by Jupiter). I wonder what Edward Tufte would say about it?
I would probably have rendered it a bit differently, applying textures to represent the galactic disc etc. as coloured stars (perhaps grey) to make it less abstract. It is also a bit sad that the names of many neighbouring galaxies are not written in full; for popular science purposes it is better to call M31 "The Andromeda Galaxy (M31)" than just M31.
In the paper the authors discuss the intricacies of the mapping. Besides the usual issues of conformality (keeping angles locally identical to avoid distortion of shape) they have to deal with the relativistic effects of a curved space-time. It is a nice loop: geometry in curved space was developed by mapmakers, turned by Gauss (who did geodesic measurements) and others into a mathematical discipline that was to be the seed and engine of general relativity, and now it returns home to make a map.
I wonder if one could make a good 3D box map by plotting right ascension too? Obviously there are tricky issues of conformality, but it would probably be worthwhile to show how the different planes align (or rather don't align).
The authors also suggest plenty of interesting applications and ways of presenting the map, from wallscreens and lasershows on buildings to rugs for astronomy labs. A scientific visualization this good is bound to crop up in many places.
A paper about making hoax papers that I found absolutely delicious:
The Cartesian Conspiracy: How to do Post-Modernism with Marquis de Sade by J. Yellowlees Douglas.
Is academic hoaxing a problem, or actually a way of ensuring the health of scholarship and peer review?
(Based on a discussion on the Extropians list)
Academic hoaxes like Alan Sokal's are not just fun but a way of keeping academia healthy. The laughter and derision afterwards help point out flaws in the peer review system or other forms of quality control. Deliberately bad papers are also preferable, since their creator will inevitably point them out, unlike "accidentally" bad papers whose author won't tell anybody that he cheated, made things up or didn't know what he was talking about. We need the first kind as a kind of vaccination against the second kind.
The Douglas paper is an attempt to explore hoaxing a bit more carefully than just sending off a single hoax that a critic might claim is actually a reasoned paper the author doesn't dare stand behind. In this case the author replaced words in de Sade with words from postmodern giants (sodomy becomes theory, murderer becomes descriptive constructivist) and produced a number of nonsense papers. They were submitted to a number of journals, and four out of ten were accepted despite peer review. None were rejected for incomprehensibility.
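The substitution procedure is simple enough to sketch (a toy version; the first two word pairs come from the paper as quoted above, while the `whip`/`critique` pair and the code itself are my illustrative assumptions):

```python
import re

# Illustrative word map in the spirit of Douglas's experiment.
SUBSTITUTIONS = {
    "sodomy": "theory",
    "murderer": "descriptive constructivist",
    "whip": "critique",  # assumed pair, not from the paper
}

def postmodernize(text):
    """Replace each de Sade word with its jargon counterpart,
    matching whole words case-insensitively."""
    pattern = re.compile(r"\b(" + "|".join(SUBSTITUTIONS) + r")\b",
                         re.IGNORECASE)
    return pattern.sub(lambda m: SUBSTITUTIONS[m.group(0).lower()], text)

print(postmodernize("The murderer defended sodomy."))
# → The descriptive constructivist defended theory.
```

The point of the mechanical mapping is that it preserves de Sade's grammar exactly, so any apparent coherence in the output is supplied entirely by the reader.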
The really interesting part is the discussion, where the author points out that from an analytical perspective this is of course a sign of bad scholarship from the journals, but from the Continental side it might actually be OK:
"These distinctions between analytic and Continental ideals are crucial to the interpretation of this experiment, since the Continental position would possibly regard the nonsensical articles as being no less valuable than genuinely authored scholarship. For, if the value of scholarship rests in its ability to communicate meaning to its reader (and this meaning being unfixed, interactively determined by author and reader), then nonsensical discourse may be considered valuable so long as its reader finds it to be sensible."
For people coming to the issue from the hard sciences this makes little sense, since we are very much based in a realist enlightenment tradition. But that does not mean the other approach is invalid, it is just invalid if the message is supposed to bear any relationship with the world. If that relationship is instead provided by the interpretative process of the reader's mind it can indeed lead to fruitful ideas and actions in the real world. However, as the author points out, if we are to accept this as scholarship then the humanities are seriously threatened:
"In an age where nonsense is identical to scholarship, text-generating computers such as the Dada Engine are no less useful than postmodern scholars, and the necessity of the latter becomes difficult to substantiate."
The joke may go both ways; the "Bogdanov affair" (see the sci.physics thread and the collection of various partisan entries) appeared to involve nonsense physics papers being published in refereed journals and even earning two Ph.D.s! On the other hand, maybe they were perfectly legitimate theoretical physics papers that got treated by the community as hoaxes! The problem here seems to be more that it is hard to tell, and that is worrying in itself.
Peer review is supposed to filter out nonsense, bad research, academic misconduct and other scholarly dirt. The process need not be perfect, because the hope is that science is self-correcting. That hoax papers pass reviewers is not a sign that the process per se is bad; it just shows individual mistakes and might give a hint that particular journals or fields are getting lax. But if many such papers pass, then the problem might be widespread. Even worse, if people can't agree on whether a paper is a hoax or not, as in the Bogdanov affair or among some critics of the Sokal paper, then the field itself might have trouble with its epistemology (granted, in the Bogdanov affair plenty of traditional academic politics seems to be involved, muddying the waters). Maybe one could replace reviewer comments with the Dada Engine too - which would mean giving up on the self-correction of science.
Overall it seems that the system of scientific publishing needs new ideas and better ways of disseminating and filtering information. Academic review boards might be one way of achieving a more flexible review process. Various forms of filtering and presentation might enable networked and distributed debate and knowledge growth. This seems to be not just an interesting area for agent modelling and research (applied memetics?) but also of crucial academic importance.
As science advances our knowledge will grow and diversify, making it harder and harder to pick out a single fact without its context. And this context of course grows with science, making it harder and harder to get useful information, since understanding it requires more information, and so on. Even relevant information becomes noise because there is too much of it. But while, as Heylighen argues, we can try to correct that through information management and intelligence amplification, the noise that is just noise is a deeper problem. It grows even faster than the information overload, and we need ways of filtering it out. Without filtering there is nothing for the intelligence amplification to work on.
I think there are ways of achieving this. The current spam problem is producing an arms race of spam filters and new protocols to make spam uneconomical, sometimes based on immunological ideas. Similar methods might be useful (in more developed forms) for filtering academia, as an adjunct to peer review and other more traditional filters. Academic hoaxes are in many ways a form of vaccination against a particular danger. If we could find ways of rewarding people who discover holes in the scientific process (and rewarding the people and institutions who fix the holes too!), then we would likely see much improvement, in the same way open societies remain robust thanks to their constant internal criticism.
Academic hoaxing is a serious and important matter. But it is also fun, and so far the main driver for writing hoaxes seems to have been a sense of mischief. Let's hope we keep that in the future too.
Alex Kirby at the BBC wonders about the recent spate of warnings of doom (Doom warnings sound more loudly). His main wonder seems to be that people are not taking these warnings more seriously, perhaps because the problems are so gradual.
But maybe it is because people are somewhat rational. By this I don't mean that everybody who ignores the latest climate alarm has gone through the literature, run a few simulations themselves and concluded that the alarm was wrong because it didn't take the fluorite feedback loop into account. Rather, people learn to ignore the doomsayers because 1) they have a bad track record, and 2) it is impossible to live a doomed life.
One can make a good living from being a doomsayer. People naturally orient themselves to any information that may reveal a threat (a very rational response in itself), which means that as a doomsayer you can get attention - the hard currency of modern economy, and pretty profitable in the past too. Disasters sell, and if it is something that may involve me I am extra interested to hear about it in order to figure out how to deal with it. That strategy worked well for past sects (where you had to join to get the necessary salvation), it worked for newspapers and it works well for environmentalist organisations and researchers.
The problem is of course that this has no bearing at all on whether the prediction is right or not. Some predictions may be entirely correct, but that will not give them any particular advantage compared to less correct predictions. In the competition for attention truth is not a powerful selling feature; the sincerity of the claimant, how they fit with other prejudices, what authorities support them and so on are far more important. Which leads to a plethora of bad disaster predictions, which in turn leads to many failed predictions. And people learn to take them with a grain of salt.
A real disaster creeping up on us might of course fool people into complacency. But I think people have in general been right even about the slow disasters they have (not) worried about - dysgenics and the population explosion are good examples. Only a few people reacted strongly to these fears, and in the end they proved largely irrelevant.
Seeing a direct example of something disastrous is good at convincing us. We believe in examples, often far beyond their value as proof (just witness the testimonials that sell herbal remedies). A freak storm, and everybody blames the greenhouse effect - suddenly the doomsday predictions seem more plausible.
But it is very short-lived. When findings that acrylamide existed in everyday food, especially fried high-carb food like chips, were published in Sweden in 2002, chip sales dropped radically. For three weeks. Then they returned to roughly normal. In the meantime there had been no new findings on whether the acrylamide in food was carcinogenic, no new information. But people returned to their old habits despite somewhat diffuse warnings from the food board that it might be a good idea to avoid it.
This is due to the second reason. It is impossible to live a doomed life. If you really had reason to believe the Earth, your nation or your way of life was doomed you would need to change everything. That is an extremely tall order due to the enormous cognitive, emotional and practical costs of following it. The evidence (as you see it) has to be enormous. Usually the best way of getting people to take a doomsday prediction seriously is to create a collective panic so that everybody reinforces everybody else. That is sometimes enough to get people to flee their homes or change their way of life. But just an everyday doomsday prediction, or a prediction with no easy way of avoiding it (the Earth is going to collide with a comet!) seldom manages that.
This is why people temporarily gave up on chips. Had results shown that chips actually caused cancer they would perhaps have avoided them more, but remember that we know the dangers of smoking, deep-fried food and too much fat and still pursue them; the pleasure is worth more than the fear. But the alarm from the food board had been about all strongly cooked food containing carbohydrates, like bread, potatoes and many vegetables. There was almost nothing 'safe' to eat. The cost of changing habits was high, and the danger appeared uncertain. So people ate bread anyway. And each time they did, it undermined the worry about the chips too. In the end the old habits won. They appear to have been right too; later research has not found any strong link between acrylamide in food and cancer.
Similarly, what can we do about the greenhouse effect personally? While there is plenty of cheerful advice on it (see for instance the amazing suggestions from the Swedish environmental department, which include taking slightly shorter showers, using lids on saucepans while cooking, and heating less), this advice is obviously not particularly relevant. We know that the real issues involve cars, power plants and industry, and that any changes there are matters for engineers and politicians. So there are at best symbolic actions to take individually.
Kirby ends by wondering if some sufficiently dramatic event might change our complacency. Maybe, but that change is not likely to be rational either. He also hopefully suggests that people can change, and maybe some form of critical mass could bring about change. But the aspects of human nature (or rather, psychology) I have described here do not change. And again, critical mass effects that set large number of people into movement are often irrational too.
Maybe the real way of getting good at maintaining our environment is to get away from the doomsayers. As long as any environmental issue is a matter of disaster and the end of life as we know it, it is going to bring forth the worst sides of human thinking. But what if we looked at it as a practical matter, as we were trying to maintain a beautiful but weedy and junk-strewn garden? Then we would start to discuss different things to do, how we might best achieve them and why we want those particular results. We might even notice different views rather than blaming each other for ignoring the apocalypse each of us believes the most in.
The Bush administration will announce plans for a permanent human settlement on the moon and set a goal of eventually sending Americans to Mars. The response among pro-space groups is predictably enthusiastic. Which makes me, a pro-spacer myself, wonder: why didn't we learn anything from the last space race?
A friend wrote that a space race between the US and China would be a great start of the 21st century. Others have speculated about a race between China and India, and it is quite possible to have a three-part race (I doubt the EU will participate alone, it will most likely try to join up with one of the other groups).
But what were the permanent results of the last space race? Besides some invaluable experience (now ageing away in engineers' brains or bit-rotting on magnetic tapes) and scientific results, it mainly left behind cumbersome bureaucracies in the US and the Soviet Union. The race promoted the idea of national space programs and led to the creation of many national space monopolies, even among third-party nations like Sweden - since nobody other than governments could afford space, that somehow legitimized laws enforcing monopolies. It was the breakup of the Soviet Union and the desperate need for money that drove the Russian space program to open up for private space tourism.
The problem with a space race is that the goal is not to colonize space but to win over a competitor in order to score international political prestige or to unify the nation. The latter goal appears especially relevant with the current administration, according to the WP article: "Sources involved in the discussions said Bush and his advisers view the new plans for human space travel as a way to unify the country behind a gigantic common purpose at a time when relations between the parties are strained and polls show that Americans are closely divided on many issues". The assumption is of course that Americans will not be divided over this too.
If the goal is prestige or unity, then the goal of actually getting humanity permanently into space (which we need for long-term survival) is secondary. If you fail and somebody else gets to the moon or Mars first - or if you succeed - the goal has been met and there is no longer any impetus to continue the expensive project, and it will likely be stopped (or live on as an undead, bloated organisation justifying its existence). The same holds if other concerns become more pressing, require more money or promise more prestige/unity. It is amusing to note that the space announcement was made the day after the IMF criticized the US for its budget deficit.
I think the X Prize has a far greater beneficial effect on getting mankind into space. It is immensely more modest, but it makes use of a diverse set of approaches, each of which has to consider its business plan. The space movement has to think more about the business plan of getting into space and less about destiny, heroism and the appeal of huge engines.
Of course, even with a proper business plan it is hard to get into space. I ran a scenario planning experiment / roleplaying game where the participants were doing their best to build permanent space settlements, assuming some very pressing needs to go to space. It still turned out to be extremely hard to make a viable manned space habitation program with current or near-current technology. We will likely need some new impetus to get out of the gravity well. Space tourism might pay for going suborbital or putting a hotel in LEO, but not much more. Space power systems become relevant only when a lot more space infrastructure is in place to build them. The survival motive is great rhetoric but hard to get money from. Space mining has to compete with ground and even sea mining (Julian Simon showed why we cannot expect such resources to become expensive enough to motivate it; the exception might be helium-3). Even stopping killer asteroids might just motivate having a spaceship parked in LEO ready to intercept; it is not obvious that we need permanently manned bases at all. And telepresence is likely the best way to maintain our communications satellites anyway.
In the end, we need something new to make space viable. An order-of-magnitude drop in launch costs. Something very profitable to do in space. A very good reason to go there that nearly everybody agrees on - or a diverse space industry finding a way despite all naysayers. But a space race does not make space more viable.
S. A. Cavigelli and M. K. McClintock, Fear of novelty in infant rats predicts adult corticosterone dynamics and an early death, PNAS, December 23, 2003 vol. 100 no. 26 16131–16136
Some individuals react to novelty with fear ("neophobes"), others accept or gladly embrace it ("neophiles"). But any living being will encounter novel things throughout life, and hence the neophobes will be stressed. The Cavigelli & McClintock paper studies the effect of neophobia on lifespan in rats, and found that neophobes have shorter lifespans than their neophile relatives.
In the paper rats were placed in a novel environment and allowed to explore. Rats that did not explore much were labelled neophobes (and often showed signs of fearfulness). Neophobia is believed to be linked to increased activity in the amygdala and the hypothalamic-pituitary-adrenal axis: a release of glucocorticoids (the long-term stress hormones cortisol and corticosterone). The neophobic rats indeed showed higher and longer-lasting levels of stress hormones after the novel experience.
As everybody knows, long-term stress is bad for health. It decreases the immune defenses, makes the hippocampus shrink and in general weakens the body (the rapid fight-or-flight stress mediated by adrenaline is less harmful). And indeed, the neophobic rats died on average younger than the neophilic rats. Their lifespan was 20% shorter, and their mortality was consistently 60% higher. Being a neophobe seems to stress you into an early grave.
While this sounds like a wonderful argument for us neophiles to use to convince neophobes to come out of their hiding places and embrace the future, there is a problem: the neophobia discussed here appears to develop at a very early age and to be a persistent personality factor. It is not just something you decide to throw off.
In the rats, testing young individuals for neophobia predicted their neophobia and hormone levels as adults; neophilia and neophobia appear to be rather stable traits that do not change. The same seems to be true in humans: neophobic children grow up into neophobic adults. The origin may be natural genetic variation, but other factors are clearly important. Among the rats, pairs of brothers exhibited different levels of neophobia despite their close relation (especially telling since lab rats are rather genetically similar to begin with). Instead, early environment and experiences likely play a large role. In the same issue of PNAS there was another paper:
Gerd Poeggel, Carina Helmeke, Andreas Abraham, Tina Schwabe, Patricia Friedrich and Katharina Braun, Juvenile emotional experience alters synaptic composition in the rodent cortex, hippocampus, and lateral amygdala, PNAS, December 23, 2003 vol. 100 no. 26 16137-16142
that showed how repeated separation of degus (Octodon degus, the "brush tail rat") from their parents led to distinct changes in the synaptic connections of the limbic system of their brains - exactly the system likely to influence emotional and stress levels. Of course, there are many, many other studies showing that a bad upbringing with little contact with the mother or siblings produces a more "highly strung" individual, be it a rat or a human.
As an aside, I also found a paper about food neophobia in girls (the unwillingness to try new kinds of food):
Amy T. Galloway, Yoona Lee, Leann L. Birch, Predictors and consequences of food neophobia and pickiness in young girls, J Am Diet Assoc. 2003 Jun;103(6):692-8.
that saw a definite correlation between food neophobia and anxiety. Of course, there are many other factors (such as having a mother with food neophobia). Also of interest was the negative correlation between pickiness (being unwilling to eat familiar food) and being breast-fed; a bit surprisingly, there was no such link between being breast-fed and lack of neophobia. But as usual, psychological data seldom fit neat schemes.
As humans we have a lot more self-regulation options than rats. We are amazingly able to re-train our reactions. But that still does not make us able to easily overcome core personality factors, at least not through direct psychological means. It is an interesting question whether we can deal with this kind of self-limiting personality trait through medication. It would seem that lowering stress hormone levels chemically should not be that hard, and by reducing that feedback neophobia might be ameliorated (if nothing else, the health damage could be limited even if the personality remained the same).
But in the meantime it is probably a good idea to hug one's children and make sure they get to enjoy science fiction.
How do we achieve cooperation without coercive central organisations? The classical example is Axelrod's analysis of strategies in the evolutionary iterated prisoner's dilemma, where he showed how reciprocal strategies can eventually dominate and form largely cooperative societies. Based on the idea of hortators, individuals acting as coordinators of social information and reciprocal rejection of defectors, I discuss some simulations and ideas around the formation of networks for voluntaristic enforcement of cooperation in this somewhat rambling account of unfinished hobby experiments in simulated sociology.
In the novel The Golden Age (and sequels) by John C. Wright a future libertarian society is described. In this near-utopia there are very few restrictions on individual freedom and nearly infinite possibilities of abusing the advanced technology available. In order to manage this problem the institution of the Hortators has developed. Essentially the Hortators are busybodies maintaining the social order by pronouncing different things desirable or undesirable, and if somebody breaks the social order or behaves unethically they ostracise him or her. Since the vast majority supports the Hortators they will also follow their lead, making an ostracism unpleasant, a serious economic hurt or even near-fatal if it is total and indefinite. Of course, the Hortators are not perfect (and most of the novel deals with one individual's struggle against them when he thinks they are wrong), but Wright makes an interesting point: an entirely voluntary organisation can likely maintain social norms with a high degree of compliance if it is broad enough. But what about smaller organisations, or societies with several different groups?
Axelrod's Norm Game (Axelrod, R. (1986). An evolutionary approach to norms. American Political Science Review 80, 1095-1111) is a variant of this issue. Axelrod is perhaps best known for his studies of the evolution of strategies in the iterated prisoner's dilemma. The Norm Game is a somewhat simpler abstraction but still has nontrivial effects. Individuals can choose to behave well or badly; acting badly gains the defector a certain reward while incurring a global penalty on everybody else (it could be overgrazing a common, polluting or maybe spamming). If strategies are adopted depending on how successful they have been in the recent past, we get an evolutionary game. Without any other assumptions the only stable strategy is of course to defect all the time. What if other individuals who see the defection can strike back, incurring a penalty on the wrongdoer (but also losing a bit themselves)? It turns out that defection still reigns. By introducing meta-punishments, where individuals punish others who see wrongdoings but do nothing, Axelrod found a stable cooperative state.
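To make the setup concrete, here is a minimal sketch of this kind of Norm Game in Python. It is a toy of my own: the observation model and the replace-the-bottom-half selection rule are illustrative choices, with payoffs loosely inspired by (but not guaranteed to match) the original paper. Each agent has a "boldness" (propensity to defect) and a "vengefulness" (propensity to punish observed defections):

```python
import random

random.seed(0)

N = 20             # population size
GENERATIONS = 50
TEMPTATION = 3     # reward for defecting
HURT = -1          # cost inflicted on every other agent by a defection
PUNISHMENT = -9    # cost of being punished
ENFORCE_COST = -2  # cost paid by each punisher

def new_agent():
    # Boldness: propensity to defect when the chance of being seen is low.
    # Vengefulness: propensity to punish an observed defection.
    return {"bold": random.random(), "venge": random.random(), "score": 0.0}

def play_round(pop):
    for agent in pop:
        seen = random.random()    # chance this defection would be observed
        if agent["bold"] > seen:  # bold agents defect despite the risk
            agent["score"] += TEMPTATION
            for other in pop:
                if other is agent:
                    continue
                other["score"] += HURT
                if random.random() < seen * other["venge"]:
                    agent["score"] += PUNISHMENT
                    other["score"] += ENFORCE_COST

def evolve(pop):
    # Crude selection: the top half reproduces with small mutations,
    # replacing the bottom half; all scores are then reset.
    pop.sort(key=lambda a: a["score"], reverse=True)
    half = len(pop) // 2
    for i in range(half):
        parent = pop[i]
        pop[half + i] = {
            "bold": min(1.0, max(0.0, parent["bold"] + random.gauss(0, 0.05))),
            "venge": min(1.0, max(0.0, parent["venge"] + random.gauss(0, 0.05))),
            "score": 0.0,
        }
        parent["score"] = 0.0

pop = [new_agent() for _ in range(N)]
for _ in range(GENERATIONS):
    play_round(pop)
    evolve(pop)

mean_bold = sum(a["bold"] for a in pop) / N
mean_venge = sum(a["venge"] for a in pop) / N
print(f"mean boldness {mean_bold:.2f}, mean vengefulness {mean_venge:.2f}")
```

Tracking mean boldness and vengefulness over the generations shows how the strategy mix drifts; adding meta-punishment is a small extension of the observer loop.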
Others have explored changing his assumptions, introducing bounties for norm-enforcers, exploring different kinds of networks and so on. In general the effect of the social network seems to be complex. If everybody is connected to everybody else, many will react to a norm breaker and punish him, producing a strong normative effect. Real social networks are "small world" networks where most of my friends know each other (links are local, as in a network based on spatial closeness) but there are still enough long-range links to make it possible to find a short path from one person to another: the classic "six degrees of separation" (I discovered to my amusement that I am just three social steps from Saddam Hussein myself). In a local network fewer people know me directly, and hence norm management is going to be local. While local links, such as those within a family, are likely stronger than long-range links, this localness weakens the normative effect. When the network is a small world, information about a transgression can spread widely, but there is unlikely to be a direct social link from the vast angry majority to the transgressor.
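The small-world structure itself is easy to generate: start from a purely local ring lattice and rewire a few links at random, in the style of Watts and Strogatz. The sketch below (all parameters arbitrary) shows how even a 10% rewiring probability shortens the average social distance:

```python
import random
from collections import deque

random.seed(1)

def small_world(n, k, p):
    """Ring lattice with k neighbours on each side; each edge is then
    rewired to a random endpoint with probability p (Watts-Strogatz style)."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add(frozenset((i, (i + j) % n)))
    adj = {i: set() for i in range(n)}
    for e in edges:
        a, b = tuple(e)
        if random.random() < p:
            c = random.randrange(n)
            if c != a:
                b = c  # rewire one endpoint to a random node
        adj[a].add(b)
        adj[b].add(a)
    return adj

def mean_path_length(adj):
    # Average shortest-path length via breadth-first search from every node.
    total, count = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

ring = small_world(100, 3, 0.0)     # purely local links
rewired = small_world(100, 3, 0.1)  # a few long-range shortcuts
print(mean_path_length(ring), mean_path_length(rewired))
```

Comparing the two averages shows the effect: a handful of shortcuts gives the rewired network a markedly shorter mean social distance while it remains almost as locally clustered.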
I did some simulations of the Norm game myself, assuming different kinds of social networks, different sizes of rewards and punishments, and different ways for information about a norm breaker to spread. Very interesting, but hard to say anything general about, since there are so many interacting parameters. Dense social networks promote cooperation, since many are likely to notice norm breaking. One thing I noticed was that an efficient way of achieving norms was to make punishments global: regardless of social distance, everybody incurred costs on the norm breaker. While meta-normativity may still be a stabilizing factor, often the combined effect of many people each punishing an offender weakly had a strong enough deterrent effect to keep everybody stably well-behaved. This makes sense given the assumption of globally incurred costs: the offender behaves badly in the public sphere, and while information might travel along social networks, the punishment also takes place in the public sphere.
In a small world network certain widely linked individuals can help spread information (or a rumor, or an epidemic; from a modelling perspective it does not matter whether it is memes or genes). Hence if these act as Hortators they would enable concerted action. Also, by being strongly linked they may be efficient at convincing people of the rightness of their actions: some other interesting simulations of social networks with 'leaders' who seek to convince others suggest that being well connected is more powerful than persuasive 'strength'. In simulations with deliberately introduced widely connected individuals, these tended to act as mediators of information and to make sure the offender was broadly punished, even when the cost of punishment to the punisher was noticeable.
So it seems that having Hortators might make norm enforcement more powerful. But how do we get Hortators? They have to pay enforcement costs, and since they can be expected to react more strongly than the average population (which contains individuals that may not be fully cooperative) they are at a disadvantage. In my simulations I often noticed the spontaneous appearance of Hortators that persisted for a few generations but then vanished, simply because they scored less than the less conscientious population. If hortatorship was something agents could decide on, it usually declined in the population generation by generation.
In Wright's world people subscribed to the Hortators, apparently paying a small fee for their maintenance. They also had the support of several major monopolies that gave them not just economic backing but significant social credibility. One could imagine subscribing to a hortator service being a useful social signal. Members of the service would display their membership, showing that they are less likely to defect and hence gaining an advantage over non-members when another individual seeks a trading partner. It would be similar to how ISO 9000 is sometimes used: in order to ensure quality, certified companies often seek out other certified companies as suppliers, making it an advantage to join. If the advantage of joining is greater than the cost of occasionally punishing defectors and maintaining the hortator(s) at the core, then the network will grow. As it grows, the cost of defecting increases and defection tends to decrease. Eventually an equilibrium likely develops, where the cost of being part of the network exactly equals the benefit, with occasional fluctuations due to random defections. This seems to be a very fruitful area to model. This kind of scheme would also allow multiple competing (or cooperating) hortator networks.
Preliminary simulations suggest that there are strong threshold effects in when hortator networks manage to enforce norms: they need to be sufficiently large initially. When individuals can change their social links depending on past experience with others, many interesting patterns can develop (c.f. partner selection in the iterated prisoner's dilemma), and it seems likely that such link dynamics are necessary to fully model how a working hortator structure might function.
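The threshold effect can be illustrated with a deliberately crude membership model. Everything here is a made-up assumption, not a result from my actual simulations: members gain trade benefits proportional to the member fraction, pay a fixed fee, share a total enforcement burden, and a few randomly chosen agents reconsider their membership each step:

```python
import random

random.seed(2)

N = 1000           # total population
FEE = 0.05         # hypothetical subscription fee per step
BENEFIT = 1.0      # trade benefit scales with the member fraction
ENFORCEMENT = 0.3  # total enforcement burden, shared among members

def run(initial_members, steps=200):
    members = initial_members
    for _ in range(steps):
        frac = members / N
        # Net payoff of membership: preferential trade with other members,
        # minus the fee and this member's share of the punishment burden.
        payoff = BENEFIT * frac - FEE - ENFORCEMENT / max(members, 1)
        # Ten randomly chosen agents reconsider their membership each step.
        for _ in range(10):
            is_member = random.random() < frac
            if payoff > 0 and not is_member and members < N:
                members += 1
            elif payoff < 0 and is_member and members > 0:
                members -= 1
    return members

small_seed = run(20)   # 2% of the population signed up initially
large_seed = run(200)  # 20% signed up initially
print(small_seed, large_seed)
```

With these arbitrary numbers a network seeded with 2% of the population withers away, while one seeded with 20% grows toward saturation: the threshold sits where the benefit curve crosses the cost of membership.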
Hortator networks are an interesting case in the middle between individual distributed norm enforcement and having a government or other dedicated organisation perform it. Under the right conditions hortators can emerge as spontaneous orders. However, we still need to understand their dynamics in order to find out under what conditions they can be effective tools, how they should in that case be organised and paid for, and what their contracts would include.
Note that the values transmitted by Hortators do not have to be good values. It is quite possible to imagine Hortators sustaining stifling values in a society. This is why it is important to model interactions between hortator groups and how they affect multiple norms - just as we want to reduce defection from social values we want to reduce overconformism. Maybe there is a meta benefit in the relative instability of most self-organised schemes to deal with defection.
Or more simply, the frailty of human nature saves us from overly powerful institutions.
Robert Carlson, The Pace and Proliferation of Biological Technologies, Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, Volume 1, Number 3, 2003
A very readable paper arguing that biotechnology skills and equipment are proliferating not just internationally but also percolating into society. As technology advances, older but still powerful devices are sold off (as mainframe computers once were), and new equipment becomes smaller and more capable. Several trends not unlike Moore's law appear to be at work in biotechnology. This makes many of the current calls for tighter regulation and even relinquishment unrealistic - while at the same time the risk grows of misuse or of unexpected uses and users (which might frighten lawmakers even more by being out-of-context problems). Carlson argues that bans would only promote even riskier black markets and suggests that an open and expansive research community might be the best way to deal with crises.
This flexible approach sounds very similar to the one suggested by Arthur Kantrowitz for nanotechnology development in The Weapon of Openness (Foresight Background No. 4, Rev. 0).
The idea that central control can prevent proliferation and misuse of dangerous technology is popular (it is simple to understand, appeals to planners and lawmakers, impresses citizens with the mystique of action and so on). But it may only work when the controllers can restrict some key element (like enriched radioactives for nuclear weapons), the field is relatively well defined with few players, and the control does not entail the huge costs of broad surveillance and administration or the risk of the rampant organisational growth that is so common. Centralized top-down organisations also work best when they can solve well-defined problems, such as dealing with a particular adversary.
It is often unclear what the goal of control should be. There is no consensus on what would constitute misuse of biotechnology. Some consider the whole field a misuse of technology, others have differing ethical opinions on applications or goals - and these opinions coexist both within societies and between them. When biohacking becomes more feasible the biohackers will also have very different goals.
The threat from biohacking is manifold and distributed. The real risks are likely not escaped modified E. coli making cocaine in the gut, bioweapons or glow-in-the-dark aquarium fish, but something completely unexpected not in anybody's contingency plan. The best way of dealing with such threats is also a distributed and manifold approach: a diversity of researchers sharing information, alerting each other about threats and discoveries, trying different approaches and competing to be the first to find solutions. A centralized control regime would prevent much of this web of open and robust protection from forming, leaving the remaining tatters of the research web to work within narrowly defined national or organisational compartments.
I remember inventing the prion for one of my childhood comics. It was, if I remember right, about a space expedition to some alien world. One astronaut ingested some colorful alien chemicals (are there no dull alien chemicals? not when you draw with crayons) and they combined to form a kind of self-replicating green-red geodesic sphere in his bloodstream. The other astronauts somehow drew it out with a syringe. Anyway, I recall that even at that tender age (perhaps around 8) I thought the scientific premise was a bit shaky. I felt vindicated when I heard about prions several years later. Now I get the feeling we should expect colorful alien chemicals everywhere.
Kausik Si, Eric Kandel, Susan Lindquist et al. suggest in two papers in Cell that prions may play a role in memory. An Aplysia protein, ApCPEB, is upregulated by short-term facilitation and necessary for long-term facilitation. It also exhibits prion-like behavior: it can switch (at least in yeast) into a new conformational state that converts other ApCPEB molecules into the same state. The suggested function is that the prion state acts as a synaptic marker, activating mRNA to make long-term changes.
If this fascinating possibility works out and plays a role in memory, Kandel will likely get another Nobel prize. And we will have to care about the conformome besides the genome, proteome, kinome, transcriptome and the others.
There is a lot of elegance in a self-reproducing prion state: it is resistant to noise, can persist even if no unconverted ApCPEB arrives for a while, and can activate mRNA at leisure. It helps explain the problem Upinder Bhalla discussed when he visited us at SANS: how can you get reliability when the number of proteins at a synaptic terminal is so low? By making the effect persist for a long time one can likely get a lot of reliability. However, it requires compartmentalization to retain specificity: if the prions were allowed to run around they would make the whole neuron non-specific. This makes it likely that ApCPEB in the activated state is not allowed to diffuse out of the spines, or that something eats it in the dendrites.
It suggests a lot of fun modeling. First, the basic ApCPEB dynamics, especially how necessary it is to prevent spontaneous prionization - do we need a serotonin trigger? How fast can it be broken down?
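As a first stab at those basic dynamics, here is a toy bistable model (all parameters made up, and the pool of unconverted protein treated as constant): conversion into the prion state is cooperative in the already-converted amount and saturates, while degradation is first order. A sub-threshold fluctuation decays away, while a sufficiently strong trigger - say a serotonin-driven burst of conversion - locks the synapse into the marked state:

```python
def simulate(a0, k=3.0, d=1.0, dt=0.01, steps=5000):
    """Euler integration of da/dt = k*a^2/(1 + a^2) - d*a:
    cooperative, saturating conversion into the prion state a
    versus first-order degradation of it."""
    a = a0
    for _ in range(steps):
        a += dt * (k * a * a / (1.0 + a * a) - d * a)
    return a

low = simulate(0.2)   # sub-threshold seed: the marked state decays away
high = simulate(1.0)  # supra-threshold seed: the synapse stays marked
print(low, high)
```

The unstable fixed point between the two outcomes (at a = (3 - sqrt(5))/2 for these parameters) is exactly the noise resistance the prion mechanism buys: small spontaneous prionization dies out, while deliberate triggering persists indefinitely.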
Then the compartmentalisation mechanisms, as well as the interesting question of what happens if they break down: do we get a prion disease, or some form of memory impairment? A few overloaded synaptic markers in individual cells would likely go unnoticed, but what if the compartmentalisation failed on a large scale? Cells would become non-Hebbian and likely develop a lot of LTP or LTD; that sounds like a fairly distinct pathology that might be observable. One would also expect different genetic influences on the vulnerability to this.