June 25, 2004

Screen them all

I blogged about Bush's mental health initiative at the CNEhealth.org Blog.

My basic argument is that (1) massive screening for broad ranges of illnesses is becoming feasible, (2) it is irresistible from a public health perspective, and (3) such programs are likely to turn into expensive projects benefitting decision-makers, pharmaceutical companies and the organisations involved far more than the citizens - who, unlike those beneficiaries, will be the ones suffering from the integrity and privacy problems.

In a timely piece, Reason took up the issue of how medical regulators oppose self-diagnosis. The idea seems to be that people cannot handle screening and diagnosing their own (or their children's) health themselves, but that broad screening implemented within non-medical organisations such as schools will be beneficial. It really lays the paternalistic and organisation-centric outlook bare: don't trust people, trust organisations and expert groups.

It bears reiterating: as medical sensors become cheap, small and disposable they can be integrated into everyday life. We already have home pregnancy and AIDS tests, and home DNA tests are on the way (mail order tests already exist). Add to this bacterial and viral detectors, and within a few years it is quite feasible to have a microlab within one's mobile phone signalling the presence of pathogens. Screening can be implemented by people themselves, integrated into the normal care of children and family, rather than centralized through enormous programs and databases.

Worried that the disadvantaged will not get access? Spend the money on subsidies or send out test kits. Worried about how people will react to the information? What guarantees do we have that people will react better to centralized testing? Handling a potentially serious diagnosis requires some maturity and is helped by the presence of a caring professional. But home pregnancy tests do not seem to ruin many lives, and they help many more. A home dyslexia test might be even more useful.

It will take a while before an understanding of the enormous potential of ubiquitous medical sensors sinks in among health consumers, doctors, health organisations and politicians alike. It is revolutionary in the sense that it undermines monopolies of diagnosis and suggests a very different approach to maintaining public health. Perhaps one could call it P2P - patient to patient.

Posted by Anders at 02:31 AM | Comments (18)

June 22, 2004

Design Patterns of Nature

Mathematics in Nature: Modelling Patterns in the Natural World by John A. Adam, Princeton University Press, 2003.

As a kid I used my Sinclair ZX81 computer to simulate things: overlapping cratering on the moon, planetary orbits, population ecology and plant growth. Simulation was just a natural outgrowth of my interest in creating worlds. As I grew up I learned about other ways of understanding reality and creating worlds: mathematical models and physical theories.

Why does the refrigerator door resist opening? Because the lower temperature inside causes the air molecules to move more slowly, reducing the pressure and producing a force imbalance. How large is it? Can you get a pressure difference that makes it impossible for even a strong human to open the door?

Adam's book is all about exploring reality through mathematical models. It is about the enjoyment of thinking about the small observations we make - that waves in puddles behave differently from waves in lakes, that trees change shape as they grow, that clouds seem to like particular shapes. We see these things all the time, but actually trying to understand them brings new wonder. The book shows why both ducks and boats leave wakes angled at 33 degrees, composed of waves angled at 35 degrees, regardless of the speed of the duck or boat; why river meander lengths are close to 4.7 times their radius of curvature; why halos appear at 22 and 46 degrees from the sun; and why pavement cracks into polygons. A world of elegant interactions between opposing forces and mathematical facts opens up.

The book starts out by discussing the ideas of mathematical modeling and then moves on to estimation (so-called Fermi problems) and dimensional analysis. Much of mathematical modeling is about braving the problem. Just because we don't know all the details, all the physical laws and how to solve it from the start doesn't mean it is intractable. We can start out with a rough approximation and refine it when we see discrepancies; seeking perfection from the start will just prevent us from ever starting.
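
To see what such a rough approximation looks like in practice, here is the classic Fermi problem of estimating the number of piano tuners in Chicago - my own illustration, not an example from the book, and every number in it is a guess.

    # Classic Fermi estimate: piano tuners in Chicago. Every factor is a rough
    # guess, and that is the point - the errors tend to partially cancel.
    population = 3_000_000
    people_per_household = 2.5
    households_with_piano = 1 / 20      # fraction owning a piano
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_day = 4
    working_days_per_year = 250

    pianos = population / people_per_household * households_with_piano
    tunings_needed = pianos * tunings_per_piano_per_year
    tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

    print(f"Estimated piano tuners: {tunings_needed / tuner_capacity:.0f}")  # ~60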

The book later examines various phenomena. The selection has mainly been about things we can experience ourselves, like the weather, waves, geography, biological pattern formation and minimal surfaces. This helps to anchor the mathematics in everyday reality.

My main complaint is just that the narrative jumps between so many different things that it never has the time to get into them deeply. Often a model is introduced, applied to the original problem, and then applied to other interesting problems. This helps to show how the world hangs together mathematically, but it often leaves the analysis a bit shallow. Of course, going into the detailed electromagnetism of wave refraction in droplets might be too much to ask for, but the book leaves you hungry for more.

The text is apparently the result of evolving course notes, and sometimes it shows. A bit more editing would have handled the occasional repetition and re-presentation of a concept that had already appeared in a previous chapter. This origin might also explain the limited depth of analysis - there is only so much one can tell in a single lecture, and expanding lectures into a book still carries a bit of that limitation with it.

Overall, Mathematics in Nature is a wonderful complement to the plethora of books looking at the world in terms of dynamical systems and their resulting chaos and fractals. This is a book anchored in "classical mathematics" and classical physics, showing that they still are highly relevant.

And what about the refrigerator? Atmospheric pressure is about 1 kg/cm^2 (about 10 N/cm^2) and a fridge door is about one square meter. Hence a total vacuum inside would produce a force greater than 100,000 N. The world's strongest man can lift on the order of a few hundred kilograms (human muscle generates 16-30 N/cm^2, so with a bulging 10x10 cm muscle one can get about 3000 N). That is still about two orders of magnitude too little. So an absolute-zero refrigerator would be unopenable (as long as it doesn't leak).

But my fridge is not that cool. Recall the ideal gas law PV = kT (http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/idegas.html): the volume is about a cubic meter, and at 0 degrees C k is about 371. Using this, I get a pressure difference of 7140 N/m^2 between my 21 degree room and the fridge. Over the one square meter door, 7000 N is equivalent to a 700 kg weight. So how come I can open my fridge without being the Hulk?
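
Here is the same estimate spelled out as a small calculation - a sketch of the reasoning above, where the door area and the fridge interior temperature of roughly 2 degrees C are assumptions I am adding to make the numbers come out.

    # Fridge-door estimate. Door area and fridge temperature are rough
    # assumptions; the point is the order of magnitude.
    ATM = 101325.0       # atmospheric pressure, N/m^2
    DOOR_AREA = 1.0      # assumed door area, m^2
    T_ROOM = 294.0       # 21 C, in kelvin
    T_FRIDGE = 275.0     # roughly 2 C, in kelvin (assumed)

    # Total vacuum inside: the full atmospheric load presses on the door.
    print(f"Vacuum case: {ATM * DOOR_AREA:.0f} N")            # ~100,000 N

    # Sealed air cooled from room temperature: at constant volume and amount
    # of gas, pressure falls in proportion to absolute temperature.
    dp = ATM * (1 - T_FRIDGE / T_ROOM)
    print(f"Pressure difference: {dp:.0f} N/m^2")             # ~6500 N/m^2
    print(f"Force on door: {dp * DOOR_AREA:.0f} N")           # roughly a 650 kg weight

This lands in the same ballpark as the 7140 N/m^2 figure above.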

Posted by Anders at 05:44 PM | Comments (12)

June 19, 2004

Mapping the News

stamen: google news is an interesting site using Flash to visualize how news stories develop. By clicking on an item one can see the volume of news stories about the subject over the last few days. A nicely calm design that appeals to me as a news junkie and infographics addict.

This display is quite similar to something I have been thinking of myself for some time, a way of monitoring the growth and decay of threads in an online forum. But they got there first, and the result is inspirational.

The page also links to the newsmap, a graphically far busier site. Unlike Stamen's map, this one tries to show which news stories are big right now. The visualization is rather similar to the Marketmap. But while the stamen visualization and the marketmap are fairly calm, the newsmap blares information right at you. The large-type headlines attract attention and make the gaze jump around arbitrarily, and the changes in size strain focus. The "objective is to simply demonstrate visually the relationships between data and the unseen patterns in news media", but the main pattern seen is the relative size and the category of news, not what is linked to what.

In that respect the marketmap does much better, by having both horizontal and vertical divisions for the main fields. One can see at a glance that finance and technology are bigger than health care and consumer cyclicals, which fields are growing and shrinking, and who the top gainers and losers are. This is a bit like the stamen plot.

(For some more interesting stock market visualizations and discussion of how to evaluate them, see Guidelines and Metrics for 3D Interactive Representations of Business Data by Richard Karl Brath.)

Overall, a good map of the news should not just show what the hot, emerging stories are but how stories relate to each other. Ideally stories that have ties should be connected or nearby, helping to give an overview not only of what is big but of where it fits in. Maybe this could be done by checking how many keywords are shared between stories, as in the sketch below.
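
A minimal sketch of what I have in mind, relating stories by the overlap of their crude keyword sets; the stopword list, the word-length cutoff and the example headlines are all my own illustrative choices.

    import re

    STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "for", "on", "is"}

    def keywords(text):
        # Crude keyword extraction: lowercase words, minus stopwords and short words.
        words = re.findall(r"[a-z]+", text.lower())
        return {w for w in words if w not in STOPWORDS and len(w) > 3}

    def overlap(story_a, story_b):
        # Jaccard overlap of the two keyword sets, between 0 and 1.
        ka, kb = keywords(story_a), keywords(story_b)
        return len(ka & kb) / len(ka | kb) if ka or kb else 0.0

    # Stories with a high enough overlap would be drawn near each other on the map.
    print(overlap("EU ministers debate software patents",
                  "Software patent directive stalls in EU parliament"))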

One of the best things about google news is that it allows one to experience news from many angles through different news agencies. It doesn't matter that most journalists just write their own version of the Reuters statement; it is the small differences that make it fun to compare Xinhua, Al Jazeera, the San Jose Mercury News and Reason. That tells one much about the world and the different outlooks. The newsmap allows one to switch between news from different areas, but it is still not very transparent. Maybe the next step ought to be to let one see the similarities between different news stories and the differences in how they are reported, perhaps as some kind of 2D clustering map - an interesting visualization challenge, sketched below.
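
One possible way to build such a map (my own sketch, with made-up article snippets): represent each article as a TF-IDF vector and project the vectors down to two dimensions, so that similar reports land close together and the spread within a cluster hints at differences in coverage.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    articles = [
        "Reuters wire copy as rewritten by outlet A about the summit",
        "The same summit covered with a different slant by outlet B",
        "An unrelated story about football results",
    ]

    # TF-IDF vectors, then a 2D projection that keeps most of the variation.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
    coords = TruncatedSVD(n_components=2).fit_transform(vectors)

    # Each article gets an (x, y) position on the clustering map.
    for article, (x, y) in zip(articles, coords):
        print(f"({x:.2f}, {y:.2f})  {article[:40]}")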

Posted by Anders at 08:35 PM | Comments (10)

June 10, 2004

Smurfy Nanoethics

I attended a conference on nanoethics arranged by the Swedish Research Council. On the whole it was an interesting and constructive day, but the Smurfese problem bothered me.

Smurfese, the language of the Smurfs (where some words are replaced by derivatives of 'smurf'), is usually understandable thanks to word context. But what about the language of the Nanos, where the word 'nanotechnology' can mean everything from nuclear physics to microelectronics to magic?

When originally coined by K. Eric Drexler in 1986 it referred to molecular nanotechnology, systems on or under the nanometer scale. Later, as funding appeared and researchers flocked under the banner of nanotechnology, it came to mean far larger systems. A bit cynically, one could say that any kind of material technology, chemistry or physics can be called nanotechnology if at least some part can be measured in nanometers - even if it is hundreds of them. And if funding is made more likely by adding the prefix nano- to one's research, the temptation to do so is great. There is also a positive side: by seeing many diverse fields as converging into nanoscience, great synergies and interdisciplinary adventures can be started. But the N-word still gets diluted.

Then the opponents joined the game, doing their best to tar nanotech with the same brush as biotech. Perhaps the most extreme example is the ETC group, which deliberately calls it "atomical modification" to give the impression that nanotechnology, biotechnology and nuclear technology are all the same (and of course, bad). In response to fears of such negative word slips, the glass manufacturer Pilkington decided to avoid mentioning the N-word in the marketing of its self-cleaning windows (based on free-radical-producing 15 nm titanium dioxide particles).

But for the nanoethical discussion maybe this semantic confusion does not matter much - maybe because there is no need for the word "nanoethics" in the first place. At the same time the ethical discussion we are getting into is of extreme importance: a smurfy paradox.

Bioethics is in many respects a fruitful philosophical (and rhetorical) research area since biotechnology and medicine pose many new issues and possibilities not previously found in ethical debates (the definition of death, modification of life, changes to human nature etc). These new issues stand on their own and make bioethics a somewhat independent field (it is still part of ethics, of course). One could imagine a field of 'autoethics' that studies the ethics of cars and traffic, but that field would likely just be an application of already studied ethical principles. In the same way nanoethics might not be a real field that can stand on its own, but rather the application of ethics to nanotechnology. Perhaps an important application, but hardly an independent field.

This claim would be challenged if there were any truly different issues discussed in respect to nanoethics. At least at this conference (as well as at the Trieste conference I attended last year) there were no ethical issues unique to nanotechnology. Among those discussed were:

  1. Risks
  2. Possible health and environmental effects of nanoparticles
  3. The use of nanodevices for behavioral control
  4. Nanoenhanced genetic testing
  5. Information and consent
  6. Public trust and transparency
  7. Privacy
  8. Preventing overcommercialization
  9. Costs
  10. Equity and fairness
  11. Military use
  12. Human enhancement

As professor Gisela Dahlquist from Umeå University said, the problem is not the technology itself, it is the individual applications. I agree, but before I go into them a quick note about core issues:

Biotechnology has, deep down, a core issue: is it OK to change life itself? If one answers this question in the negative, all applications become immoral. That precludes the need for further debate about applications, except in order to convince the adherents of the other view that other ethical or practical concerns make each possible application immoral too. At first, nanotechnology does not seem to have this basic issue. Manipulating atoms does not threaten the ethical order of nature in the same fundamental way as manipulating life; nobody considers the natural arrangement of molecules as having a value in itself. Hence most of the debate will be about applications. While I think this view is correct, I think there is a kind of "shadow core issue". This issue is not nanotechnology itself, but rather the use of nature (i.e. anything) for human aims. There is a strand of thought holding that any instrumental use of nature is inherently wrong (or at least questionable), and to this anti-instrumentalism nanotechnology is of course inherently wrong. This strand of thought is seldom expressed clearly, both because it is itself rather questionable (humans and other animals all use the world instrumentally and cannot survive without doing so) and because it is often eclectically, inconsistently or opportunistically applied. But that does not make it less powerful, and a public ethical discourse that does not deal with the issue of using nature will be weakened. In my opinion we should be clear that we want to use nature, for what aims and with what considerations. Bringing up such perspectives will help avoid endless debates about side issues that are actually just rationalisations of deeper values.

Of these categories, 1-2 deal with risks. Nanoparticle toxicology seems to be turning into a popular field (and was well on its way to becoming one long before the greens started to sound the alarm; major conferences were held before 1998). Here the problem seems eminently solvable through technological fixes and design practices: since nanotechnology deals with designing nanoscale structures, dangerous nanostructures can be designed away (or be contained, made degradable etc) with reasonably advanced technology - or just avoided. A good thing we looked into it well ahead, but no showstopper.

In general, there are known risks and problems, foreseeable risks and unknowable risks. But even the latter can be somewhat constrained and estimated using known characteristics of the field. We cannot say for certain what dangerous chemical explosives may be invented in the future, but given what we know of the energy of chemical bonds we can put an upper limit on how powerful they could be. The fears surrounding the Brookhaven heavy ion collider could similarly be dispelled. In the same way other forms of theoretical applied science can be used to constrain risks, helping us to focus on the important issues. But that requires an openness to theoretical applied science, something which is often absent: it is seen as science fiction, idle speculation or a diversion from the "real" applied science (and from theoretical science, which at least doesn't get into troublesome debates).
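
To make the bond-energy argument concrete, here is a back-of-the-envelope bound of my own; the per-atom energy and the average atomic mass are deliberately generous assumptions chosen for illustration.

    # Upper bound on the energy a chemical explosive could release per kilogram.
    EV = 1.602e-19          # joules per electronvolt
    AVOGADRO = 6.022e23     # atoms per mole

    bond_energy_ev = 5.0    # generous energy released per atom, in eV (assumed)
    avg_atomic_mass = 10.0  # grams per mole, assuming light atoms

    atoms_per_kg = 1000.0 / avg_atomic_mass * AVOGADRO
    upper_bound_j_per_kg = atoms_per_kg * bond_energy_ev * EV

    print(f"Upper bound: {upper_bound_j_per_kg / 1e6:.0f} MJ/kg")   # ~50 MJ/kg
    # TNT releases about 4.6 MJ/kg, so chemistry can only buy a factor of ~10.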

Current nanotechnology initiatives seldom refer to molecular nanotechnology and the origins of the field during the '80s and early '90s (as I have argued elsewhere) because they want to disown the more "science fiction" or "hyped" aspects of nanotechnology in order to appear as a legitimate scientific field and avoid incurring the wrath of luddites or a fearful public. The fact that the strategy does not work well from a PR standpoint (as recent nanodisaster stories show) is of little concern. Of course, sometimes there is just plain ignorance. Several participants (including some who ought to have) had never heard of Eric Drexler, which shows that the field has a weak sense of history.

Several good points were made at the conference about the problem of promoting a field as important (and in need of funding and recognition) without hyping it. Professor Alfred Nordmann in particular did a good job. He sketched the problems with the "Caricature US approach" (benefits are seen as revolutionary, risks as ordinary, traditional norms and values hinder the revolutionary promise, so the ethicists had better get on board) and the "Caricature EU approach" (benefits are likely to be gradual and non-revolutionary, but risks can be radical, novel and serious; ethics represents traditional and public concerns, so getting ethicists involved early will control the risks). Instead of these, Nordmann argued for "normalizing nanotechnology" - show how it fits in with the big picture and the history of technology, contextualize it and debunk the stupid stuff. It may or may not be revolutionary or dangerous, but that is something to be discovered rather than assumed by policy. Getting rid of mythical animals at the heart of science policy and ethics discourse is a good thing. He then supported more monitoring of nano research and policy from a science studies perspective, more forums for allowing the public and other stakeholders to discuss, and especially "vision assessment", where people explicate their implicit promises, views on nature and human nature, and so on. Rather than speak of nanotechnology we need to discuss particular nanotechnologies and systems, and what we seek to use them for and why.

To some extent his view was that this would happen no matter what we do, as a natural result of history. We have normalized electricity and may one day normalize radioactivity. But of course, just letting it happen may take a long time and lead to great losses.

Concerns 3-4 deal with medical integrity and autonomy, and 5-7 with the same in a social setting. Again, the issues brought up are hardly new. We already have potentially motivation-controlling brain implants and drugs, genetic testing can be done, and patient (or citizen) integrity can be threatened in a multitude of ways. There is no need for nano-integrity or nano-autonomy; plain old-fashioned integrity and autonomy will do fine. In medicine we have the usual ethical boards, and there is no reason to distinguish a nanosurgical procedure from a microsurgical one from an ethical point of view.

An interesting issue brought up was invisibility. Again, hardly a new concern, but nanotechnology makes it visible (pun intended, sorry). What to do about technologies that can affect our lives without us knowing about them? Even if they are not malicious, the feeling of lack of control as invisible systems affect our lives in ways we cannot follow is disruptive enough. One way of dealing with it may be good design practices. All systems involving active nanodevices should be designed to signal their presence, ideally giving the user a chance to follow their activity. Just a lit LED telling you that the nanocleaners are busy might be reassuring. One cannot stress enough how important this is for risk perception. We overestimate and dwell on risks we feel we cannot control, making many afraid of flying while cheerfully driving down the highway in a metal casing they think they control. An on/off switch tells the user that he is trusted to make decisions. This helps to create trust between the manufacturer and the consumer, and makes the discussion of risks less polarised.

Of course, there are applications that do not seek trust, like spyware and surveillance bugs. Here design practices will not be followed, but that does not make the design practices bad. Dealing with invisible nasties is a technical, legal and ethical issue - interesting, but not something unique to nanotechnology.

The 5-7 concerns are all good targets for "normalization" - get people and groups talking, and let's find compromises, practices, laws and norms that we can agree on or at least accept. Professor Göran Hermerén had several fruitful suggestions for multi-layered approaches beyond just binary legislation/no-legislation choices - technologies can be controlled through social mores, advisories, oversight groups with different jurisdictions, formalized practices and laws. Each has its own costs and benefits. The problem is how to get people together in a good way, and how to get the discussions to percolate through the different strata of society.

Areas 8-10 were economic. The first is a sentiment sometimes encountered in the stem-cell debate (at least the Swedish one): that money mustn't be the goal of research. While sounding very noble, this has a double problem: it consigns all science to science for its own sake done by selfless scientists, and it opposes application of the results. Pharmaceutical companies make money from the suffering of people, but they are highly desirable since they make that money by ending the suffering, and without them medical research would be far weaker. Nanotechnology is not a value in itself (except for some of us scientists), and if we want any benefits from it we need to have it commercialized. Seeing commercialization as a problem at this early stage may preclude many good uses. What should be resisted is stupid, monopolistic, biased or non-transparent commercialization, but usually the best way of doing that is to ensure that smart, diverse and transparent efforts exist.

The equity concern to some extent represents the usual social liberal and post-Marxist discourse: new technologies should not be developed unless they can benefit everyone or at least do not increase income gulfs. I do not feel the need to get into this discussion from my own libertarian perspective, since the discussion applies to any general technology.

One thing worth noting is the worry from some participants that the benefits of nanotechnology will not be cheap and accessible. While this may sound absurd to adherents of the "abundance through MNT" view, it should be noted that even if the technology could physically produce cheap abundance, rigid or excessive regulations could stifle it. After all, most pharmaceuticals could be produced for a fraction of their current cost if the additional costs of approval, regulation and safety were not added (compare also Freeman Dyson's discussion of the nuclear power lock-in in Imagined Worlds). If we do not get good nanoregulations we might not get good nanotechnology.

Dr Jürgen Altmann is a very productive writer and debater on the military uses of nanotechnology, and could be said to represent concern 11 (and to some extent 12). His presentation covered a vast range of potential military and security applications of nanotechnology. Unfortunately, quite a bit of it was more tuned towards triggering the "yuck" factor (remote-controlled mice?! scorpion robots?!) and Americophobia (look at how much they are spending!) than the core ethical and political problems. Some of the problem here might be my fundamental disagreement with him about human enhancement (where he seeks a 10-year moratorium on non-medical enhancements), but I think this presentation would be far more effective at a red/green meeting than at an academic forum. This makes it easy for some to disregard his concerns as mere alarmism.

I think he did have a good point about the destabilizing potential of many systems requiring even minimal nanotechnology, such as autonomous kill vehicles, infiltrative microbots and programmable bioweapons. We shouldn't worry about them because they are nanotech, but because they are autonomous systems that could erode treaties, cause incidents, promote arms races and create tempting first-strike scenarios. Unfortunately his suggestions for dealing with these were rather standard: international bans, moratoria and giving the UN a greater role. Given the huge dual-use potential, the diffuse area between clearly legitimate and clearly illegitimate uses, the "small science" research needed (unlike the "big science" approach needed for nuclear weapons) and the usual problems with the UN, these suggestions would at best be partial solutions. That may be better than nothing, but I think we can do better.

First, we should recognize that top-down solutions like these work only for certain kinds of problems: well-defined problems where most participants have parallel interests and information can be gathered centrally. Bottom-up solutions can help with the other type: ill-defined problems with many different interests and local information. A world with the weapons discussed by Altmann is also a world where similar equipment can be used by people to ensure safety and stability. Distributed monitoring of bioweapons (e.g. using cheap, portable and networked pathogen detectors), "immune systems" for dealing with infiltrating software and hardware (e.g. a building with its own microbots that look for intruders and otherwise do tasks like pest control or cleaning), weapons with mandated tracking and recording (mandated by civil society to ensure accountability and eventual transparency from law enforcement and the military) and so on are quite useful technofixes. Similarly one can invent a multitude of local and non-centralized institutions to keep track of other threatening uses of nanoweaponry. None of these can individually solve the entire problem, but together they can be very powerful.

Unfortunately, this kind of distributed security thinking is today still rather alien to most decision-makers (of course, since they are centralized) and thinkers (since they have grown up in a centralized world; compare with the discussion in Resnick's Turtles, Termites, and Traffic Jams). Probably the only ones who have grown used to it are the network-centric defense people in the military, the very ones who are promoting many of the things Altmann fears.

The final issue was human enhancement. It was not brought up by the professional ethicists but rather implicitly: by the technologists showing new brain-computer interfaces, by Altmann through his criticisms of it, and by transhumanists like me who consider it a beneficial use of nanotechnology. I think it was to some extent avoided in the discussion because it is so close to the embarrassing "science fiction" side of nanotechnology. At the same time it is probably the most ethically complex and interesting issue linked to nanotechnology. But it also broadens the discussion far beyond mere nanomachinery into the deep issues again, rather than the narrow debate about particular applications and aims. Professor Hermerén stated that it does not per se introduce any need for new ethics - we can deal with it using utilitarianism, rights or what have you (I agree) - but seemed to prefer to discuss what values we seek to impose on technology and in what order.

One area that was often mentioned was expanding the use of ethical pre-approval and social impact statements. While this sounds very noble, the idea is rather problematic: what is a good social impact? If a technology strengthens individualistic tendencies in society, is that a reason to avoid it or to support it? Different ideologies and groups would come to different answers. The answers I got about this issue were somewhat vague, but I got the impression that people in general thought that having many different groups represented would make this analysis possible. But that only leads to the question of which groups to include. Is, for example, transhumanism a valid position in the debate, in need of being heard in this kind of analysis? Determining who gets represented where gives gatekeepers tremendous power. And if a position - regardless of how small or weird - is not heard or made part of the analysis, does that imply that the position ought to accept the mainstream consensus? This is messy enough for the political philosophy of law, but when dealing with what are essentially value statements about possible societies, it seems that by accepting the values of liberal democracy one must accept that certain groups are within their rights to pursue, at least for themselves, social outcomes that diverge from those desired by the majority. This makes the social impact statement either weak (merely an advisory forethought), oppressive (forcing minority views to conform) or just an extra delay. Personally I think advisory forethoughts are quite useful, but one should ensure that they are never seen as normative.

Nanoethics might be needed simply to show people that researchers and companies do indeed care. But to really become something more than greenwashing or reframing the usual ethical debates in yet another form, the discussions need to dare to go into the more murky reaches of the aims of technology, views of human nature and different visions of humanity's future. It would broaden the debate far beyond nanotechnology, but it would also make the nano-smurfese irrelevant.

Posted by Anders at 01:57 PM | Comments (9)

June 03, 2004

Food Safety Through Cloning

Cloned cows get sane future: Genetic technique may yield BSE-proof calves. (based on a paper by Kuroiwa et al. in Nature Genetics)

A nice demonstration of how cloning can be used to propagate a successful genetic modification, in this case cows that do not produce the prion protein and hence are resistant to BSE.

As the comment above mentions, it is unlikely there is a big market for this guaranteed BSE-free beef at present, given people's worries about cloning and genetic modification. In the case of BSE, the threat it removes may also be so small that it is simply not worth it. But imagine Salmonella-resistant poultry: in the US alone, 1.4 million people are infected and 600 die each year. Even if the modification carried some risk, it is hard to imagine it being worse than the present state.

But the "natural is good" ideology will likely keep food poisoning on the menu. Especially people eating organic foods are susceptible to these bacterial infections. Animals are not given antibiotics and manure is used as fertilizer. GMOs are probably the most versatile and through way of not only making organic food safer but to improve its efficiency (the lower yields per hectare of organic farming requires more farmland, offsetting the ecological benefits from the farming itself). But since organic food has been defined in terms of avoiding modern chemical and biological enhancements rather than aiming at enhancing ecosystem health through any means, it is unlikely to happen. Once again a good aim (improving biodiversity and sustainability) becomes hi-jacked by an anti-modern perspective, essentially locking out the possibility of pro-ecological GMOs and chemicals.

The first entry in this blog was about prions. The cow experiments are interesting in this respect: if the Si, Kandel and Lindquist hypothesis is right, these cows ought to have learning disabilities. They provide a powerful test of the theory, despite being intended for BSE-free pharmaceutical production rather than research. Who knows what neuroscientific discoveries will emerge from our dinner plates?

Posted by Anders at 12:46 AM | Comments (18)