May 27, 2004

Do Cyborgs Dream of Digital Pigs?

Some comments on Ghost in the Shell 2: Man-Machine Interface by Masamune Shirow.

"Man-Machine Interface" is self-describing: the manga is to a large degree an orgy of information graphics and complex cyberspace jargon. Shirow's ordinarily information-dense style goes into overdrive with the possibilities of computer-generated 3D objects and visual effects. Add to this the usual high-context corporate political intrigue and Shinto-style philosophical speculation, and the result is at times nearly impenetrable. Which maybe doesn't matter; this is a work that can be read or watched simply for its details.

The basic story involves the protagonist (an instance of the AI-cyborg hybrid Motoko that emerges at the end of the original manga) working as an investigator for a multinational corporation, trying to protect it from terrorists and infiltrators. But of course there is more to it than animal liberation when pigs carrying cloned human organs get slaughtered, and soon her investigations branch off into the labyrinthine.

Shirow loves design and detail, and the worldbuilding is impressive. While the television series Stand Alone Complex gives a much better sense of how society might work, here the story has moved off to a less defined setting in the middle of the Pacific. But many details remain, like the circular infoscreens, the fascination with crustacean robots and tricks like hiding one of one's bodies inside a larger body.

Of course, the border between technobabble and technological speculation is diffuse. Quite often the cyberspace assumptions are highly suspect - this is a world where everybody seems to have bionic interfaces in their brains, yet these can be cracked with a speed and ease that reminds me of script kiddies breaking open unpatched Windows machines. While people might be sloppy about the security of their computers, I doubt they would be as careless (even when clueless!) with their minds and lives. Just as in Gibson's cyberspace novels, much of the terminology is really an evocative expression of what is going on dramaturgically (wild fighting, barriers breaking down, stress or diversion).

There is poetry in the information overlays. They turn the characters into information saints, surrounded by AI-putti and billowing messages like baroque religious paintings. Reality is obscured by overlaid information. In many ways this is the opposite of Edward Tufte's clean and informative infographics. This is info-baroque in strong colors, obviously requiring posthuman attention systems to be manageable.

Perhaps the most interesting and technologically plausible aspect is the penguin-like AI agents used by Motoko (replacing the fuchikomas/tachikomas as the resident cute AI). She is constantly surrounded by a myriad of agents implementing her wishes, from stress analysis of a chain to complex viral defense strategies to finding clothes. Are they external to Motoko, parts of her exoself, or even parts of her core mentality? They appear to be not quite part of her (she still 'speaks' to them) but still so close that they collaborate with a minimum of fuss (perhaps due to long training and adaptation on both sides). This is how agent support should be. Implementing it this well for real is going to be another matter.

Identity becomes fluid. In the original manga the protagonist mainly kept a single body, just interfacing through other bodies. Here there is no clear central body: she keeps decoys and extra bodies stashed across the world, and while sitting in her office she is also organising things in cyberspace and jumping around buildings using another body. There is a wonderful sense of her being physically unbound. The only constraints that bind her are limits to knowledge and information.

As a transhumanist I of course cheer for the vision of radical human-AI symbiosis envisioned here, even when phrased in Shirow's homebrew Shinto-quantum metaphysics (as mystical transhumanism goes, it is a fun complement to David Zindell's ideas in the Neverness books). But in the end this manga does not really lead anywhere: a momentous transformation has happened, but most characters are either unknowing or passive observers. The big struggle turns out to be irrelevant, and the core issue - how to handle emerging truly artificial life - is still placed in the future. Of course, that hints at the possibility of a sequel.

Posted by Anders at 12:02 AM | Comments (12)

May 21, 2004

Bacterial Reverse Bioengineering

Science: A New Tack on Herbicide Resistance, Discovery and Directed Evolution of a Glyphosate Tolerance Gene by Castle et al. (Science, Vol 304, Issue 5674, 1151-1154, subscription required).

A new and elegant way of creating herbicide-resistant crops. Monsanto has made plenty of money on crops that resist the herbicide glyphosate, but now the patent on the herbicide has expired. So, how to reverse-engineer the resistance of the plants (which is still patented)?

Nature can find a way. Castle et al. started by looking for common bacteria that could detoxify glyphosate. After finding one (there is almost always some bacterium somewhere that can perform any desired chemical operation) they made its detoxification enzyme more powerful by directed evolution. Shuffle the enzyme gene, add it back to the bacteria, expose them to the herbicide, pick the best strains and repeat. After 11 rounds of selection they had an enzyme 10,000 times more efficient. This gene was then added to corn and worked well, with no toxic byproducts from the detoxification and no adverse effects on plant health and reproduction.
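The shuffle-select-repeat loop is essentially an evolutionary algorithm, and a toy version is easy to sketch in silico. This is not Castle et al.'s actual protocol - the "genes", mutation model and fitness function below are invented purely for illustration:

```python
import random

def mutate(gene, rate=0.1):
    # Crude stand-in for gene shuffling: jitter each parameter.
    return [g + random.gauss(0, rate) for g in gene]

def fitness(gene):
    # Hypothetical "enzyme activity": peaks when every parameter is 1.0.
    return -sum((g - 1.0) ** 2 for g in gene)

def directed_evolution(rounds=11, pop_size=50, gene_len=5):
    # Start from a population of identical, unoptimised genes.
    population = [[0.0] * gene_len for _ in range(pop_size)]
    for _ in range(rounds):
        # Diversify: each strain spawns several mutated offspring.
        offspring = [mutate(g) for g in population for _ in range(4)]
        # Select: keep only the strains that best survive the "herbicide".
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return population[0]

best = directed_evolution()
```

The same skeleton - diversify, apply selection pressure, keep the winners, repeat - underlies both the wet-lab version and its software cousins; only the mutation and assay steps differ.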

While this will hardly make the greens any happier (it is still a GMO, even if it means Monsanto will get lower profit margins) it is a wonderful example of how to use the power of evolution to improve a product. In fact, it demonstrates a promising way of having bacteria do reverse engineering of patented genes, which might be a way of getting out from the invention-restricting effects of gene patents without removing them as incentives for creativity. Unless this method ends up heavily patented and restricted, of course.

Posted by Anders at 09:04 PM | Comments (18)

Isn't it Obvious?

New Scientist: Europe revokes controversial gene patent

The European Patent Office has overturned Myriad Genetics' patent on the breast cancer genes BRCA1 and BRCA2. This enables European breast cancer tests that do not pay licence fees to Myriad, and mirrors the legal troubles the company has in Canada.

Also this week, the European Council has approved software patents, something that has been fiercely opposed by many software developers.

(This is a rambling railyard of thoughts; your mileage may vary on their philosophical, legal and intellectual quality)

What is at stake here? If we ignore paranoia about companies forcing us to licence our own metabolisms, and general arguments about the unpatentability of life or code, the core problem seems to be obviousness.

The patent system is intended to ensure that creators are rewarded for their work and publish it, in turn leading to greater creativity and competition. However, today patents can often be used to block competitors rather than to turn a profit through their own value. This is especially true for very broad patents, or patents on obvious, efficient ways of achieving something.

Detecting breast cancer by looking at a marker gene is obvious once we become aware that such a gene exists. There are many examples of similarly obvious software patents, such as using XOR to draw cursors (I invented the XOR method independently as a kid; I even used it in a Sinclair ZX-81 drawing program I sold to a friend: had I lived in the US at the time I would have been liable for patent infringement). The problem with such basic and obvious patents is that if they are enforced they either require programmers to use non-obvious or inefficient methods, or to pay licencing fees. In both cases the overhead of programming increases enormously. Inefficient, non-obvious software is the bane of maintenance and further development, and even if the licencing fee is reasonable it raises the cost of a piece of software from zero to a finite value - what would have been a contribution to the cultural commons of humanity now becomes limited. Even worse, given the complexity and size of the patent corpus, determining whether software infringes requires sophisticated legal help, which is even more expensive and requires lawyers to look over the shoulders of programmers and biologists. Planning a software project or experiment becomes both a technical and a legal challenge, where each side gets in the way of the other.
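To see just how obvious the XOR cursor trick is: since XOR is its own inverse, drawing the cursor a second time restores the background exactly, so the cursor can move without saving the pixels beneath it. A minimal sketch (the bitmap grid and cursor shape are invented for illustration, and this is of course not any particular patented implementation):

```python
def xor_blit(screen, cursor, x, y):
    """XOR a cursor bitmap onto the screen at (x, y)."""
    for dy, row in enumerate(cursor):
        for dx, bit in enumerate(row):
            screen[y + dy][x + dx] ^= bit

# An 8x8 one-bit "screen" and a 2x2 block cursor.
screen = [[0] * 8 for _ in range(8)]
cursor = [[1, 1], [1, 1]]

xor_blit(screen, cursor, 2, 2)   # first blit: cursor appears
assert screen[2][2] == 1
xor_blit(screen, cursor, 2, 2)   # second blit: XOR cancels, background restored
assert screen[2][2] == 0
```

The whole technique fits in a few lines, which is exactly the point: any competent programmer confronted with the problem is likely to rediscover it.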

One can argue that software should be unpatentable, but one still has to consider how to reward creators. A situation where most commercial code is closed and deliberately obfuscated might not be the best. It would slow development and reduce maintainability, although reverse engineering would thrive.

"Info-socialists" might suggest that the government uses tax money to reward useful creations: creators register what they create, making it available to everyone and getting paid for its value. As a libertarian I have many philosophical problems with this idea, and there are also many serious practical problems. The worst problem is how to measure how much to reward a creator: there is no market pricing, usage measurements can both be manipulated and be irrelevant (an emergency evacuation system might be extremely valuable even when it is almost never used) and centrally decided rewards run into the usual information problem, corruption and biases of governments.

Shorter patents for software and other fast-moving areas might reduce some of the problems, but it only reduces them and increases the incentive to sue the hell out of everyone before the patent expires, as well as to rush patents through the overworked sieves of patent offices.

If we need patents, then we must make sure they only cover non-obvious solutions. And that requires a reform of the patent offices and of how they handle applications. It might be just as dramatic as abolishing them in the first place, because the changes required are profound. Somehow prior art spread across the scientific literature and the infosphere must be detected; even a skilled patent examiner is unlikely to manage this alone. Clearly we need to change the information flow so that prior art, or knowledge of whether a suggested approach is obvious, reaches the office very early in the process.

One could do this by using broader panels of reviewers (anonymous relative to the party seeking the patent), perhaps borrowing from the scientific peer review process. Setting up such a panel is nontrivial, since it must not leak information while likely involving people working in closely related areas. This might be doable through non-disclosure agreements and funding from the patent office, the patent-seeking party and perhaps stakeholders in the public commons (governments, NGOs etc). Perhaps a good place for engineers to earn some extra money, keep an eye on what is happening in their field and do a civic-technological duty.

A clever way of making patent offices more efficient and reliable would help a lot, both by making shorter length patents feasible and removing the worst patents. But it is not obvious how to do it, or how to get the political impetus to implement any change.

A deeper and more philosophical problem is defining obvious. Given that humans have different skills, outlooks and intelligence, what is blindingly clear to one is a stroke of genius to another. Legally, the solution in most jurisdictions seems to have been to establish a praxis. This can create enormous confusion internationally, as different jurisdictions reach different conclusions. It is unlikely that this will change. But what would make a reasonable definition of "too obvious to be patented"?

Maybe one approach would be contingency. If something is unlikely to be discovered/invented/constructed the same way without any communication or other information transfer between two people, then it is not obvious. It is contingent on many different factors, and involves some choices that may be arbitrary, aesthetic or due to complex balancing. This is close to how artistic intellectual property is handled.

If contingency is the basis of patentability, then fundamental discoveries will not be rewarded as much as implementations of them. I'm not certain this is a good thing, since the natural reaction would likely be to keep the basic discovery secret or unpublished and spin off a number of contingent patents from it.

Maybe core discoveries are an area where it would be beneficial to use public commons funding. If part of the licencing income from patents based on a discovery were channeled to the discovering party, useful discoveries (those producing lots of wealth through their applications) would be rewarded while useless ones would not. Here we get away from the info-socialist approach of centrally decided rewards (often unjust, and easily subject to corruption and public choice problems), although the solution of course still requires centralized patent licence taxation systems (ouch!) and could still get free software development into trouble. More work needed on that idea too.

The interesting thing about the intellectual property debate (once one gets away from the "piracy is theft"/"software should be free as in beer" shouting match) is that it is so non-obvious. It is a meeting ground between technology, philosophy, law and politics. We need to respect that. Simple solutions are unlikely to be workable since we seek to achieve a multitude of goals, many of which contradict each other and are ill-defined, and "we" actually are a large number of stakeholders from different cultures, economies and legal systems. At the same time whatever practices we develop need to be formulated in simple ways so that they can be explained to everyone: intellectual property is by its nature a cultural phenomenon, and as long as it is isolated to a small subgroup the theory will not mirror the practice.

So what we need to look for is an obvious (at least in retrospect) basic idea to construct a non-obvious praxis on.

Posted by Anders at 11:41 AM | Comments (9)

May 18, 2004

What We Can Learn from the War on Cancer

The war on cancer was started by Nixon in 1971. It is still ongoing, but the NIH is optimistic that we will see a victory of sorts around 2015. The War on Drugs was started in 1981 by Reagan. It is still ongoing, with doubtful success. The War on Terror was started in 2001, and it is just as broad, just as unbounded as the others. If we assume the NIH is actually right, and that a 45-year span is normal for a “War on XXX”, then we might see the last detainees at Guantanamo released in 2055 or so.

What can we learn from the successes and failures of these three "wars"?

The problem with declaring this kind of war against something is that it is not really a war. Wars usually have fairly clear objectives and well-defined foes. But cancer turned out to be a multitude of causes leading to runaway cell growth, drugs are produced, distributed and used in an adaptable black economy, and terrorism is caused by many complex factors among different people. In the first case the “foe” turned out to be the regulatory weaknesses of our genome, and in the third the human foes are constantly generated by other conditions. This is why the traditional centralist way of waging a war does not work. A huge “Manhattan project” is unlikely to handle ill-defined, shifting problems requiring a multitude of solution attempts. Building the atomic bomb - the original Manhattan project - was a well-defined finite problem; finding a cure for cancer was not. A centralist approach has a narrow range of approved solutions that get enormous resources, but it cannot explore a wide array of possibilities (even if it officially wants to, there is usually only so much room for dissenting voices and radical ideas within the same organisation). Usually the result is that old methods are re-used, and if they do not work it is believed that they will work once more resources and effort are put into them. This is very similar to the War on Terror: traditional military and intelligence methods have shown themselves adequate for solving all traditional military and intelligence problems. Unfortunately, they do not seem to be very effective against non-traditional problems like widespread suicide bombings, networked foes within third-party countries or the troublesome feedback loops of ethnicity, media and inter-cultural politics.

I personally think the NIH is right and that cancer will become just a chronic disease rather than a killer within the foreseeable future. It is a problem that can be solved thanks to the broad reach of biomedicine. Cancer research has advanced along a broad front, ranging from studies of the causes to palliative medicine in hospices. This has enabled a huge range of solutions to be explored, and thanks to the cumulative nature of science the experience has been passed on to the benefit of further experiments. In 1971 the biochemistry of DNA and computing were fields unrelated to cancer research; today genomics is a key weapon. Nanotechnology was not on the horizon even ten years ago; today many see it as another key weapon. But these weapons in the war against cancer were not discovered thanks to the huge effort aimed at the goal itself. They emerged organically from other fields.

In the same way the other broad “wars” are unlikely to be winnable through a directed effort at the apparent problem. Drug use exists in the animal world, and addiction is deep down an issue of mislearning and lack of control in our motivation systems. It is likely fundamentally related to other kinds of addiction, from overeating to religious cults. It will not go away because of a cut drug supply, since people will invent new drugs and new ways of supplying them. Maybe the mistake is to assume that it is drugs that are the problem. Maybe a “War against Addiction” would be more useful: finding ways of preventing life-destroying addictions instead of going after the symptom (drugs and the drug trade). This is like going after the gene-network causes of cancer. It is likely a very complex problem requiring input from other fields, but it probably has greater chances of success than destroying coca farming. The neuropsychology of motivation is not actively and intelligently fighting back, trying to retain its grasp on our brains.

The war on terror is similarly a war on a symptom. Terrorism is to a large extent caused by hopelessness, poverty, lack of peaceful ways to change society, deep resentment, and institutions that have formed to channel these dark sentiments into action. Even wiping out these spontaneously forming institutions would not solve the problem, since they would re-form as long as the other social fuels were present. It is like resecting metastasising tumours.

Attempts to detect and prevent terrorist acts are like early detection of cancer: useful and life-saving, but they do not remove the underlying cause. One can do preventative medicine too, by supporting the formation of open societies with possibilities for advancement in poor and oppressed regions. Again, this is important and cost-effective, but it will only reduce the incidence. Even if the entire world were open, democratic and wealthy, there would still be people carrying grudges and using available means to strike out. And given the exponential growth of technology, the destructive power of individuals is getting very worrying.

So what would be needed to win the war on terror? Just like the war on cancer and the war on drugs, the solution is likely something entirely out of the blue. And it will not be a single “cure for cancer” but a large toolkit of methods. Blunt force is probably in there, as are preventative methods. But the key weapons will be different. What they are we can’t tell right now, and this is why centralist attempts to win the war will fail. But a broad research front trying many approaches, rather than a single “either you’re with us or against us” attack on the apparent problem, is what is likely to finally find the keys we need.

It might take until 2055 before we get there. But better late than never.

Posted by Anders at 11:47 PM | Comments (21)

May 08, 2004

A Sparkle In the Eye

JewelEye - Nederlands Instituut voor Innovatieve Oogheelkundige Chirurgie - Melles - Ververs
Jewelry for the eyes, literally! The technique implants decorations beneath the corneal layer, enabling people to show off with a glance. Finally an alternative to eyeball tattooing and contact lenses.

[From Lorbus]

I predict this will be the next split-tongue scare - there is something viscerally worrying to many about manipulations of the eyes (quite likely a built-in reflex reinforced by the low pain thresholds of the eyes, since they are among our most sensitive and fragile organs). Pictures of eyes draw attention thanks to our human-focused attention systems (just look at all the surrealist art), and eye jewelry is even more visually catching than split tongues. And of course it is risky, like any treatment changing the body, making many doctors worried and willing to speak out against it.

Still, I hope it catches on. We need more human diversity, and maybe what implant research needs is an influx of different new ideas. Implant science has always been directed towards bone replacement, biocompatible surfaces and a bit of reconstructive surgery. All medically unobjectionable uses, but always directed at restoring function or shape. Now we are seeing promising advances in tissue engineering too, but still within the curative/palliative paradigm. This perspective might be too narrow to find completely new and creative solutions. Adding decorations to implants poses many problems, and in solving them we might learn many useful things about the interactions between the body and artificial materials that we would not see if the goal were just functionality. As an analogy, just look at how painting and coating were first developed mainly for aesthetic purposes and later became the basis for making materials more resistant to wear and corrosion, and even for adding new properties such as anti-glare optics and self-cleaning. These developments would never have come about if we had only cared about the basic function of our products.

Posted by Anders at 01:00 PM | Comments (16)

May 05, 2004

A Complex Stand Alone

Reason: Anime Dreams: The strange but familiar world of a Japanese TV cartoon.

Reason has published my little text about the Ghost in the Shell: Stand Alone Complex television series. Space constraints prevented me from going totally postmodern in the analysis of The Laughing Man, which may be a good thing. :-)

Posted by Anders at 10:28 AM | Comments (3)