April 18, 2009

Moral intuitions are the pirates' best friends

Being a Swedish blogger, I of course have to blog about the Pirate Bay trial, or lose my blogging licence: Practical Ethics: Intuitive pirates: why do we accept file sharing so much?

My basic claim is that the non-intuitive nature of intellectual property makes it extremely hard to enforce morally and socially. Attempts to control it legally or technologically will tend to overshoot the target. I think the best way to get around the problem is either to make access to content cheaper and easier (the Spotify approach) or to reconsider the structure of content production (the long causal and legal chains from creator to consumer). I doubt the latter will occur voluntarily from the content side, so they had better start streaming music, books, films, games, statistics etc. quickly.

Or we could try to re-engineer our territoriality system in the brain to handle intangible rights. But I wouldn't bet on that for the next few years.

Posted by Anders3 at 07:06 PM | Comments (0)

April 16, 2009

Monumental egos

There are some people who are much more confident in their importance than others, a trait we can call ego. Of course, how importance is measured differs: academics may consider their research important, politicians their power or historical legacy, athletes their speed, strength or wins. That the athlete and the academic may not recognize each other's greatness does not matter for the size of their respective egos: everybody measures importance by the traits they think matter most.

This enables a rough measure of relative ego:

Compared to all living people, what fraction of them are inferior to you with respect to the traits you regard as truly important?

Someone thinking themselves to be better than *everybody* would get 1.0 on this scale. Somebody thinking themselves inferior to *everybody* would get zero. The Lake Wobegon effect means that the average person will rate themselves above 0.5. Given the size of the effect, plus the fact that we usually care more about the things we are good at, I would expect the average person to score about 0.75 or more.

During the dinner discussion leading up to this definition the foreword to one of the Mathematica books was mentioned, where Stephen Wolfram (in third person) wrote "Stephen Wolfram is the creator of Mathematica and is widely regarded as the most important innovator in scientific and technical computing today." In honour of this self-assessment I suggest we call the unit of ego the Wolfram.

I don't know if he would consider being an important innovator the most important aspect of himself: maybe he would rather be recognized for his scientific work. But since he does seem to compare himself favourably to Newton in A New Kind of Science, and Newton tends to be on any "greatest scientist ever" list, we can probably assume that he thinks he has a chance of being among the top 100 scientists at the very least. So we should expect his ego to be between 0.999999985 (top 100 out of roughly 6.7 billion living people: 1 - 100/6.7e9) and 1.0 Wolframs.

A clear Wolfram 1.0 case is Gene Ray, the discoverer of the Time Cube. He seems to regard himself as the wisest human *ever*, which suggests an extended version of the ego measure (the Ray?) where one compares oneself with all humans who have ever lived (about 90-110 billion or so). We also briefly considered comparisons to all humans who will ever live, but that measure seems unworkable except for people like Gene Ray who would doubtless claim 1.0 on that scale too.

An interesting case might be athletes, who actually have objective measurements of their abilities relative to everybody else: the world champions *know* they are the best, and hence are entirely justified if they reach 1.0 Wolframs. In practice I would expect them to take time averages into account and tend to be slightly below 1.0.

As for myself, when I consider my achievements, I seem to be about 1.6 standard deviations above average (whether this is actually true does not matter; we are looking at subjective ego here). This would put me in the top 5% of all people. So I'm probably about 0.95 Wolframs.

In fact, so many of us humans crowd close to 1.0 that it might be more sensible to use a logarithmic scale, the logWolfram: -log10(1-W). 1.0 Wolframs is infinity logWolframs, 0.999999985 Wolframs is ~7.8 logWolframs, my 0.95 Wolframs is ~1.3 logWolframs, the modest 0.5 Wolframs is ~0.3 logWolframs and the "I'm the worst in the world" 0 Wolfram person is 0 logWolframs. Roughly it counts the number of nines in the decimal expansion when you approach 1.
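
For the numerically inclined, the conversion is easy to check. Here is a minimal Python sketch; the function name logwolfram is my own invention, and the numbers just reproduce the estimates above:

    import math

    def logwolfram(w):
        # Convert an ego score W in [0, 1] to the logWolfram scale -log10(1 - W).
        if w >= 1.0:
            return math.inf  # better than *everybody*: infinitely many nines
        return -math.log10(1.0 - w)

    print(logwolfram(0.999999985))  # ~7.8: Wolfram's estimated lower bound
    print(logwolfram(0.95))         # ~1.3: my own score
    print(logwolfram(0.5))          # ~0.3: perfect modesty
    print(logwolfram(0.0))          # 0.0: "I'm the worst in the world"

    # Sanity check of my self-estimate: ~1.6 standard deviations above the
    # mean of a normal distribution is roughly the 95th percentile.
    print(0.5 * (1 + math.erf(1.645 / math.sqrt(2))))  # ~0.95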

Note that subjective importance is of course separate from real importance. The world has many 1.0 Wolfram people, but very few of them matter (Gene Ray is definitely in the top quartile in regard to fame, competing with a few self-proclaimed messiahs with personal cults and a bunch of happy world champions). Compared to most others at the same ego rating Stephen Wolfram is clearly enormously successful: he is a well-known innovator, scientist, businessman, author etc. So one could define a dimensionless ratio between the actual level of importance in some field (measured as the fraction of mankind being less able) and self-estimated importance. Many depressed and pessimistic people may have ratios far above 1, and thanks to the Dunning-Kruger effect we should expect incompetent people to have rather low ratios.
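
As a toy illustration of the ratio (all numbers entirely made up):

    def importance_ratio(actual_w, self_estimated_w):
        # Actual importance divided by self-estimated importance, both
        # measured as the fraction of mankind being less able.
        return actual_w / self_estimated_w

    # A capable but pessimistic person underestimates herself: ratio above 1.
    print(importance_ratio(actual_w=0.9, self_estimated_w=0.4))  # 2.25

    # A Dunning-Kruger case overestimates himself: ratio well below 1.
    print(importance_ratio(actual_w=0.2, self_estimated_w=0.9))  # ~0.22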

So while it might not matter much how many Wolframs of ego we have (humility is overrated in my opinion), we really should aim for mediocre (but healthy!) ratio scores close to one. The more important you think you are, the harder you should strive for this.

Posted by Anders3 at 10:14 PM | Comments (0)

April 11, 2009

Memory modification

The Messy Future of Memory-Editing Drugs - I was interviewed by Brandon Keim about memory editing. His angle, which I agree with, is that it is unlikely to work as neatly as in sf (which resulted in this short list of memory-sf stories as an aside). We will have side effects, limited efficacy, effects on other memories and, most importantly, emergent consequences that we cannot currently predict. This doesn't mean it will be bad or dangerous, but it is likely to be much messier than we like to think. For example, the paper Matt and I wrote on memory editing ethics explicitly assumes that the technology works as advertised - because to do otherwise would make the philosophy hard to do. In stories, unless the flaws of the technology add to the setting or drive the plot, it is usually the flawless working of memory modification that is the key plot point.

Maybe science fiction is biasing us to think that technology either works exactly like it should or is totally flawed. Think of the Star Trek transporter, which seems to be perfectly safe except when it kills, merges or dimensionally translocates people. Real technology is different: it acts up every day, often in minor ways. Cars have become safer in many ways, but airbags can hurt you anyway. I'm using a fantastically advanced word processor that cannot (easily) paste text without formatting it like the originating webpage, and whose grammar checker suggests corrections for errors that are not there. Keycards work - except when they stop working for obscure reasons. And so on.

In reality we muddle through. We can use imperfect technology, find ways around the problems (the main door doesn't let me in? I use the side door, which does), accept some limitations and often solve problems by adding new layers with their own problems (how much of our processing power is used by antivirus software?). Usually new technologies introduce completely logical but hard-to-predict problems (spam follows logically from the email model we chose; airplanes are by their nature potential projectiles) that we then spend energy solving. There is no reason to think we can design ourselves out of this except in very special cases - design takes much time, intelligence and energy, and is limited by the combinatorial explosion of possible interactions in the real world.

In the end, I think Joel Garreau was right in Radical Evolution when he predicted that instead of a clean singularity or apocalypse we are just going to muddle through the transition to a technologically transformed species. But that should not be a comforting thought: muddling through still requires a lot of hard work.

In the case of memory editing there is a lot of research that needs to be done, not just in neuroscience but in applied psychology. The best ways of using it are probably something like hybrids of cognitive behavior therapy and judicious applications of memory-altering drugs. But how to do that, how to measure results, how to decide what alterations produce the best outcomes - that is going to be very tricky. And we are going to learn a lot simply by failing in various ways.

But who wants an sf story where the protagonist gets a non-working memory treatment, followed by four others that kind of work, before finally finding a solution using a completely different method? Almost Total Recall?

Posted by Anders3 at 02:10 PM | Comments (0)

April 09, 2009

Cyborg morality as usual

Mind Hacks: The unclear boundary between human and robot mentions a letter to Nature that is concerned with blurring the boundary between man and machine. Blanke and Aspell write:

It may sound like science fiction, but if human brain regions involved in bodily self-consciousness were to be monitored and manipulated online via a machine, then not only will the boundary between user and robot become unclear, but human identity may change, as such bodily signals are crucial for the self and the 'I' of conscious experience.

Maybe I'm just a bad philosopher, but I do not see much of a problem with the blurring per se. Is there a problem if my sense of where my agency is located is affected? If I think "I" exist within a telepresence robot rather than seated in the control chair, I might behave in somewhat irrational ways to protect the "life" of my (actually disposable) robot body since it feels like "my" body, but that is unlikely to be a major ethical problem. Similarly, if I regard myself as distributed over systems besides my biological body, it might require extending the concepts of privacy and integrity to encompass my extended self - but this is not too different from what we already do with our houses and files.

Jeroen van den Hoven gave a great lecture here a while ago about the ethics of technology from a design perspective. He discussed "wideware engineering": Clark & Chalmers' extended mind view says that if a part of the world functions as a process which we would call cognitive if it occurred in a head, then it is (at least for the moment) a cognitive process. This means that when we design things, we actually design epistemic and cognitive environments.

This in turn leads to various moral responsibilities for the designer - including a responsibility for allowing the user of the system to be a good moral agent. Systems should not force users into moral dilemmas, moral overload or regret. If the system forces them to trust what the sensors are saying (because not trusting them would be a moral risk that cannot be justified at the moment), then the user will have reduced responsibility - yet if a disaster occurs, the user will feel regret and blame over not checking the sensors, although doing so was not feasible. Conversely, a system that allows users to check the quality of the wideware and "make it theirs" supports their moral agency.

These considerations seem to meld into the concerns in the letter above. Uncritically using BCI-robotics might indeed impair our ability to act as responsible, autonomous agents because too much is taken for granted - aspects of our choices are run by systems we do not understand, control or have any reason to trust. But the problem is not worse for neurorobotics than it already is for airline pilots or people running complex industrial systems. They already have extended bodies and agencies where the available choice architecture impinges on their moral responsibility. The real issue is whether they will be given the chance to make their extended bodies and minds "theirs".

Even the issue of "self" might be similar: during flow states people forget about self-consciousness, and no doubt pilots and process engineers experience that from time to time. Drivers often project their sense of self onto the car rather than onto their body inside it, yet we do not view this as a terrifying cyborgisation.

Posted by Anders3 at 05:00 PM | Comments (0)

April 08, 2009

In Russia, papers translate themselves!

Alexey Turchin has translated our paper on risks into Russian. At least I *think* he has done it, since my Russian is a bit limited.

Language barriers are so annoying! I'm convinced there is a lot of useful literature in other languages that I am missing. So either we figure out how to translate automatically, or we recognize that English is a lingua franca (oops!) and all learn it. If ESR says it makes sense, then it probably does. (I don't think Esperanto has that much of a chance).

Posted by Anders3 at 04:46 PM | Comments (0)

April 06, 2009

Technofixes are fixes too

Practical Ethics: Be mindful of results, not the method - I blog about why technological/biological fixes to complex social problems sometimes make sense.

In general, technological fixes have a pretty bad reputation. But as Sarewitz & Nelson point out, a technological fix can work well if it 1) embodies the cause-effect relationship between the problem and the solution, 2) has effects that can be assessed well, and 3) has a technical core that R&D can improve, rather than resting on theoretical underpinnings. By this standard, I think many "social fixes" are in trouble on at least points 2 and 3.

Fixes are also about perspective. As Stewart Brand put it in the preface of Unbounding the Future:

“A technofix was deemed always bad because it was a shortcut—an overly focused directing of high tech at a problem with no concern for new and possibly worse problems that the solution might create.

But some technofixes, we began to notice, had the property of changing human perspective in a healthy way. Personal computers empowered individuals and took away centralized control of communication technology. Space satellites—at first rejected by environmentalists—proved to be invaluable environmental surveillance tools, and their images of Earth from space became an engine of the ecology movement.”

These kinds of positive or game-changing side effects exist for many technological fixes. The introduction of contraceptives empowered women, changed social mores and brought demography under cultural control. Safer cars led to higher demands for road safety, not lower. Technological fixes can be part of, or even drivers of, social change.

Posted by Anders3 at 09:19 PM | Comments (0)

April 03, 2009

The pink cassock

Practical Ethics: Ecclesiastical gaydar: should churches be allowed to discriminate priests? - I blog about the latest instance of Catholic backwardness.

As a libertarian I must tolerate that various institutions discriminate; it is often within their rights. That doesn't mean they are right or that we should keep silent about it. But if we allow people to believe pieces of bread to be (for strange meanings of "be") divine, or that performing certain rituals will purify or sully you in ways that cannot be detected yet are supremely important, then it is no stranger to think that arbitrary properties like gender, sexual orientation or being of a certain height could be highly relevant for one's ability to be a priest.

However, it is very likely that the "testing" of suitability is going to be inaccurate and prejudiced, and this is something the Vatican is unlikely to want (of course, if they had real faith they could use any method - dice, tea-leaves or polygraphs included - and expect Providence to ensure the right outcome).

A curious aspect is that, as far as I know, the church thinks homosexuality is a choice. So while it might be reasonable to tell would-be priests that it is doctrinally incompatible, it would not make much sense to test for it. And how would one prevent priests from changing their choice once they are ordained?

The anti-gay stance is largely a response to the abuse cases that have shaken the church (they tend to mix up paedophilia with homosexuality). But even here they seem to mess up their signal detection theory. Unless the majority of priests are gay, or gay priests are extremely likely to molest congregation members, most molestation is going to be done by non-gay priests. It might not cause equal uproar, but it has a long historical tradition (in Swedish there is even an old word for the bastard child of a Catholic priest, "prästkläpp"). As a caveat, this report (pdf) does seem to imply that more homosexual abuse takes place; however, a reporting bias could well be causing it. In any case, what would make much more sense than any attempt to determine the sexual orientation of candidates would be to attempt to determine their "sexual predatoriness". But given past experience that seems unlikely: surface symbolism is more important.
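
To make the base-rate point concrete, here is a toy Python calculation; the base rate and relative risk figures are pure assumptions for illustration, not data:

    def fraction_of_abuse_by_gay_priests(base_rate, relative_risk):
        # base_rate: assumed fraction of priests who are gay.
        # relative_risk: how much more likely a gay priest is assumed to be
        # to offend than a non-gay one (1.0 = no difference at all).
        gay_weight = base_rate * relative_risk
        non_gay_weight = 1.0 - base_rate
        return gay_weight / (gay_weight + non_gay_weight)

    # With made-up numbers - 10% of priests gay, and twice as likely to
    # offend - about 82% of abuse would still come from non-gay priests.
    print(1.0 - fraction_of_abuse_by_gay_priests(base_rate=0.1, relative_risk=2.0))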

Posted by Anders3 at 11:29 PM | Comments (0)

Credo Tarot

For various reasons I decided to put the images finished so far of my personal tarot deck Credo Tarot on Flickr. This is a project that has languished far too long.

The suits are matter (disks), information (swords), processes (cups) and forces (wands). There is an infinity card for each suit, perhaps corresponding to an ace, although I'm also considering a unity card for each suit. So if the unit of matter is the atom (or a quark), the infinity of matter is a black hole. I *think* my major arcana will not strictly fit the traditional ones. While the Dyson shell is a good counterpart to the Sun and Entropy obviously corresponds to the Death card, I'm not at all sure Yog-Sothoth fits as Judgement. So probably I'll just deviate from tradition. After all, it is my tarot.

What I like about making these little scenes is that it is fun to cram symbolism together. Especially if it is your own symbolism - I doubt many others regard the triangle inequality as relevant for the Hermit card.

Posted by Anders3 at 03:54 PM | Comments (0)