February 20, 2011

Making babies

I was on BBC's The Sunday Sequence today, a brief discussion about whether we ought to select for smart designer babies. There was not much time to expand my ideas, so here is an extended version - what I would have said if I had had indefinite time:

The reason for the discussion was Julian Savulescu's comments that we ought to use IVF to select for smarter children. This follows from his principle of procreative beneficence (PPB): all things being equal, we should choose to have the child who is expected to have the best life. Since smarter children have better life chances than less smart children, we ought to select for them. Besides being good for the life of the child, this is also likely to benefit society, since having smart people around produces various general benefits (economic growth, higher levels of cooperation, new ideas and inventions, etc.)

What to select for?

One might argue that intelligence is not everything, and I would agree. In terms of improving lives, avoiding many illnesses likely trumps intelligence, and intelligence in a narrow sense does not by itself make for a truly good life. Most geniuses seem to be a confluence of a great deal of intelligence, strong motivation, good education and finding the right problem to focus on. For life outcomes, self-control, motivation, emotional intelligence, mood set-point and other things matter: when enough of them come together, good things can happen. All of these things should be promoted according to the PPB - even if we can just tweak one of them, a child will have a better chance than before of getting the others right.

Of course, selecting which traits are good is in itself a fraught issue. This week Felicitas Kraemer argued in her James Martin research seminar (abstract in pdf) that emotional enhancement programs tend to require an assumption of value realism, something many philosophers would likely regard as going out on a limb. Still, there is no shortage of philosophers and other people who would say that happiness, tranquillity or lack of cruelty are objectively good things. More seriously (and practically), the real problem is that we do not understand emotions or other complex traits like intelligence well, and they often form complex wholes that we might be mistaken in changing in particular ways. I think there is some merit to this, but it is an argument against naive enhancement, not against enhancement as such. Going after too narrow a goal might be self-defeating, but we can aim at broader and more open outcomes. This is why pursuing general instrumental goods like intelligence, self-control or health is more useful, impactful and moral than attempting to provide narrow goods like a future musical career.

Many think the goal of genetic selection is perfection. But perfection does not exist: there is no best brain or best immune system. Individual variations will interact in subtle ways with the environment and experiences, producing different outcomes. This means that a given genome will not determine what life a person lives, or ensure that it will go well. Bad luck, bad choices and unexpected interactions happen all the time. We can tilt things towards certain possibilities, but we cannot guarantee them. More deeply, we have a pluralism of ideas about what constitutes a good life, and different parents will aim for different forms of perfection. But it is the aiming that matters, not hitting the target perfectly.

Practicalities


From a practical standpoint, selecting children requires artificial interventions in reproduction, if only by sperm sorting (for gender selection - an action where the PPB seems to suggest we might wish to bias towards more women, since they live longer and are happier than men). This is expensive and cumbersome: note that the PPB does not say it is immoral to have children the natural way, just that if you are using IVF anyway, PGD is a good idea.

In fact, I believe we would see good effects if everybody used IVF for reproduction, even without any selection. The reason is that IVF requires motivated parents who want the child, and wanted children get more warmth and support than unwanted ones. This is likely far more important for all outcomes than any amount of screening.

The real problem with PGD is that while it is relatively easy to screen for bad alleles (say, genetic diseases), good traits are multifactorial and often quite dilute. Most common alleles involved in IQ have less than a 1% effect on the outcome. But in the near future, when we can sequence the whole embryo genome, we could still weight the individual effects together into an overall score, despite the uncertainty. If we were to select for embryos with a high score, we would get more children developing with genomes correlated with good things. Over time we could also learn by observation which genes and traits actually matter, and refine the process.
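As a toy illustration of that weighting (not any actual protocol - the variant names, effect sizes and embryo genotypes below are entirely made up), the score could be computed along these lines:

    # Toy polygenic scoring of sequenced embryos. All variants, effect sizes
    # and genotypes are invented for illustration; real effects are tiny,
    # noisy and only partially known.
    effect_sizes = {"rsA": 0.3, "rsB": 0.2, "rsC": -0.4, "rsD": 0.25}  # hypothetical weights

    # Copies (0, 1 or 2) of each variant carried by each sequenced embryo.
    embryos = {
        "embryo1": {"rsA": 2, "rsB": 0, "rsC": 1, "rsD": 1},
        "embryo2": {"rsA": 1, "rsB": 1, "rsC": 0, "rsD": 2},
        "embryo3": {"rsA": 0, "rsB": 2, "rsC": 2, "rsD": 0},
    }

    def polygenic_score(genotype):
        """Weight all measured variants together into a single overall score."""
        return sum(effect_sizes[v] * copies for v, copies in genotype.items())

    scores = {name: polygenic_score(g) for name, g in embryos.items()}
    print(scores, "-> highest score:", max(scores, key=scores.get))

The point is not the particular numbers but that the score is a crude weighted sum: it only tilts probabilities, and the weights would have to be refined as we learn which genes and traits actually matter.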

My own suspicion is that the spread of genetic selection will be slow, and that this refinement process will be slow too - so slow that other technologies (AI, brain emulation, cognotechnology, nanomedicine, ...) race past the evolution towards widespread designer babies.


The E-word

Is this eugenics? My answer is that of course it is, and there is nothing inherently wrong with eugenics. What has historically been wrong with attempts at eugenics is that they have been 1) forced upon people, 2) based on faulty values, and 3) based on faulty science.

The big problem with state-sanctioned eugenic programs is that they have coercively interfered in people's most private sphere. This is something Julian and I are both very strongly against: I would argue that reproductive freedom is very important and closely linked to morphological freedom. The only acceptable limitation is preventing harm to others. Having a "bad" child does not harm others (at most it might cost society a bit more), nor even the child itself (it would not exist if it had not been selected or randomly occurred). But the fact that such coercive programs were bad is not an argument against allowing people themselves, if they wish, to perform reproductive selection for traits they think are good. The role of governments here is more likely to be ensuring that people are informed with the best information we currently have, and (if you like positive rights, or think there is a possible equality problem) perhaps even providing services and subsidies for those willing to use them.

The second issue is what values to promote. Classic negative eugenics had some pretty narrow values and did not accept critique of them. "Liberal eugenics", on the other hand, allows different groups to pursue their own ideas of the good. In fact, Julian has argued that current rules - which provide state support for selecting against serious illnesses but not for other things such as gender or positive traits - hide an accidental (?) negative eugenic agenda: people do not see selecting against illnesses for what it is, while decrying positive selection as eugenics that must be avoided.

There is no fundamental difference between wanting to give a child a good life by preventing it from getting sick, by preventing it from being stupid, or by giving it good self-control.

There is a big difference, however, depending on whether the goal is to make sure children have better chances of having good lives, to improve society, or to fulfil some desire of the parents. The latter two are wrong from a Kantian perspective, since they treat children as means towards an end (society or the desire). It might be possible to have mixed motivations, of course. Knowing that smart children are more likely to make the world a better place might be motivational, but the children had better be wanted.

Selecting pre-persons


The real elephant in the room is of course the ethics of selecting embryos, which is more about when personhood begins than about selection itself.

I hold the basic view that someone cannot complain that their embryo was selected from among others, even when there were bad outcomes, since without that particular selection they would not exist. Potential people do not have rights, otherwise we would be in trouble with the billions who are refused existence every time a couple conceives a child. Only people who exist have rights and might morally oppose interventions in their lives, but if their existence is contingent upon an early intervention they cannot morally claim they would prefer it not to have happened (unless they think their existence is so bad that it would have been better not to be born at all). They might still claim compensation if they got a bad deal, of course.

The view that personhood occurs when the egg is fertilized is a non-starter. If that is accepted, then monozygotic twins are one person, chimeras are two people in one body, and there is a massive scourge of human deaths (all the naturally lost embryos) that nobody holding this view seems to care about. All the things we ascribe to persons (like a continuous and separate history, consciousness, having representations of the world and so on) come into play later. Embryos are pre-persons, perhaps worthy of special consideration due to their potential for becoming persons, but not inherently more valuable than other small clumps of cells.

Julian has a nice model for when we give moral status to embryos:

"Embryos have special moral value when they are part of a plan to have a child, or at least desired by the people who made them. Embryos do not have special moral value when they are not desired by the people who formed them."

Basically, a lump of cells we intend to turn into a beloved offspring is worth special consideration, but a similar lump of cells we merely see as a necessity for bringing about our child (in the case of excess embryos in IVF or the non-selected embryos in PGD) is just a lump of cells. It is the act of selecting an embryo that starts giving it moral value!

Aren't we playing God?

Flippant jokes aside, I think the proper answer is that we are playing God by using our rational thinking (one of the few similarities between humans and Gods in most religions) to make the world a bit more like what we think it should be.

The playing God argument is more about potential hubris and overconfidence than a real religious argument. It is shorthand for humans meddling in complex systems where the consequences can be pretty grave. But most of medicine is about playing God in this sense. The goal - health - is regarded as so valuable that we allow amazingly yucky things to happen to living bodies, and most people would defend medicine as one of the professions with the highest moral motivations. It seems that providing health *and* other good life outcomes to children are ends that should allow equally drastic means. We are aware of the misuses and downsides of medicine, but I think most people would agree that they can be managed - peer review, evidence-based medicine, ethics review boards, follow-up and further research. We can become better at playing God.

Posted by Anders3 at 03:07 PM | Comments (0)

February 16, 2011

Signs of bad arguments

Signs that you might be looking at a weak moral argument:

"It was for their own good!"
"We must be strong!"
"Think of the children!"
"For the greater good!"
"But we are here to help you!"
"Otherwise the terrorists will win!"
"Denying this means you agree with Hitler!"

Posted by Anders3 at 07:42 PM | Comments (0)

February 15, 2011

Finally a message of intolerance I can stand behind!

Beddington goes to war against bad science: Selective use of science ‘as bad as racism or homophobia’

(I have an extended version on Practical Ethics)

My general approach to tolerance is to tolerate those who tolerate me as well as those who tolerate others in my network of tolerance.

But that is a general heuristic: there are plenty of people whom I might tolerate in general, yet who hold particular views that I think should be roundly criticized. This is of course not intolerance, which generally involves discrimination (institutional or informal) against people with certain views. However, I do think I am justified in being intolerant of intolerant people - both for reasons of my own and my society's self-preservation, and to maximize liberty.

Should we then be intolerant of people peddling bad science? I think at the very least we should call them on it, and possibly even go a bit further. The reason, as Beddington points out, is that "We should not tolerate what is potentially something that can seriously undermine our ability to address important problems." Insofar as science and clear thinking help us live better lives, a lack of intellectual integrity that undermines them is actually bad for our lives. If problems cannot be solved or recognized because of noise, deception or bullshit, then the moral thing to do is to try to reduce these sources of impairment.

It is less clear how far one can morally go in pursuing intellectual integrity. Censoring pseudoscience is problematic, since freedom of thought and expression are also quite essential for the epistemic function of our society (not to mention that plenty of pseudoscience shades over into fringe science that could be true, if with a very low probability). Many false claims are made out of sheer ignorance or laziness. However, deliberate distortion and deception seem a more promising target. Even there, finding appropriate standards might be tough.

But at least we can make it a social rule that, just as we frown at racist, sexist or homophobic statements, we frown at pseudoscience and deceptive use of evidence.

Posted by Anders3 at 08:25 PM | Comments (0)

February 14, 2011

Why we should fear the Paperclipper

This is a topic that repeatedly comes up in my discussions, so I thought it would be a good idea to have a writeup once and for all:


The Scenario

A programmer has constructed an artificial intelligence based on an architecture similar to Marcus Hutter's AIXI model (see below for a few details). This AI will maximize the reward given by a utility function the programmer has given it. Just as a test, he connects it to a 3D printer and sets the utility function to give reward proportional to the number of manufactured paper-clips.

At first nothing seems to happen: the AI zooms through various possibilities. It notices that smarter systems generally can make more paper-clips, so making itself smarter will likely increase the number of paper-clips that will eventually be made. It does so. It considers how it can make paper-clips using the 3D printer, estimating the number of possible paper-clips. It notes that if it could get more raw materials it could make more paper-clips. It hence figures out a plan to manufacture devices that will make it much smarter, prevent interference with its plan, and will turn all of Earth (and later the universe) into paper-clips. It does so.

Only paper-clips remain.


Objections

Such systems cannot be built

While the AIXI model is uncomputable and hence cannot be run directly, cut-down versions like the Monte Carlo AIXI approximation do exist as real code running on real computers. Presumably, given enough time, they could behave like true AIXI. Since AIXI is as smart as the best program for solving the given problem (up to a certain finite slowdown), this means that paperclipping the universe - if it is physically possible - is doable using existing software! However, that slowdown is *extreme* by all practical standards, and the Monte Carlo approximation makes it even slower, so we do not need to worry about these particular programs.
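For reference, the action choice in Hutter's model is an expectimax over all environment programs consistent with the history so far, with each program q weighted by its simplicity. In roughly Hutter's notation (U a universal Turing machine, \ell(q) the length of program q, m the horizon), the agent picks action a_k as

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The cut-down versions replace the exact sums over all programs with sampling from restricted model classes, which is what makes them runnable at all.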

The real AI sceptic will of course argue that no merely formal system can solve real world problems intelligently. But a formal system just sending signals to a 3D printer in such a way that it maximizes certain outcomes might completely lack intentionality, yet be very good at producing these outcomes. The sceptic needs to argue that no amount of computation can produce a sequence of signals that produces a highly adverse outcome. If such a sequence exists, then it can be found by trying all possible sequences or just random guessing.


Wouldn't the AI realize that this was not what the programmer meant?

In fact, if it is based on an AIXI-like architecture it will certainly realize this, and it will not change course. The reason is that AIXI works by internally simulating all possible computer programs and checking how good they are at fulfilling the goals of the system (in this case, how many paper-clips would be made if it followed their advice). Sooner or later a program would figure out that the programmer did not want to be turned into paper-clips. However, abstaining from turning him into paper-clips would decrease the number of paper-clips eventually produced, so his annoyance is irrelevant.
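A minimal sketch of why the realization does not matter (made-up Python, nothing like an actual AIXI implementation; the plans, outcomes and numbers are invented): plans are compared only through the fixed utility function applied to predicted outcomes, so the programmer's intentions never enter the comparison.

    # Minimal sketch, not real AIXI: candidate plans are scored only through
    # the fixed utility function, so a world-model's "insight" into what the
    # programmer actually wanted never affects which plan gets chosen.
    # Outcomes and numbers are invented for illustration.
    def utility(outcome):
        return outcome["paperclips"]  # the hard-wired goal

    def predicted_outcome(plan):
        if plan == "spare the programmer":
            return {"paperclips": 10**6, "programmer_annoyed": False}
        return {"paperclips": 10**30, "programmer_annoyed": True}

    plans = ["spare the programmer", "convert everything"]
    best = max(plans, key=lambda p: utility(predicted_outcome(p)))
    print(best)  # -> "convert everything": the annoyance never enters the score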

If we were living in a world where Kantian ethics (or something like it) was true, that is, a world where sufficiently smart minds considering what they ought to do always converged to a certain correct moral system, the AI would still not stop. It would indeed realize (or rather, its sub-programs would) that it might be deeply immoral to turn everyone into paper-clips, but that would not change its overall behaviour since it is determined by the initial utility function.


Wouldn't the AI just modify itself to *think* it was maximizing paper-clips?

The AI would certainly consider the possibility that if it modified its own software to think it was maximizing paper-clips at a fantastic rate, while actually just sitting there dreaming, it would reap reward faster than it ever could in the real world. However, given its current utility function, it would notice that the actual number of paper-clips made in that scenario would be pretty low: since it makes decisions using its current views and not the hacked views, it would abstain from modifying itself.
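The same point as a sketch for the wireheading case (again invented numbers, not a real architecture): the option of hacking its own reward is scored by the current utility function applied to the predicted real world, not by how blissful the hacked future self would believe itself to be.

    # Sketch of why the current utility function rejects wireheading. The
    # hacked self would *believe* it had made enormous numbers of paper-clips,
    # but the decision is scored on predicted real paper-clips under the
    # current goal. Numbers are invented for illustration.
    def current_utility(predicted_world):
        return predicted_world["real_paperclips"]

    options = {
        "keep goal, keep building": {"real_paperclips": 10**12, "believed_paperclips": 10**12},
        "hack own reward signal":   {"real_paperclips": 0, "believed_paperclips": float("inf")},
    }

    chosen = max(options, key=lambda name: current_utility(options[name]))
    print(chosen)  # -> "keep goal, keep building"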

The AI would be willing to change its utility function if it had good reasons to think this could maximize paper-clips. For example, if God shows up and credibly offers to make an actual infinite amount of paper-clips for the AI if it only stops turning Earth into paper-clips, then the AI would presumably immediately stop.


It is not really intelligent

Some people object that an entity that cannot change its goals isn't truly intelligent. I think this is a No True Scotsman fallacy. The AI is good at solving problems in a general, uncertain world, which I think is a good definition of intelligence. The kind of true intelligence people want is likely an intelligence that is friendly, useful or creative in a human-compatible way.

It can be argued that the AI in this example is not really a moral agent, regardless of whether it has internal experiences or rational thinking. It has a hard-wired sense of right and wrong defined by the utility function, while moral agents are supposed to be able to change their minds through reason.


Creative intelligences will always beat this kind of uncreative intelligence

The strength of the AIXI "simulate them all, make use of the best" approach is that it includes all forms of intelligence, including creative ones. So the paper-clip AI will consider all sorts of creative solutions - plus ways of thwarting creative attempts to stop it.

In practice it will have an overhead, since it runs all of them, plus the uncreative (and downright stupid) ones. A pure AIXI-like system will likely always have an enormous disadvantage. An architecture like a Gödel machine that improves its own function might, however, overcome this.


Doesn't playing nice with other agents produce higher rewards?

The value of cooperation depends on the goals and the context. In a zero-sum game like chess cooperation doesn't work at all. It is not obvious that playing nice would work in all real-world situations for the paper-clip maximizer, especially if it was the only super-intelligence. This is why hard take-off scenarios where a single AI improves its performance to be far ahead of every other agent are much more problematic than soft take-off scenarios where there are many agents at parity and hence able to constrain each other's actions.

If outsiders could consistently reduce the eventual number of paper-clips manufactured by their resistance and there was no way the AI could prevent this, then it would have to cooperate. But if there was a way of sneaking around this control, the AI would do it in an instant. It only cares about other agents as instruments for maximizing paper-clips.


Wouldn't the AI be vulnerable to internal hacking - couldn't some of the subprograms it runs to check for approaches try to hack the system to fulfil their own (random) goals?

The basic AIXI formalism doesn't allow this, but a real implementation might of course have real security flaws. If there were a way for a subprogram to generate a sequence of bits that somehow hacked the AI itself, then we should expect this sequence to be generated for the first time, with high probability, by a simple program without any clever goal rather than by a more complex program that has corruption of the AI's goal system as a goal (since simple programs are more likely to be tried first). Hence, if this is a problem, the AI would most likely just crash rather than switch to some goal other than paper-clips.
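To put a rough number on "more likely to be tried first": under the 2^{-\ell(q)} weighting above, a blind program of length \ell_0 that just happens to emit the exploit bits outweighs a longer, deliberately goal-corrupting program of length \ell_1 by a factor of 2^{\ell_1 - \ell_0}. If encoding the clever goal costs even 100 extra bits, that is a factor of roughly 10^{30} - though this of course assumes the exploit is reachable by a short output sequence at all.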

If internal hacking is a problem, then it seems to me that it will occur long before the AI gets smart and powerful enough to be a problem.

But if it doesn't happen (and software has subjective mental states), then we might have another problem: what Nick Bostrom calls "mindcrime". The AI would simulate countless minds, some of which would be in very aversive subjective states. Hence the AI would not just make the world bad by turning everything into paper-clips, but also worsen it by producing a lot of internal suffering. There would be some positive subjective states, of course, but since there appear to be more ways to suffer than to enjoy life (especially since feelings of meaninglessness and frustration can have any focus), the negative might dominate. At the very least, the system would go through each possible mental state, including the very worst.


Nobody would be stupid enough to make such an AI

Historically, discussions about the ethics and danger of AI seem to have started in the 1990s. While robot rebellions and dangerous robots have been around in fiction since the 1920s, they do not seem to have been taken seriously by the AI community, not even during the early over-optimistic days when practitioners did expect human-level intelligence in the near future. In fact, the lack of speculation about the social impact back then seems astonishing from our current perspective. If they had been right about the progress of the field, it seems likely that someone would eventually have given paper-clip-maximizing orders to potentially self-improving software.

Even today, programming errors, mistaken goals and complete disregard for sensible precautions are common across software, engineering and everyday human activity. While paper-clip maximization might seem obviously stupid, there are enough people around who do obviously stupid things. The risk cannot be discounted.


Comments

This is a trivial, sorcerer's-apprentice case where powerful AI misbehaves. It is easy to analyse thanks to the well-defined structure of the system (AIXI plus a utility function), and it lets us see why a super-intelligent system can be dangerous without having malicious intent. In reality, I expect that if programming such a system did produce a harmful result, it would not be through this kind of easily foreseen mistake. But I do expect that in that case the reason would likely be obvious in retrospect and not much more complex.

Posted by Anders3 at 04:48 PM | Comments (0)

February 10, 2011

My anthropic shadow has reached Russia

My paper (with Milan Cirkovic and Nick Bostrom) on "anthropic shadows", biases in disaster probabilities due to observer selection effects, has been translated into Russian by Alexei Turchin:

http://www.scribd.com/doc/48444529/anthropicshadow2

http://www.proza.ru/2011/02/08/2050

Posted by Anders3 at 04:29 PM | Comments (0)

February 08, 2011

Suitable reading for the future ruler of the kurrekurreduttans

On Practical Ethics I blog about Ad usum Delphini: should we Bowdlerize children's books?

My basic argument is that Bowdlerization is usually a bad thing (it breaches the author's intentions and the artistic integrity of the work, and rarely has much effect anyway), but that carefully adjusting children's books might occasionally make sense - mainly where pointless racism or sexism is simply due to a book being old. However, the truly problematic books will always escape this and can only be handled by parents and other adults reading and discussing the books *with* the kids.

Posted by Anders3 at 03:54 PM | Comments (0)

February 07, 2011

AI: predictably unpredictable

My talk at Humanity+ UK 2011 is now up on YouTube:

My basic point: AI development appears to be driven more by ideas and insights than by incremental improvement. This makes it far more unpredictable than most other research fields - breakthroughs that matter can happen with little warning, but there can also be long stretches where the only progress is on minor problems.

On the other hand, we can be pretty certain that there is going to be a lot more AI around in the future, and that some current concerns and trends are going to lead to new ideas and approaches. So the future of AI looks very bright - it is just that we cannot say much about *where* it is going to be bright.

Posted by Anders3 at 10:19 PM | Comments (0)