September 24, 2011

Winter intelligence videos

Ah, the Winter Intelligence conference lectures are finally up!

Everything from Friendly AI to collective intelligence to holism.

Posted by Anders3 at 11:13 PM | Comments (0)

September 23, 2011

Linking minds

Mind over matter - yesterday I participated in an event organised by NESTA about the possibilities and impact of brain-computer interfaces. We had a fun discussion about everything from integrity to facial expressions. The link has a video of the event.

While preparing I started thinking about what I was carrying. At least in my case most objects I carry extend or enhance my function:

  • Clothes - "super-skin", better able to keep me warm, safe, and dry than normal skin. Plus social signalling functions and enhanced carrying ability (pockets).
  • Glasses - enhancement of my vision, especially had they been sunglasses.
  • Smartphone - a wide range of outsourced functions, such as sense of place, communication, identity, knowledge etc.
  • Cryonics necklace - ability to communicate intentions while unconscious. Plus social signalling.
  • Camera - enhanced vision and memory.
  • Keys and credit cards - identity and access.

It was amusing to notice that on the same day the Gallant Lab at UC Berkeley published the paper Reconstructing visual experiences from brain activity evoked by natural movies, demonstrating just how amazing neural interfaces can become - although their system is even less practical than the temperamental EEG interface that was showcased at our event.

Posted by Anders3 at 03:55 PM | Comments (0)

September 13, 2011

Plagiarism arms races

Buying authenticity: plagiarism checking and counter-checking. I blog about the ethics of using plagiarism detection software both to detect plagiarism and to tweak texts so they pass it.

Basically, it looks like a parasitic co-evolutionary arms race driven by the increased concerns about plagiarism. It might draw focus away from the core aims (maximizing learning and originality) to surface characteristics (not being detectably plagiarized, having lots of references).

Posted by Anders3 at 07:02 PM | Comments (0)

September 12, 2011

Droidlike technologies matter more than death stars

Lt. Col. Dan Ward, USAF, has written a very amusing and relevant essay, Don’t Come to the Dark Side: Acquisition Lessons from a Galaxy Far, Far Away (pdf), in Defense AT&L, September–October 2011. (via Wired's dangerroom)

His main point is that the Death Star program (and real-world projects like it) is a pathology: too complex to be reliable, too expensive to build several of, too big to be finished on time and hence requiring strong management - yet that kind of strong management also tends to like having big empires, reinforcing the bad cycle.

But the most interesting point in my opinion is his praise for R2D2. He points out that the droid saves the day more often than anybody else.

A Death Star is an Empire weapon that aims to intimidate opponents into submission. Droids are Republic technology. They don’t intimidate anyone. Instead, they earn their keep by being useful and practical. Droids are about finesse, while Death Stars are about brute force. And given the current world situation, finesse is clearly what we need.

A classic example of droid-like technology might of course be the AK47: reliable, robust, not aiming to do anything beyond its basic function - yet people still find new ways of using it anyway.

I would also argue that the droids (at least the independent-minded "hero" ones - the centrally controlled ones are Death Star systems again, with an even more obvious vulnerability) are far more powerful weapons than Death Stars. If we accept the prices mentioned in the Wired blog, you can buy about 10^21 droids for the price of a Death Star. Let's assume you use just 10^20 droids and spend the other 90% on landers, weapons and similar equipment. Then you can invade a planet with 10 billion inhabitants at a force ratio of 1:10 billion. In principle the droids could just crowd the enemy to death. Or you could go for a 1:10 force ratio and invade a billion planets. Even if you lost many of these battles you would now have the resources of hundreds of millions of planets to throw at the remaining holdouts.
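
To make the back-of-the-envelope numbers explicit, here is a minimal Python sketch of the arithmetic (my own illustration; the only inputs are the figures quoted above):

    # Droid arithmetic, using only the figures quoted above.
    droid_budget = 10**21        # droids buyable for the price of one Death Star (Wired's numbers)
    droids_fielded = 10**20      # use 10%, spend the other 90% on landers, weapons etc.
    planet_population = 10**10   # a planet with 10 billion inhabitants

    # One planet, overwhelming force: droids per defender
    print(droids_fielded // planet_population)         # 10,000,000,000 - the 1:10 billion force ratio

    # Or spread out: planets that can be invaded at a mere 10:1 local advantage
    print(droids_fielded // (10 * planet_population))  # 1,000,000,000 planets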

While Lanchester's laws are not perfect by any means, it is not unreasonable to think that vastly outnumbering your opponents is a very good strategy, even if each fighter is pretty weak. The lethality per dollar (or Republic credit) is likely optimal for fairly cheap systems. The Death Star wiped out 1.97 billion Alderaani in a few seconds, giving it a theoretical lethality index of about 5,000,000. This is actually less than a nuclear weapon! Given that in the Star Wars universe it is also possible to disperse troops enormously (which is of course not done in the films, which obey cinema tactics), the Death Star is fairly easily foiled by a widely dispersed enemy.
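
To see why numbers matter so much, here is a toy sketch (mine, with invented effectiveness values) of Lanchester's square law, under which a force's fighting strength grows with the square of its numbers but only linearly with per-unit quality:

    # Toy Lanchester square law: fighting strength ~ per-unit effectiveness * numbers^2.
    # The effectiveness values below are invented purely for illustration.
    def square_law_winner(n_a, eff_a, n_b, eff_b):
        return "A" if eff_a * n_a**2 > eff_b * n_b**2 else "B"

    # 10^11 cheap droids against 10^10 defenders who are each ten times as effective:
    # the tenfold numerical advantage outweighs the tenfold quality advantage.
    print(square_law_winner(1e11, 1.0, 1e10, 10.0))    # -> "A"

Under this model the defenders would need to be a hundred times as effective per unit just to break even against a tenfold numerical disadvantage.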

(When I tried to estimate the best TLI per dollar for real-world weapons, it looked like it was somewhere between the AK47 and the Uzi, by the way. There might be another optimum for tanks.)

If we try to extract a kind of real-world moral from this, I think the conclusion is that technologies that are cheap, robust, dispersible and able to show network effects will always tend to win over solitary supertechnologies. There might be things you need a Death Star for that droids cannot do, but those will tend to be few and specialized. The things droids can do that Death Stars cannot, on the other hand, are manifold. That is why droid-like technologies will change the world.

Posted by Anders3 at 10:06 PM | Comments (0)

Reproducible irreproducibility

Marginal Revolution: How good is published academic research? links to some worrisome findings supporting the pharma industry rule of thumb that at least 50% of all published academic studies cannot be repeated in an industrial setting.

This is not just a pharma problem. My brother told me about his struggle to make a really good garbage collector for the Java engine his company developed. He surveyed the literature and found a number of papers describing significant improvements compared to other methods... except that a paper describing method A showed it was better than methods B and C, yet papers on B and C showed them to be better than A. The likely reason is that the authors had carefully implemented their own methods and then ran them against fairly unoptimized versions of the other methods, or on test data that fitted their own brand better.

Improving reproducibility is very important. Any claim has a finite and non-negligible risk of being wrong. In the case of research, the probability of error P = P1 + P2 + P3 is a sum of the probability of an "honest" mistake P1, the probability of a dishonest or biased interpretation P2, and the probability that the method is flawed P3. Repeating the experiment yourself will just reduce P1. Having somebody else repeat the experiment typically reduces both P1 and P2 (at least if they are disinterested in the result). And if you are lucky, they do not just repeat it but do it in a different way, reducing P3 too.

The amount of information we gain about a hypothesis by trying to reproduce it is surprisingly large. The probability that you and N-1 replicating groups all get things wrong, assuming each has the same error probability P, is P^N - it doesn't take that many experiments to show a discrepancy. Recognizing that it is a discrepancy and not just bad experimental luck is much harder, of course.
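
As a toy illustration (my own, assuming independent replications that each share the same error probability):

    # Chance that N independent attempts are all wrong, each wrong with probability p_error.
    def p_all_wrong(p_error, n):
        return p_error ** n

    for n in (1, 2, 3, 5):
        print(n, p_all_wrong(0.5, n))   # 0.5, 0.25, 0.125, 0.03125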

Since there is systematic under-reporting of negative findings, since the status of repeating a finding is lower than that of making the claim first, and since people often do not pay attention to updates of claims, it is not enough to do the repeats. We need better ways of rewarding replication and of collating the current state of research clearly.

In the end, I suspect it comes down to science funding bodies realizing that they should reward the maximum amount of convergence towards truth (weighted by the importance of the question) rather than maximum coolness.

Posted by Anders3 at 07:39 PM | Comments (0)

September 06, 2011

I'm in the experience machine today

Or rather, my voice might be heard at 21.00 on BBC4's radio program "The Philosopher's Arms", where I am one of the regular barflies.

Last evening we discussed Nozick's experience machine and what that told us about happiness, and whether governments could help make us happy. We have a survey up on Practical Ethics about related questions: The Experience Machine: A Survey

One of the big problems with pursuing happiness (besides the fact that it might not be the goal in the first place, and that we tend to become unhappy if we pursue it too strongly) is that we are so bad at doing what makes us happy.

One of my favorite papers on the topic is Christopher K. Hsee and Reid Hastie's "Decision and experience: why don't we choose what makes us happy?", Trends Cogn Sci. 2006 Jan;10(1):31-7. They review the evidence that we 1) often fail at predicting future experience accurately, and 2) fail to follow our own predictions.

Some rules of thumb I have derived from the paper:

You will not feel as joyful or as bad as you think you will about a future event. That gizmo will not make you enormously happy even if it is really good, and a big personal failure will not feel as bad as you think it will. Dreading something is usually worse than the thing itself, and the dread often lasts longer too.

Don't shop when you are hungry. We project our current state into the future and onto others (and onto the past: we do not remember things as they were but as we are). This makes us buy more food when hungry, even if we are going to eat it at some point when we are not hungry. Before going out drinking we soberly consider whether to take the car or not, but at the point when we decide whether to drive home we are no longer sober.

If you cannot make up your mind about alternatives, just choose one randomly: they are about equal. We tend to overdo comparing minuscule differences between choices that do not matter. This is especially true for quantitative differences, which often overshadow more important differences like what you want the thing for.

Hard numbers are not hard evidence. Numerical evidence looks impressive but can be misleading, uncertain or plain wrong. Especially about what makes us happy: it is not the size of the monitor that matters but whether it fits my desk; it is not the exact reviewer score but whether the reviewer thinks like me.

Leave the party just after it peaks. We remember the pleasure or pain of something based on the peak intensity and how it was at the end. So if you do something unpleasant, make sure it trails off.

Don't trust lay theories of "how things are". We often overgeneralise from the cases where we learned them.

Don't trust standard rules like "don't waste", "seek variety", "don't pay for delays". Just like lay theories they have their domains of applicability - learn them, but do not expect the rules to always work. Today it is better to throw away excess food than to fall for unit bias and become obese. If you really like something you do not need to try variants just because.

Don't pile on options. The confusion, excessive comparisons, contrast between them etc. will make things feel worse.

We are often too impulsive, but not always. Learn how to maintain the right kind of long-term outlook: don't sacrifice long-term happiness for short-term pleasure, but don't expect to become happy merely because you take the long view.

Don't mix up the instrument with the goal, or the medium with the reward. We tend to think the expensive thing is better than the less expensive one. We often maximize the medium rather than the aimed-for experience: we acquire more money than we need to get what we want, because the money (the medium) reminds us of the reward.

These are things we can train. Not always easily, but it is doable. We can become better at being happy.

Posted by Anders3 at 05:49 PM | Comments (0)

September 04, 2011

They are playing my song

The greatest honour an author can receive is that somebody reads their work. The greatest honour an rpg author can receive is somebody playing their scenario, turning it into something new. So you can guess my excitement over RPPR Actual Play having recorded a game session of my adventure Think Before Asking.

Lots of interesting things here for me to learn from. It feels good to see that somebody actually understood my point and turned it into a fun game session.

My only quibble is of course that antimatter doesn't work that way, but maybe that was due to the weird device... my players were never brave enough to ask two questions.

Posted by Anders3 at 01:27 AM | Comments (0)

September 02, 2011

Playing God: because somebody has to

Yesterday I was interviewed on BBC5 about whether scientists should play God. The reason was a cluster of news stories over the past days: UK stem cell stroke trial passes first safety test, Ethics of creating meat in a laboratory, Giant pipe and balloon to pump water into the sky in climate experiment, and Wales Gets Dedicated UAV Airport, Leaves U.S. in the Dust.

When I was first asked whether I could participate, the radio person called the stories "bizarre", and I think that contributes to people's unease: these are things from outside the sphere of everyday experience, apparently important and supported by important people, yet disconnected from the logic of everyday life and possibly threatening it. But the real "playing God" aspect is of course that they seem to change the natural order of things.

I of course think that there is no more a natural order that has to be obeyed than there is an order to a pile of gravel: it has a shape due to history and gravity, but any other shape is equally OK. As humans we might want certain outcomes, and some outcomes hold ethical importance, but they get value not from conforming to a given order but through other aspects - the amount of happiness or suffering, how we go about implementing them, risk and resiliency, and so on.

Still, many people have deep intuitions that there is some kind of natural state one should not transgress against. A case in point is Clive Hamilton's criticism of geoengineering, which seems to boil down to the claim that we should be humble 'in the face of the superior power, complexity and enigmatic character of the earth'. But (as discussed at the link) this appears to reduce either to the commonsensical 'be careful when messing with complex and important systems', which doesn't carry much ethical weight (but some practical weight), or to the very hubristic assumption that nature is guaranteed to be beyond us - a kind of claim that has historically failed again and again. It is hubris only when you fail; success means it was just ordinary progress.

As Haldane put it, every advance in physics starts out as blasphemy (lightning rods! how hubristic to think you can evade God's punishment... except that they usually work fine) and every biological advance as a perversion (drinking cow's milk... yuck!). Then people get used to them and make them part of everyday life, or even religion. Playing God is something we always do, and whenever it works out we redefine it as normal.

Posted by Anders3 at 03:39 PM | Comments (0)

How extinct are the drysalters?

The Guardian has an article on how Collins is dropping 'endangered' words from their smaller dictionaries (via TYWKIWDBI). Among the words mentioned are aerodrome, charabanc, wittol, drysalter, alienism, stauroscope, succedaneum and supererogate. Apparently they are using a big corpus of text to monitor usage, and if a word's usage drops enough it is deemed likely extinct.

However, Google ngrams is an alternative corpus. It turns out that while aerodrome has certainly lost much usage compared to its heyday in the 1940s, it is still hanging on, occurring at about a twelfth of its maximum rate (in 0.00024% of all texts). There are still books with it in the title and text.

The other words have always been less common, but look less extinct than one might expect. Succedaneum had its heyday in the early 19th century, but it is still more common than charabanc (popular in the 1920-1940 period). Alienism is hanging on, although the grand peak around 1985 is likely never coming back. Wittol has declined far but still seems to be in use, as is drysalter. And supererogate may be very uncommon in the corpus, but I certainly encounter it a few times a year (and even use it myself) in discussions in the philosophy department. The only word that seems to have really gone out of use for longer periods is stauroscope; people have found better ways of measuring extinction angles in crystal sections.
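
For the curious, the comparison above amounts to something like the following sketch (mine; the per-year frequencies are illustrative placeholders loosely based on the aerodrome figures mentioned, not real ngram data):

    # Compare a word's recent relative frequency (in % of texts) with its historical peak.
    def fraction_of_peak(freq_by_year, recent_years=3):
        years = sorted(freq_by_year)
        peak = max(freq_by_year.values())
        recent = sum(freq_by_year[y] for y in years[-recent_years:]) / recent_years
        return recent / peak

    aerodrome = {1940: 0.0030, 1945: 0.0029, 2000: 0.00026, 2005: 0.00025, 2008: 0.00024}
    print(fraction_of_peak(aerodrome))   # about 0.083 - roughly a twelfth of the peak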

I don't know if I trust Google ngrams completely; a vast autogenerated corpus might contain mistakes. But looking at the links to books containing the words suggests that there is indeed some usage. I guess it is more a matter of Collins having an outer cut-off than of true word extinction: no point in having the utterly unusual words take up space in a dictionary.

It is interesting to see that transhumanism is now about as common as wittol. Yet I suspect more people understand the former than the latter word. And it has 14 times more Google hits.

In the end, languages are unlike biospheres in that extinction isn't as permanent. Words encoded in text can be revived decades, centuries or millennia hence (consider Egyptian religious terminology), like seeds awaiting their chance. Of course their meanings might mutate with every revival, but that is another matter. "That is not dead which can eternal lie..."

Posted by Anders3 at 12:14 PM | Comments (0)