August 30, 2012

My little conjugate minimal surface

In 2006 I found a minimal surface based on the tanh function:

[a, b, c] = Re[tanh(z), i*(2z - tanh(z)), 2*ln(cosh(z))].

Maybe not too useful - it is self-intersecting, and since the Weierstrass representation produces a minimal surface from nearly any two functions, they are a dime a dozen. But it is still mine (I think).

Today I realized that it had to have a conjugate minimal surface. The Weierstrass representation is the real part of the complex vector function inside the brackets. If you take the imaginary part you get another minimal surface (and by multiplying by exp(it) you can make a seamless morph from one to the other).

The conjugate of the catenoid is the helicoid. My surface had a lot of catenoid ends, so I suspected it would have helicoid ends. Conjugation also doesn't change the Gauss map (the map to the unit sphere by taking the normal), so I expected the planes to remain. But it was still non-intuitive how the result would look.

The result is fun: it is a surface that has one planar and one helicoidal end. Here are a few pictures:

[Image: conjugatemin]

[Image: twoside6b]

When plotting it, branch cuts matter if you want to avoid nasty erroneous triangles. Near Re(z)=0, Im(z)=±pi/2 the surface blows up into the helicoid: this is literally the counterpart to the catenoid ends of the other surface, each catenoid turning into one period of the helicoid. These pictures use the domains Re(z)>0, -pi<Im(z)<pi and Re(z)<0, -pi<Im(z)<pi, avoiding Re(z)=0. Plotting just one domain shows a surface that has a plane end and half a helicoid: the straight lines are places where the surface can be continued by reflection.

[Image: onepart]
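For anyone who wants to reproduce the pictures, here is a minimal sketch (assuming numpy and matplotlib; not the code actually used above) that evaluates the associate family Re[exp(i*theta)*(tanh(z), i*(2z - tanh(z)), 2*ln(cosh(z)))] over the two half-domains. theta = 0 gives the original surface; theta = pi/2 gives the conjugate (up to a reflection, since Re(i*W) = -Im(W)).

```python
# Minimal sketch: plot members of the associate family over the two domains
# Re(z) > 0 and Re(z) < 0, staying away from Re(z) = 0 (helicoid blow-up)
# and from Im(z) = ±pi (branch cut of log(cosh(z))).
import numpy as np
import matplotlib.pyplot as plt

def surface(theta, re_range, n=80, eps=0.05):
    """Return x, y, z arrays for one domain of the associate family."""
    u = np.linspace(re_range[0], re_range[1], n)
    v = np.linspace(-np.pi + eps, np.pi - eps, n)   # stay off the branch cuts
    U, V = np.meshgrid(u, v)
    Z = U + 1j * V
    W = np.exp(1j * theta) * np.stack(
        [np.tanh(Z), 1j * (2 * Z - np.tanh(Z)), 2 * np.log(np.cosh(Z))])
    return W.real   # real part of the rotated vector = member of the family

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
for re_range in [(0.05, 2.0), (-2.0, -0.05)]:             # the two half-domains
    x, y, z = surface(theta=np.pi / 2, re_range=re_range)  # conjugate surface
    ax.plot_surface(x, y, z, alpha=0.7)
plt.show()
```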

Posted by Anders3 at 11:52 PM | Comments (0)

What innovations ought we aim for?

Robert Gordon has written an interesting paper, Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds, about future economic growth. He argues that there is no fundamental reason to assume economic growth will continue indefinitely: in fact, there was little growth before 1750, and it could be that the last 250 years have just been a temporary exceptional period. The growth we have had has been due to three linked industrial revolutions - the steam one, the diesel one and the computing one - producing economic growth as their inventions become available but eventually petering out. Finally, even if he is too pessimistic about the near-future innovation rate, there are growth-reducing "headwinds" (demography, education, inequality, globalization, energy/environment and debt) that are at least likely to slow the US down for a long while.

In many ways this is a complement to Tyler Cowen's thesis in The Great Stagnation: we got a lot of growth out of low-hanging technological fruits that enabled us to build a great standard of living, but now it has become much harder to innovate something that truly matters.

What inventions do we need?

Gordon points out that a lot of innovation is not really transformative or important: it does not really change how things are done in society or what it is like to live in it. The first and second parts of the industrial revolution drastically reduced the amount of unpleasant work and the need for human effort, and increased life expectancies enormously. The computer revolution has improved productivity and efficiency in many ways, but the biggest gains seem to have been in entertainment and communication: more opportunities for consumption rather than replacing human labour with machines.

He proposes a thought experiment:


You are required to make a choice between option A and option B. With option A you are allowed to keep 2002 electronic technology, including your Windows 98 laptop accessing Amazon, and you can keep running water and indoor toilets; but you can’t use anything invented since 2002.

Option B is that you get everything invented in the past decade right up to Facebook, Twitter, and the iPad, but you have to give up running water and indoor toilets. You have to haul the water into your dwelling and carry out the waste. Even at 3am on a rainy night, your only toilet option is a wet and perhaps muddy walk to the outhouse. Which option do you choose?

His argument is rhetorically effective, but it also reveals some interesting things about what we want from technology. We do not care how we get a particular function (a steam-powered iPad is still OK), but we do care about the function itself (having to go to the outhouse is not OK, even if it were a miracle of green engineering). The functions we care deeply about are the ones that directly touch everyday life or enable it: this fits with a Maslowian hierarchy of needs, with food and shelter near the base and convenient entertainment further up.

Gordon points out that "A common feature of this innovative revolution was that many of the improvements could only happen once." Once things get better - indoor temperatures kept at a comfortable level rather than fluctuating, free and fast travel introduced, food hygiene achieved - there is not much more that can be done in these domains. We might want greener air conditioning, faster air travel or nutritionally perfect food, but compared to the improvements already made these are going to be minor advances. I would assume that performance improvements in many domains - in terms of actual utility - show strongly diminishing returns and quickly level off. A modern pocket calculator is faster than my first one from the 1980s, but for most calculations this speed difference is unimportant: I only care about it performing basic arithmetic. We rarely care about Moore's law directly, just about what it enables.


Stagnation as the old/new normal?

I can't judge his analysis of economic trends: productivity measurement remains mysterious to me. But this kind of argument is an interesting challenge to the accelerationist school of thought. Their basic view, in caricature, is that productivity ought to increase faster and faster as we get technologies that not only replace or complement human labour, but also help develop better technology. The most extreme form is of course recursively self-improving AI: a small part of the economy can make itself grow more or less arbitrarily fast (and hence, in principle, the whole economy could take off).

This likely underestimates the importance of particular technologies unleashing new domains of activity: great inventions push the productivity frontier forward in a lumpy manner. The stagnation people think we are getting too few such innovations, either because they are getting harder to find or because we are doing something wrong.

One can also take a lumpy long-run view like Robin Hanson's growth mode theory: yes, industrial revolutions are fairly discrete things and do peter out. It is just that they happen from time to time, and in the long run they tend to build on each other: growth is not just due to inventions, but to the kinds of societies they enable. One open question is of course whether they come more and more often, or whether they can be arbitrarily far apart in time.

What to invent?

It is also interesting to consider what kinds of inventions would produce something important for the human condition, as desirable and transformative as indoor plumbing, personal vehicles or antibiotics. Here is my tentative list:

  • Automated medicine: since the cost of services is proportional to salary costs, it tends to remain high. Automated systems can be made cheaper since once invented they can be replicated. Hence, automation of parts of medicine would be important both in lowering healthcare costs and in making healthcare affordable for poorer people.

  • Anti-ageing technology: besides the huge (mainly positive) economic impact, it would also affect the structure of life and how it is lived. It is likely to be a widely desired technology, leading to big efforts to make it cheaper and more widely available.

  • New manufacturing methods: automated micromanufacturing like 3D printing - once good enough - makes customisation and experimentation easier, places control over the means of manufacturing in the hands of people directly, avoids the need for long-range transport of many goods etc. There are no doubt economies of scale for making many things, but this will open the production frontier for many objects that currently are fairly niche.

  • Molecular precision manufacturing: nanotechnological manufacturing would enable a large range of new materials that would in turn transform other domains or allow new possibilities. While MPM itself is unlikely to be personally important, it would enable a large number of technologies that are likely to be intimate and essential.

  • Education technology: just like medicine, if education could be automated effectively (or we could figure out better or faster ways of doing it), more human capital would become available more cheaply. That would be individually and collectively good.

  • Cognitive enhancement: Since cognition underlies so much of our lives, improvement in mental abilities can be a big deal individually and collectively. Boosting societal cognition likely has a measurable effect on economic growth through network effects, improved cooperation and reduced friction.

  • Emotional enhancement technologies: Obviously, if they work, important for human well-being. There have been some studies of the bad economic effects of divorce, and clearly pair bond enhancing methods would be helpful. But presumably happier societies with more mental health would also have other network effects: depression not only leads to individuals suffering and not being productive, but also to losses in their social network.

  • Brain emulation: this is an obvious way of jumping straight into a posthuman state. It would have enormous effects - backup copies, customizable bodies, easy neurohacking, speed regulation, copyable human capital.

  • Artificial intelligence: even fairly mild AI with merely natural language understanding and human-level intellectual abilities could make labour costs far lower. Just consider how many jobs are merely about following fairly well-defined rules in standardised situations. Higher grade AI, able to do any job a human could do, would obviously have a far bigger impact. Like brain emulation it also allows skill capital to be copied and distributed at nearly no cost.

  • Space colonisation: Space industrialisation likely requires space colonisation, since the primary market for much of what is manufactured up there is going to be up there. However, just fixing the energy situation through space power satellites and creating extra potentially available living space would likely be a big deal. In addition, it would provide a large expansive frontier of colonisation and exploration - a culturally important function. Similar arguments might be advanced for sea colonisation or Arctic colonisation (which, as Charles Stross has pointed out, are far easier).

  • Existential risk reduction: Perhaps the ultimate public good. Technologies that reduce the risk of mankind going extinct are morally extremely important and do affect the collective human condition.

The point of these is that once they exist, rational people would not want to go back to the previous state even for a fairly high compensation.

Note that early technologies do not have to be about fixing early steps on the hierarchy of needs. Yes, we tend to try to solve those first, but the earth-shattering innovations are still random. Writing, one of the biggest leaps, occurred very early but did not solve the perhaps more pressing food, health and shelter problems. We still haven't solved ageing, despite it producing 100% mortality.

I suspect that, looking back from the far future, the above list will seem rather naive. But the important angle is to realize that there are things to invent or find solutions to that would change the human condition. They are worth pursuing if we want more growth, and they are worth investigating if we want to have a say in how the human condition changes.

Posted by Anders3 at 05:14 PM | Comments (0)

August 29, 2012

The Bayesian immortals

XKCD notes:

A hundred billion or so humans have ever lived, but only seven billion are alive now (which gives the human condition a 93% mortality rate)

While this is true given the sample, it reeks of frequentism. What should a Bayesian say?

Assume the true mortality of humans is p. We know that out of 100 billion trials, 93 billion ended in death ("success") and 7 billion have not (yet) ended in death ("no success"). So assuming a uniform prior for p and that the above data is the outcome of a binomial distribution gives us a Beta(93 billion, 7 billion) posterior for p (give or take the +1 from the prior). The mean is 0.93, agreeing with XKCD; this is also more or less the median and the mode. But we also get the standard deviation, telling us how uncertain we should be: 8*10^-7! So obviously we should be really confident that 7% of people are immortal, right?
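For concreteness, a small numerical sketch of that posterior (assuming scipy is available; the uniform prior just adds one to each count, which makes no difference at these sample sizes):

```python
# Naive Beta posterior for the "mortality rate" of the human condition:
# 93 billion "deaths" and 7 billion "survivors" with a uniform Beta(1,1) prior.
from scipy.stats import beta

deaths, alive = 93e9, 7e9
posterior = beta(1 + deaths, 1 + alive)   # Beta(prior + successes, prior + failures)
print(posterior.mean())   # ~0.93
print(posterior.std())    # ~8e-7: absurdly tight, as noted above
```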

The mistake is of course using still-living people as evidence. Clearly some might, hypothetically, die. So this can be viewed as an experiment in lifetime analysis suffering from type I censoring.

One approach might be to say that there is a certain probability q that someone is born immortal. What is the probability distribution for q given that all immortals are included in the set of people born less than 120 years ago? Bayes tells us:

P(q|all immortals recent) = P(all immortals recent|q) P(q)/P(all immortals recent)

We can break up the first term:

P(all immortals recent|q) = sum over N of P(all immortals recent|N immortals) P(N immortals|q)

If the probability of being born immortal is q, then P(N immortals|q) = (100 billion choose N) q^N (1-q)^(100 billion - N). This is of course strongly peaked around N = 100 billion times q.

If we assume human history to be the last 200,000 years, then the chance that a random person was born in the last 120 years is r=0.0006 (treating birth times as uniform over history). If there are N immortals, the probability of them all being in this range is r^N. So, again assuming a uniform prior for q (I doubt the Jeffreys prior changes things) and ignoring normalisation,

P(q|all immortals recent) = sum over N of r^N (100 billion choose N) q^N (1-q)^(100 billion - N)

This sum is unfortunately somewhat hard to evaluate because of the big numbers. If we make the assumption that N equals 100 billion times q with probability 1, then it simplifies to P(q|all immortals recent) = r^(100 billion * q). This is a very sharply decaying exponential in q: essentially all the probability mass lies at q=0. So, sadly, Bayes doesn't believe in immortals very much.
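A quick back-of-the-envelope check of that simplified posterior (the numbers below just restate the assumptions above):

```python
# The simplified posterior p(q) ~ r**(1e11*q) is an exponential density in q
# with rate lambda = 1e11 * ln(1/r), so essentially all mass sits near q = 0.
import numpy as np

r = 120 / 200_000                 # chance a random human is "recent", ~0.0006
lam = 1e11 * np.log(1 / r)        # decay rate of the posterior in q
print(lam)                         # ~7.4e11
print(1e11 / lam)                  # posterior mean number of immortals, ~0.13
print(np.exp(-lam * 1e-9))         # P(q > 1e-9), i.e. >~100 immortals: ~0
```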

Of course, what I am betting on is that we can change the probabilities.

Posted by Anders3 at 12:57 PM | Comments (0)

August 27, 2012

How to live forever

Indefinite Survival through Backup Copies by Anders Sandberg and Stuart Armstrong.

A little technical report for a cute little result: it is possible to have a nonzero chance of surviving for an infinite time, even when there is a nonzero chance of being destroyed per unit of time, if you make backup copies (which can themselves be destroyed) at a high enough rate. The number of backup copies needed grows only logarithmically with time, a surprisingly slow rate.
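A toy illustration of why logarithmic growth is enough (this is a deliberately simplified model, not necessarily the one in the report): if each of n(t) copies is independently destroyed with probability p per time step, and the survivors can recreate the lost copies, the lineage only dies when every copy is lost in the same step. The probability of surviving forever is then prod_t (1 - p^n(t)), which is positive exactly when sum_t p^n(t) converges - and n(t) ~ c*log(t) already makes p^n(t) ~ t^(-c*ln(1/p)), summable for large enough c.

```python
# Toy model (a simplifying assumption for illustration, not the report's model):
# each of n(t) copies is destroyed independently with probability p per step,
# and as long as one copy survives the others are re-created before the next step.
import math

def survival_prob(p, c, steps):
    """P(surviving `steps` steps) in the toy model, with n(t) = ceil(c*log(t+2))."""
    log_prob = 0.0
    for t in range(steps):
        n = math.ceil(c * math.log(t + 2))   # logarithmically growing copy count
        log_prob += math.log1p(-p ** n)      # add log of (1 - p^n), the per-step survival
    return math.exp(log_prob)

# With p = 0.5 and c = 3 the survival probability stays bounded away from zero
# as the horizon grows, instead of decaying to zero.
for steps in (10**3, 10**4, 10**5, 10**6):
    print(steps, survival_prob(p=0.5, c=3, steps=steps))
```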

There are of course complications in adapting this to the real world. Our model can handle time-varying risk, and even some forms of correlated risks. Unfortunately there are risk patterns that are not survivable at all, and the real issue will be common mode risks against the whole backup system.

It is also not a new idea by any means. Mike Perry wrote a paper about it in the 80s - R. Michael Perry, A Mathematical Model of Infinite Survival, Abiolysist Macroscope 3, 6-9 and 4, 4-9 (1986) - and presented it in 1994 at the first Extropy Institute conference. He focuses on hierarchies of records.

David A. Eubanks also has an earlier paper, arXiv:0812.0644v1 [q-bio.PE], although his focus seems to be more on the complexity of survival: how does an agent that wants to survive have to think in an unknown universe? (Some blog posts on this.)

I think our paper is the first that proves the logarithmic bound, but it is a fairly simple result.


Posted by Anders3 at 03:06 PM | Comments (0)

August 16, 2012

Big data analytics

Asking the right questions: big data and civil rights - I blog about big data and civil rights. Big, messy issue with no obvious stable answers.

The most interesting issue in my opinion is the impossibility of predicting what data or questions will produce problematic answers. As OKTrends discovered, using "England" in your profile text tells something about what kind of sex you like (if you are a woman).

Weak data ownership means we will get nasty surprises about where our data ends up and how it is used. Strong data ownership/control puts the cost on data users and gives an advantage to powerful data user groups, while reducing the many positive applications of big data. Reciprocity is hard to maintain. Maybe transparency is just the evolutionarily stable strategy: not optimal from many perspectives, but a strong attractor.

Posted by Anders3 at 09:18 PM | Comments (0)

August 15, 2012

The end of the Maes-Garreau Law?

How much should we trust predictions about future technologies? Some technologies are always a few decades away (fusion is the canonical example), others seem to strike without any prediction (stem cells? the WWW?) and others have, so far, progressed reliably (Moore's law).

Stuart Armstrong and Kaj Sotala have produced an excellent post analyzing a set of predictions about the future of AI. Among other things, they looked for evidence of the Maes-Garreau law: that people predict the arrival of AI at about the time they themselves will retire. Somewhat surprisingly, they found that this was not true. Instead, over a third of predictors claim AI will happen 16-25 years in the future, irrespective of their age. There was no strong correlation between age and expected distance into the future.

This is not necessarily good news for the predictions themselves, since they also found that there is little difference between experts and non-experts, and little difference between current predictions and those known to have been wrong previously. While this doesn't prove that people don't know what they are talking about, it should give us a certain skepticism toward confident claims.

It is also notable that the "law" has been fairly widely cited, while the underlying data supporting it seems to be a handful of claims. Kevin Kelly gives a list of 17, some of which are apparently cited erroneously. I think we should add another law of human nature to the laws discussed in the New Scientist article: as soon as a claim has "law" attached to it, people will start taking it far more seriously than it deserves.

There is a certain irony in the fact that a law intended to make people more cautious about confident claims about the future is itself so rickety.

Posted by Anders3 at 03:51 PM | Comments (0)

August 10, 2012

There is an ill wind that blows no minds

Time to apply math to important problems: the acoustics of flatulence.

It all started when I came across the light-hearted article "The mysterious forces of flatulence", which discusses the pressing question of whether flatulence could possibly propel a person. It also analyses the sound, reaching the conclusion that the characteristic frequency should be in the ultrasonic range. As the article notes, this is obviously not the case (despite occasional expressions like "he is so uptight that when he farts the dogs howl"). The post suggests that dynamical changes in the aperture cross-section play a role - let's analyse that!

The anal sphincter can be viewed as an elastic toroid or cylinder. This is actually how it is modelled in biomechanics; see for instance the review Modelling the biomechanics and control of sphincters by Heldoorn, Van Leeuwen and Vanderschoot, The Journal of Experimental Biology 204, 4013–4022 (2001). (No, the researchers are not as frivolous as I am - understanding sphincter muscles is important for making prosthetics, treating various conditions and building biomimetic systems - just consider how powerful and versatile valves they are in the body!)
The simplest models are based on the three-element Hill muscle model, in which components represent the contractile, connective and damping parts of the tissue.

Empirical data can be found in C.P. Gibbons, E.A. Trowbridge, J.J. Bannister, N.W. Read, The mechanics of the anal sphincter complex, Journal of Biomechanics, Volume 21, Issue 7, 1988, Pages 601–604, which shows that the circumferential force is, to a good approximation, linear in the dilation. The paper also shows the importance of the anal lining for maintaining continence: there are some nontrivial engineering constraints here.

OK, let's (over)simplify things.

The tension in the muscle determines its diameter, and resists (if the diameter is small enough) anything passing through. However, pressurized contents will also force the muscle to expand.

The sphincter force will be F_s = k*d + F_0, where k is a spring constant, d is the diameter and F_0 is the force when the muscle is at "rest" and closed (actually a combination of background muscle tonus and the above lining mechanics).

We can model the diameter as a dampened oscillator:

m d'' + mu*d' + k*d + (F_0 - F_c) = 0

where m is the mass, mu is a damping constant and F_c is the force from the pressurized contents. We also decree that if d < 0 it is set to 0 - at this point the incompressibility of the tissue comes into play.

What should F_c be? Clearly it depends on the internal pressure, but there might be a dependency on d too. Let's start with the simple case.

If the pressure is constant the above equation has two types of solutions:

If F_0 > F_c, d=0 is clearly an attractor: the sphincter stays closed.

If F_c > F_0 then there will be a d>0 fixed point at d=(F_c-F_0)/k. If mu < 2 sqrt(mk) there is underdamping and the approach to this point will be oscillatory with frequency sqrt(k/m - mu^2/(4m^2)).


Making the crude assumptions that m is on the order of a few grams, k is around a few hundred N/cm (say 300) and the damping is about 1 Ns/m, I get a frequency of 166 Hz. Not too implausible - and in any case, not in the ultrasound range.
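As a sanity check, here is a minimal sketch of that calculation (converting the angular frequency to Hz). The parameter values are round, made-up guesses of roughly this size, and the answer is quite sensitive to them:

```python
# Frequency of the underdamped aperture oscillator; parameter values are
# made-up round numbers, not measurements.
import math

def oscillation_frequency_hz(m, k, mu):
    """Frequency in Hz of m*d'' + mu*d' + k*d = const, if underdamped."""
    disc = k / m - mu**2 / (4 * m**2)   # square of the damped angular frequency
    if disc <= 0:
        return None                     # overdamped: no oscillation at all
    return math.sqrt(disc) / (2 * math.pi)

# Rough guesses: 10 g of tissue, k = 100 N/cm = 1e4 N/m, mu = 1 Ns/m.
print(oscillation_frequency_hz(m=0.01, k=1e4, mu=1.0))   # ~160 Hz
```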

Note that this oscillation will decay over time in this simple model. But the flow of gas could help maintain it: as the aperture narrows the pressure increases, doing work to expand the aperture and hence injecting energy into the system. So the tone can be maintained for a long time.


Still, this model only explains the high-frequency, pure-tone sounds. It does not explain the "whoopee-cushion" sound, which clearly has a far lower frequency. Getting down there requires either a suspiciously low k or another mechanism.

An obvious extension is that the gas pressure is not constant and that we have relaxation oscillations.

The theory of orifice plates is clearly applicable. Fortunately we do not have to deal with choked flows (since they would require sonic speeds, which the original blog post shows are unlikely). The volume flow rate through a circular orifice of diameter d is

Q = C_d pi d^2/4 sqrt(2 DeltaP/rho)

where C_d is the discharge coefficient (0.61 for a flat plate orifice, 1 for a short tube), DeltaP the pressure difference in Pascals and rho the gas density.

Modelling the pressure and density seems to be a bit complicated; one can model the rectum as a balloon where the pressure depends on radius and wall tension. (Real balloons and organs have fairly complicated inflation behaviour with stiffness and hysteresis, see for example W. A. Osborne and W. Sutherland, The Elasticity of Rubber Balloons and Hollow Viscera, Proceedings of the Royal Society of London. Series B, Containing Papers of a Biological Character Vol. 81, No. 551 (Nov. 23, 1909), pp. 485-499). Using the ideal gas law it is possible to link pressure and density and model their change as gas is let out. However, things get rather nonlinear.

I decided to use a far simpler model where the pressure is reduced by the flow and is restored at a constant rate:

d(DeltaP)/dt = -k1*Q + k2

where k1 and k2 are constants.


This produces relaxation oscillations. The sphincter lets out gas, the pressure declines, and the sphincter closes (fast). The pressure then builds up again, but the sphincter opens relatively slowly and the flow depends on the square of the diameter, so the pressure gets high enough to bring the system back to the original open state.

Depending on the parameters the oscillations can have multiple sizes. The most important thing is that it does not seem hard to get oscillations slow enough to explain the low-frequency noises.
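For completeness, here is a crude sketch of how the coupled model could be integrated numerically (explicit Euler). Every parameter value is a made-up guess, and the effective area A that turns DeltaP into a force on the sphincter is my own added assumption; whether a given set of guesses lands in the steadily-leaking or the oscillating regime is left for the reader to explore.

```python
# Crude Euler integration of the coupled aperture/pressure model sketched above.
import numpy as np
import matplotlib.pyplot as plt

m, mu, k, F0 = 0.01, 0.5, 100.0, 0.3   # kg, Ns/m, N/m, N -- rough guesses
Cd, rho = 0.61, 1.2                     # flat-plate discharge coefficient, gas density (kg/m^3)
A = 1e-4                                # assumed effective area (m^2) coupling DeltaP to force
k1, k2 = 1e9, 5e4                       # pressure drain (Pa/m^3) and recharge (Pa/s) constants

dt, T = 1e-5, 1.0
steps = int(T / dt)
d, v, dP = 0.0, 0.0, 0.0                # aperture diameter, its velocity, pressure difference
trace = np.zeros(steps)

for i in range(steps):
    Fc = A * dP                                   # force from the pressurized contents
    a = (Fc - F0 - k * d - mu * v) / m            # m d'' + mu d' + k d + (F_0 - F_c) = 0
    v += a * dt
    d += v * dt
    if d < 0:                                     # incompressible tissue: no negative diameter
        d, v = 0.0, 0.0
    Q = Cd * np.pi * d**2 / 4 * np.sqrt(2 * max(dP, 0.0) / rho)  # orifice volume flow
    dP += (k2 - k1 * Q) * dt                      # flow drains the pressure, the body restores it
    trace[i] = d

plt.plot(np.arange(steps) * dt, trace)
plt.xlabel("time (s)")
plt.ylabel("aperture diameter (m)")
plt.show()
```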

Obviously, for predictive purposes (i.e. when investigating how to achieve a silent one, or when deducing bodily parameters from sampled sound spectra) a full model of rectal tension and gas density, as well as more carefully chosen material parameters, would be needed. But I think this serves as a preliminary analysis of how the different frequencies of embarrassment are produced. It is all about the eigenvalues.

Posted by Anders3 at 11:36 PM | Comments (0)

August 02, 2012

See me shoot myself in the leg

The human engineering paper is now officially up:

Taylor & Francis Online :: Ethics, Policy & Environment - Volume 15, Issue 2 - complete with commentary. Somewhat unsurprisingly, the commentary is mostly sceptical.

Posted by Anders3 at 05:45 PM | Comments (0)