April 23, 2007

What is this?

This is my quick-and-dirty way of making a forum for filling in the big holes in my review of brain emulation. I will add a series of dummy posts corresponding to the different sections, and you can add comments to them. (I have added some of the raw text from my manuscript to them.) I will then take the results and incorporate them into the master document.

Posted by Anders3 at 04:22 PM | Comments (61145)

Draft 1 of Brain Emulation Review

Download file

Posted by Anders3 at 04:22 PM | Comments (5712)

Introduction

Introduction, The Concept of Brain Emulation, Emulation and Simulation, History and Previous Work

Brain emulation, the possible future one-to-one modelling of the function of the human brain, is academically interesting and important for several reasons:

• Philosophy
o Brain emulation would itself be a test of many ideas in the philosophy of mind and philosophy of identity, or provide a novel context for thinking about such ideas.
o It may represent a radical form of human enhancement different from other forms.
• Research
o Brain emulation is the logical endpoint of computational neuroscience’s attempts to accurately model neurons and brain systems.
o Brain emulation would help understand the brain, both in the lead-up to successful emulation and afterwards by providing a perfect test bed for neuroscience experimentation and study.
o Neuromorphic engineering based on partial results would be useful in a number of applications such as pattern recognition, AI and brain-computer interfaces.
o As a research goal it might be a strong vision to stimulate computational neuroscience.
o As an exercise in future studies it represents a case where a radical future possibility can be examined in the light of current knowledge.
• Economics
o The economic impact of copyable brains would be immense, and have profound societal consequences.
• Individually
o If brain emulation of particular brains is possible and affordable, and if the concerns of individual identity can be met, such emulation would enable backup copies and “digital immortality”.

Brain emulation is so far a theoretical technology. This makes it vulnerable to speculation, “handwaving” and untestable claims. As proposed by Nick Szabo, “falsifiable design” is a way of curbing the problems with theoretical technology:

…the designers of a theoretical technology in any but the most predictable of areas should identify its assumptions and claims that have not already been tested in a laboratory. They should design not only the technology but also a map of the uncertainties and edge cases in the design and a series of such experiments and tests that would progressively reduce these uncertainties. A proposal that lacks this admission of uncertainties coupled with designs of experiments that will reduce such uncertainties should not be deemed credible for the purposes of any important decision. We might call this requirement a requirement for a falsifiable design. (Szabo 2007)

In the case of brain emulation this would mean not only sketching how a brain emulator would work if it could be built and a roadmap of technologies needed to implement it, but also a list of the main uncertainties in how it would function and proposed experiments to reduce these uncertainties.

This paper is an attempt to list some of the known facts, assumptions and ideas for how to implement brain emulation in order to facilitate developing a real falsifiable design roadmap.

The Concept of Brain Emulation

Brain emulation, often informally called “uploading” or “downloading”, has been the subject of much science fiction and also some preliminary studies. Some approaches to emulation are described below.

Brain emulation would in one sense be the conclusion of neuroinformatics, the science of handling and processing neuroscience data: a database containing all relevant information about a brain, together with an update rule that would allow this information to change in time as in a real brain. It would not require or necessarily imply a total understanding of the brain and its functions, only that the details and components are well understood. A functional understanding is logically separate from detailed knowledge; it may be a possible result, or it may help gather only the information truly needed, but it is entirely possible that we could acquire full knowledge of the component parts and interactions of the brain without gaining an insight into how these produce (say) consciousness.

Even a database merely containing the complete “parts list” of the brain, including the morphology of its neurons and the locations, sizes and types of synaptic connections, would be immensely useful for research. It would enable data-driven research in the same way as genomics has done in the field of cell biology (Fiala 2002).


Computational neuroscience attempts to understand the brain by making mathematical or software models of neural systems. Current models are usually far simpler than the studied systems, with the exception of some small neural networks such as the lobster stomatogastric ganglion (Nusbaum and Beenhakker 2002) and the locomotor network of the lamprey spinal cord (Kozlov et al. 2007). Often models involve a combination of simplified parts (simulated neurons and synaptic learning rules) and network structures (subsampling of biological neurons, simple topologies). Such networks can constitute learning or pattern-recognizing systems in their own right: artificial neural networks (ANNs). ANN models can be used to qualitatively model, explain and analyze the functions of brain systems (Rumelhart, McClelland and the PDP Research Group 1986). Connectionist models build more complex models of cognition or brain function on these simpler parts. The end point of this pursuit would be models that encompass a full understanding of the function of all brain systems. Such qualitative models might not exhibit intelligence or the complexity of human behaviour, but would enable a formalized understanding of how they come about from simple parts.

Another approach in computational neuroscience involves creating more biologically realistic models, where information about the biological details of neurons such as their electrochemistry, biochemistry, detailed morphology and connectivity are included. At its simplest we find compartment models of individual neurons and synapses, while more complex models include multiple realistic neurons connected into networks, possibly taking interactions such as chemical volume transmission into account. This approach can be seen as a quantitative understanding of the brain, aiming for a complete list of the biological parts (chemical species, neuron morphologies, receptor types and distribution etc.) and modelling as accurately as possible the way in which these parts interact. Given this information increasingly large and complex simulations of neural systems can be created. Brain emulation represents the logical conclusion of this kind of quantitative model: a 1-to-1 model of brain function.

Note that the amount of functional understanding needed to achieve a 1-to-1 model is minimal. Its behaviour is emergent from the low-level properties, and may or may not be understood by the experimenters. For example, if coherent oscillations are important for conceptual binding and these emerge from the low-level properties of neurons and their networks, a correct and complete simulation of these properties will produce the coherence.

In practice computational neuroscience works in between quantitative and qualitative models. Qualitative models are used to abstract complex, uncertain and potentially irrelevant biological data, and often provide significant improvements in simulation processing demands (in turn enabling larger simulations, which may enable exploration of domains of more interest). Quantitative models are more constrained by known biology, chemistry and physics but often suffer from an abundance of free parameters that have to be set. Hybrid models may include parts using different levels of abstraction, or exist as a family of models representing the same system at different levels of abstraction.


Emulation and Simulation

The term emulation originates in computer science, where it denotes mimicking the function of a program or computer hardware by having its low-level functions simulated by another program. While a simulation mimics the outward results, emulation mimics the internal causal process. The emulation is regarded as successful if the emulated system produces the same behaviour and results as the original (possibly with a speed difference). This is somewhat softer than a strict mathematical definition.

A universal Turing machine can emulate any other Turing machine, and according to the Church-Turing thesis it can compute any effectively computable function. The physical Church-Turing thesis claims that “every function that can be physically computed can be computed by a Turing machine.” This is the basis for brain emulation: if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine.

In the following, emulation will refer to a 1-to-1 model where all relevant properties of a system exist, while a simulation will denote a model where only some properties exist.

By analogy with a software emulator, we can say that a brain emulator is software (and possibly dedicated non-brain hardware) that models the state of a brain to a high degree.

In particular, a mind emulation is a brain emulator that is detailed and correct enough to produce the phenomenological effects of a mind.

A person emulation is a mind emulation that emulates a particular mind.

What the “relevant properties” are is a crucial issue. In terms of software this is often the bits stored in memory and how they are processed. A computer emulator may emulate the processor, memory, I/O and so on of the original computer, but does not simulate the actual electronic workings of the components, only their qualitative function on the stored information (and its interaction with the outside world). While lower-level emulation may be possible it would be inefficient and not contribute much to the functions that interest us.

Depending on the desired success criterion emulation may require different levels of detail. In the computer example, emulating the result of a mathematical calculation may not require simulating all operating system calls for math functions (since these can be done more efficiently by the emulating computer’s processor) while emulating the behaviour of an analogue video effect may require a detailed electronics simulation.

A widely reproduced image from (Churchland and Sejnowski 1992) depicts the various levels of organisation in the nervous system, running from the molecular level to the entire system. Simulations (and possibly emulations) can occur on all levels:

Molecular simulation (individual molecules)
Molecular simulation (concentrations, law of mass action)
Genetic expression
Compartment models (subcellular volumes)
Whole cell models (individual neurons)
Local network models (replaces neurons with network modules such as minicolumns)
System models

For the brain, several levels of success criteria for emulation can be used.

Level 1, “Brain database”
Success criterion: The emulation contains a 1-to-1 mapping of neural structure, chemistry and dynamics down to a particular resolution.
Relevant properties: Low-level neural structure, chemistry and dynamics accurate to the resolution level.

Level 2, “Brain emulation”
Success criterion: The emulation produces emergent activity of the same kind as a brain.
Relevant properties: Correct causal dynamics.

Level 3, “Person emulation”
Success criterion: The emulation produces emergent activity of the same kind as a particular brain. Outsiders would recognize the person.

Level 4, “Mind emulation”
Success criterion: The emulation produces conscious states of the same kind as would have been produced by the particular brain being emulated.
Relevant properties: Consciousness.

Level 5, “Identity emulation”
Success criterion: The emulation (consciously) regards itself as a continuation of the original mind.
Relevant properties: Self-consciousness.

Achieving the first success criterion beyond a certain resolution would, assuming materialism, imply success of some or all of the other criteria. A full quantum-mechanical N-body or field simulation encompassing every particle within a brain would plausibly suffice even if “quantum mind” theories are correct. At the very least a 1-to-1 material copy of the brain (a somewhat inflexible and very particular kind of emulating computer) appears to achieve all five criteria. However, this is likely an excessively detailed level since the particular phenomena we are interested in (brain function, psychology, mind) appear to be linked to more macroscopic phenomena than detailed atomic activity.

HYPOTHESIS: At some intermediary level of simulation resolution between the atomic and the macroscopic there exists one (or more) cut-offs representing where criterion 1 implies one or more of the other criteria.

An important issue to be determined is where this cut-off lies in the case of the human brain. While this paper phrases it in terms of simulation/emulation, it is encountered in a range of fields (AI, cognitive neuroscience, philosophy of mind) in other forms: what level of organisation is necessary for intelligent, personal and conscious behaviour?

Given the complexities and conceptual issues of consciousness we will not examine criteria 4-5, but will mainly examine achieving criteria 2-3. It should however be noted that if philosophical zombies are disallowed, criterion 3 seems to imply criteria 4-5.

History and Previous Work

The earliest origins of the mind emulation idea can perhaps be traced back to J.D. Bernal’s The World, The Flesh, The Devil (1929), where he writes:

Men will not be content to manufacture life: they will want to improve on it. For one material out of which nature has been forced to make life, man will have a thousand; living and organized material will be as much at the call of the mechanized or compound man as metals are to-day, and gradually this living material will come to substitute more and more for such inferior functions of the brain as memory, reflex actions, etc., in the compound man himself; for bodies at this time would be left far behind. The brain itself would become more and more separated into different groups of cells or individual cells with complicated connections, and probably occupying considerable space. This would mean loss of motility which would not be a disadvantage owing to the extension of the sense faculties. Every part would now be accessible for replacing or repairing and this would in itself ensure a practical eternity of existence, for even the replacement of a previously organic brain-cell by a synthetic apparatus would not destroy the continuity of consciousness.

Finally, consciousness itself may end or vanish in a humanity that has become completely etherealized, losing the close-knit organism, becoming masses of atoms in space communicating by radiation, and ultimately perhaps resolving itself entirely into light.

Bernal’s vision corresponds to a gradual replacement of biology with artificial parts, gradually making it unnecessary to keep the brain in one location.

In the science fiction novel The City and the Stars (1956) Arthur C. Clarke described a far future city where bodies are manufactured by the central computer, minds stored in its databanks are downloaded into them, and when an inhabitant dies their mind is stored yet again in the computer, allowing countless reincarnations.
Other early science fiction treatments were Roger Zelazny’s Lord of Light (1968), Bertil Mårtensson’s Detta är verkligheten (1968) and Rudy Rucker’s Software (1979). Since then mind emulation (“uploading”) has become a staple of much science fiction. Of particular note in terms of technological and philosophical details are the novels and short stories by Greg Egan (Permutation City, Diaspora, Learning to be Me, Transition Dreams etc).

Brain (and mind) emulation has also been widely discussed in philosophy of mind, although more as Gedankenexperimente than possible actual practice (e.g. (Searle 1980; Parfit 1984; Chalmers 1995)).

The first attempt at a careful analysis of brain emulation was the technical report (Merkle 1989b), predicting that “a complete analysis of the cellular connectivity of a structure as large as the human brain is only a few decades away”. The report reviewed automated analysis and reconstruction methods, going into great detail on the requirements needed for parallel processing of brain samples using electron microscopes and image analysis software. It also clearly listed assumptions and requirements, a good example of falsifiable design.

The first popularization of a technical description of a possible mind emulation scenario was found in Hans Moravec’s Mind Children (1988), where the author describes the gradual neuron-by-neuron replacement of a (conscious) brain with software. Other forms of emulation are also discussed.

(Hanson 1994) was the first look at the economic impact of copyable minds, showing that brain emulation (even if it is not true person emulation) would likely cause significant economic and demographic changes.

One sketch of a person emulation scenario (Leitl 1995) starts out with cryonic suspension of the brain, which is then divided into cubic blocks smaller than 1 mm across. The blocks can individually be thawed for immunostaining or other contrast enhancement. For scanning, various methods are proposed: X-ray Fresnel/holographic diffraction, X-ray or neutron beam tomography (all risking radiation damage and possibly requiring strong staining), transmission EM (requires very thin samples), UV-abrasion of immunostained tissue with mass spectrometry, or an abrasive atomic force microscope scan. While detailed in terms of cryosuspension methods, the sketch becomes less detailed regarding the actual scanning method and the implementation of the emulation.

Papers and pages:
http://compbiol.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pcbi.0010042
The Human Connectome: A Structural Description of the Human Brain

http://www.nature.com/nature/journal/v445/n7124/full/445160a.html
Industrializing neuroscience

http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=/iel5/9855/31040/01443305.pdf?arnumber=1443305&code=2
Modeling and Simulating the Brain as a System (abstract)

Some websites related to brain emulation:
http://minduploading.org/
http://www.ibiblio.org/jstrout/uploading/
http://www.aleph.se/Trans/Global/Uploading/
http://www.foresight.org/Nanomedicine/Uploading.html
http://en.wikipedia.org/wiki/Downloading_consciousness

Posted by Anders3 at 04:29 PM | Comments (57769)

Structure of a Brain Emulator System

Structure of a Brain Emulator System, Requirements of Brain Emulation, Brain Data, Complications and Exotica

Structure of a Brain Emulator System

Brain emulation will at the very least include a brain model, a body model and an environment model. The brain model is the main goal but likely requires at least a passable body simulation linked to a passable environment model in order to provide the right forms of input and output.

The brain emulator performs the actual emulation of the brain and closely linked subsystems such as brain chemistry. The result of its function is a series of states of emulated brain activity. The emulation produces and receives neural signals corresponding to motor actions and sensory information (in addition some body state information such as glucose levels may be included).

The body simulator contains a model of the body and its internal state. It produces sensory signals based on the state of the body model and the environment, sending them to the brain emulation. It converts motor signals to muscle contractions or direct movements in the body model. The degree to which different parts of the body require accurate simulation is likely variable.

The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc.

The overall emulation software system (the “exoself” to borrow Greg Egan’s term) would regulate the function of the simulators and emulator, allocate computational resources, collect diagnostic information, provide security (e.g. backups, firewalls, error detection, encryption) and so on. It could provide software services to emulated minds (accessed through the virtual environment) and/or outside experimenters.

A variant of the above system would be an embodied brain emulation, in which case the body simulator would merely contain the translation functions between neural activity and physical signals, and these would then be actuated using a hardware body. The body might be completely artificial (in which case motor signals have to be mapped onto appropriate body behaviours) or biological but equipped with nerve-computer interfaces enabling sensing and control. The computer system running the emulation does not have to be physically present in the body.

It is certainly possible to introduce signals from the outside on higher levels than in a simulated or real body. It would be relatively trivial to add visual or auditory information directly to the body model and have them appear as virtual or augmented reality. Introducing signals directly into the brain emulation would require them to make sense as neural signals (e.g. brain stimulation or simulated drugs). “Virtual brain-computer interfaces” with perfect clarity and no risk of side effects could be implemented as extensions of the body simulation/interface.

Requirements of Brain Emulation

Key questions are:

How much of the various capacities are needed for:
a. intermediary applications (neuroscience research, software prosthetics etc)
b. sufficient emulation to enable easy construction of human-level AI
c. any functional upload at any speed
d. any functional upload at human speed
e. identity preserving upload

Resources and prerequisites
• computational demands
o memory
o bandwidth
o processing
• software demands
• neuron emulation
• body simulation
• world simulation

Brain Data

An average human brain has width 140 mm, length 167 mm and height 93 mm, with a volume of about 1400 ml (Rengachary and Ellenbogen 2005).

A common estimate of the number of neurons in the human brain is about 100 billion, with 10-50 times that number of glial cells. The main cause of uncertainty in the human neuron count is the small granule cells of the cerebellum, which may equal or even outnumber the other neurons. According to (Lange 1975) the total neuron number is 85 billion, with 12-15 billion telencephalic neurons, 70 billion granule cells and fewer than 1 billion brainstem and spinal neurons. A more recent study gives 15-31 billion neurons in the cortex with an average of 21 billion, 5-8 billion neurons in other forebrain structures and a total number of 95-100 billion neurons (Pakkenberg and Gundersen 1997). This study also found a 16% difference in neuron number between men and women, and a factor of 2 difference between the richest and poorest brains. The average neuron density was 44 million per cubic centimetre.

Approximately 10% of all neocortical neurons are lost across life (which may be relevant for estimating the amount of ‘lossiness’ emulation may be able to get away with without loss of individuality). The average number of neocortical glial cells in young adults is 39 billion and in older adults 36 billion (Pakkenberg et al. 2003).

Neuron soma sizes range from 4 μm (granule cells) to 100 μm (motor neurons).

Traditionally neurons have been categorized by their shape and location (e.g. pyramidal cells, basket cells, Purkinje cells), expression of substances (e.g. cholinergic, glutamatergic, calbindin-expressing), function (excitatory, inhibitory, modulatory), behaviour (e.g. fast spiking, regular bursting, chattering), cell lineage or other properties.

Number of neuron types
http://www.nervenet.org/papers/NUMBER_REV_1988.html#1

Of the neurons, 75-80% are pyramidal cells and 20-25% are inhibitory interneurons.

The average number of synapses per neuron varies by more than an order of magnitude between species, from about 2,000 to 20,000, with humans averaging about 8,000 synapses per neuron (Braitenberg and Schuz 1998). Some kinds of neurons such as Purkinje cells may have on the order of 200,000 synaptic spines. (Pakkenberg et al. 2003) found 0.15e15 synapses in the neocortex.
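As a quick cross-check that these figures are mutually consistent, the arithmetic below uses only the numbers quoted above (a back-of-envelope sketch, nothing more):

# Back-of-envelope cross-check of the synapse numbers quoted above.
neocortical_neurons = 21e9        # average cortical neuron count (Pakkenberg and Gundersen 1997)
synapses_per_neuron = 8000        # human average (Braitenberg and Schuz 1998)
print(neocortical_neurons * synapses_per_neuron)   # ~1.7e14, same order as the
                                                   # 0.15e15 neocortical synapses of
                                                   # Pakkenberg et al. 2003
whole_brain_neurons = 1e11
print(whole_brain_neurons * synapses_per_neuron)   # ~8e14 synapses brain-wide, if the
                                                   # cortical average held everywhere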

cortical connectivity relevant?


Synapses release neurotransmitters by fusing vesicles with the membrane. A study in cultured leech neurons found that small vesicles release on the order of 4700 transmitter molecules with a time constant of 260 μs while large vesicles released 80,000 molecules with a time constant of 1.3 ms (Bruns and Jahn 1995). The amount of transmitter appears to depend only on volume of the vesicles (Bruns et al. 2000).

Individual neurons transmit not just one neurotransmitter, but often contain multiple messengers (often a small molecule neurotransmitter and a peptide neuromodulator) that may be released simultaneously or depending on neuron state (Millhorn and Hokfelt 1988; Salio et al. 2006).


The brain has a high degree of locality. Neocortical neurons tend to be organized into columns with high internal connectivity, sending out collaterals to other columns and subcortical targets. Up to 70% of the excitatory synapses on pyramidal cells come from less than 0.3 mm away (Calvin 1995). Cortical areas do not form a fully connected graph on the macroscopic level (although it is possible that a few rare axons connect any area to any other). The overall impression is a small-world network where there is a high degree of local connectivity and a small amount of long-range connectivity that nevertheless makes most modules close to most other modules (or indeed fully connected, qv. the network model in (Fransen and Lansner 1998)).

The narrowest neural structures in the brain are unmyelinated axons, ≈100 nm in diameter (Shepherd and Harris 1998), and the necks of dendritic spines, ≈50 nm in diameter (Fiala and Harris 1999). Scanning methods for creating brain emulations need to achieve better than this resolution, or have a method of tracing the processes.


Conduction velocity of action potentials varies from 0.6 to 120 m/s (depending on myelination and axon width).


What ion channels exist
What receptors exist
What properties do they have
What are the internal systems of neurons
How many neuron types are there
Discrete or continuous?
If discrete, is it enough to just know the type?

Is the cortical circuit stereotypical? If so, that could help repair errors in scanning.

Besides the small molecule neurotransmitters (suitable for fast transmission across synapses) there is a large class of neuropeptides that serve a modulatory or hormonal function (besides occasionally acting as transmitters). They are often co-released together with neurotransmitters (Salio et al. 2006). A large number (41+) of families of neuropeptides are known (Hokfelt et al. 2000).

Complications and Exotica

Besides straight neural transmission through synapses, there may be numerous other forms of information processing in the brain that have to be emulated. How important they are for successful emulation remains uncertain. An important application of early brain emulations and their precursors will be to enable testing of their influence.

Dynamical State

The methods for creating the necessary data for brain emulation discussed in this paper deal with just the physical structure of the brain tissue, not its state of activity. Some information such as working memory may be stored just as ongoing patterns of neural excitation and would be lost. However, loss of brain activity does not seem to prevent the return of function and personal identity (e.g. coma patients waking up again, cold water near-drowning cases). Similarly information in calcium concentrations, synaptic vesicle depletion and diffusing neuromodulators may be lost during scanning. A likely consequence would be amnesia of the time closest to the scanning.

Spinal Cord

Do we need to include the spinal cord? While traditionally often regarded as little more than a bundle of motor and sensor axons together with a central column of stereotypical reflex circuits and pattern generators, there is evidence that the processing may be more complex (Berg, Alaburda and Hounsgaard 2007) and that learning processes occur among spinal neurons (Crown et al. 2002). The networks responsible for standing and stepping are extremely flexible and unlikely to be hardwired (Cai et al. 2006).

This means that emulating just the brain part of the central nervous system will lose much body control that has been learned and resides in the non-scanned cord. On the other hand, it is possible that a generic spinal cord network would adapt when attached to the emulated brain (requiring only scanning and emulating one spinal cord, as well as finding a way of attaching the spinal emulation to the brain emulation). But even if this is true, the time taken may correspond to rehabilitation timescales of (subjective) months, during which time the simulated body would be essentially paralysed. This might not be a major problem for personal identity in mind emulations (since people suffering spinal injuries do not lose personal identity), but it would be a major limitation to their usefulness and might limit development of animal models for brain emulation.

Volume transmission

Surrounding the cells of the brain is the extracellular space, on average 200 Å across and corresponding to 20% of brain volume (Nicholson 2001). It transports nutrients and buffers ions, but may also enable volume transmission of signaling molecules.

Volume transmission of small molecules appears fairly well established. Nitric oxide is hydrophobic and has a low molecular weight and can hence diffuse relatively freely through membranes: it can reach up to 0.1-0.2 mm away from a release point under physiological conditions (Malinski et al. 1993; Schuman and Madison 1994; Wood and Garthwaite 1994). While mainly believed to be important for autoregulation of blood supply, it may also have a role in memory (Ledo et al. 2004).

Larger molecules have their relative diffusion speed reduced by the limited geometry of the extracellular space, both in terms of its tortuosity and its anisotropy (Nicholson 2001). Signal substances such as dopamine exhibit volume transmission (Rice 2000), and this may affect potentiation of nearby synapses during learning: simulations show that a single synaptic release can be detected up to 20 μm away and with a 100 ms half-life (Cragg et al. 2001).

Rapid and broad volume transmission such as that from nitric oxide can be simulated using a relatively coarse spatiotemporal grid, while local transmission requires a grid with a spatial scale close to the neural scale if diffusion is severely hindered.
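To illustrate the grid-size argument, the sketch below computes rough diffusion lengths and the timestep an explicit finite-difference diffusion solver would need at a given voxel size. The diffusion coefficients are assumed order-of-magnitude values for illustration, not figures taken from the cited papers.

import math

def diffusion_length(D, t):
    """RMS distance (um) a molecule with diffusion coefficient D (um^2/s)
    spreads in t seconds, for free 3D diffusion."""
    return math.sqrt(6.0 * D * t)

def max_explicit_timestep(D, dx):
    """Stability limit dt <= dx^2 / (6 D) for an explicit 3D finite-difference
    solver of the diffusion equation on voxels of side dx (um)."""
    return dx**2 / (6.0 * D)

# Rough, assumed diffusion coefficients (um^2/s); order of magnitude only.
D_gas = 3300.0       # small hydrophobic gas diffusing freely
D_dopamine = 240.0   # larger molecule, slowed by the tortuous extracellular space

print(diffusion_length(D_gas, 1.0))            # ~140 um in 1 s: a coarse grid suffices
print(diffusion_length(D_dopamine, 0.1))       # ~12 um in 100 ms: grid must be um-scale
print(max_explicit_timestep(D_gas, 50.0))      # ~0.13 s timestep allowed on a 50 um grid
print(max_explicit_timestep(D_dopamine, 2.0))  # ~3 ms timestep needed on a 2 um grid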

Glia cells

Glia cells have traditionally been regarded as merely supporting actors to the neurons, but recent results suggest that they play a fairly active role in neural activity. Besides the important role of myelination for increasing neural transmission speed, at the very least they have strong effects on the local chemical environment of the extracellular space surrounding neurons and synapses.

Glial cells exhibit calcium waves that spread along glial networks and affect nearby neurons (Newman and Zahs 1998). They can both excite and inhibit nearby neurons through neurotransmitters (Kozlov et al. 2006). Conversely, the calcium concentration of glial cells is affected by the presence of specific neuromodulators (Perea and Araque 2005). This suggests that the glial cells act as an information processing network integrated with the neurons (Fellin and Carmignoto 2004).

If glial processing is significant, brain emulation will have to emulate the glia cells in the same way as neurons, increasing the computational demands by at least one order of magnitude. However, the time constants for glial calcium dynamics are generally far slower than the dynamics of action potentials (on the order of seconds or more), suggesting that the time resolution does not have to be as fine.

Synaptic Adaptation

Synapses are usually characterized by their “strength”, the size of the postsynaptic potential they produce. Many (most?) synapses in the CNS also exhibit depression and/or facilitation: a temporary change in release probability caused by repeated activity (Thomson 2000). These rapid dynamics likely play a role in a variety of brain functions, such as temporal filtering (Fortune and Rose 2001), auditory processing (Macleod, Horiuchi and Carr 2007) and motor control (Nadim and Manor 2000). The changes occur on timescales longer than neural activity (tens of milliseconds) but shorter than long-term synaptic plasticity (minutes to hours).
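As an illustration, here is a minimal sketch of how such short-term dynamics are commonly modelled phenomenologically, along the general lines of the Tsodyks-Markram depression/facilitation equations; the parameter values are arbitrary illustrations rather than fitted values from the cited studies.

import math

def short_term_plasticity(spike_times, U=0.2, tau_rec=0.5, tau_facil=0.05):
    """Return the relative efficacy of each spike in a train.
    x tracks available resources (depression), u the release probability
    (facilitation). Times and time constants are in seconds."""
    x, u, last_t = 1.0, U, None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resources recover toward 1
            u = U + (u - U) * math.exp(-dt / tau_facil)     # facilitation decays toward U
        u = u + U * (1.0 - u)     # facilitation step at the spike
        efficacies.append(u * x)  # fraction of resources actually released
        x = x - u * x             # depression: released resources are used up
        last_t = t
    return efficacies

# A 20 Hz train typically facilitates briefly and then depresses.
print(short_term_plasticity([0.05 * i for i in range(10)]))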

Body chemical environment

The body acts as an input/output unit that interacts with our perception and motor activity. It also acts as a chemical environment that affects the brain through nutrients, hormones, salinity, dissolved gases and possibly immune signals. Most of these chemical signals occur on a subconscious level and only become apparent when they influence e.g. hypothalamus to produce hunger or thirst sensations. For a brain emulation some or all of this chemical environment has to be simulated.

What is the resolution of chemical signals in the brain?
How many species of neurotransmitters?

Neurogenesis

Neurogenesis and stem cells may play a role. During neurite outgrowth, and possibly afterwards, cell adhesion proteins can affect gene expression and possibly neuron function by affecting second messenger systems and calcium levels (Crossin and Krushel 2000). Recent results show that neurogenesis persists in some brain regions in adulthood, and may have nontrivial functional consequences (Saxe et al. 2007).

Since neurogenesis occurs on fairly slow timescales (> 1 week) compared to brain activity and normal plasticity it may perhaps be ignored in brain emulation if the goal only is an emulation that is intended to function faithfully for a few days and not to exhibit truly long-term memory consolidation or adaptation.

Simulating stem cell proliferation would require data structures representing different cells and their differentiation status, data on what triggers neurogenesis and models allowing for the gradual integration of the cells into the network. Such a simulation would involve modelling the geometry and mechanics of cells, possibly even tissue differentiation.

Ephaptic effects

Electrical field effects may also play a role, so-called “ephaptic transmission”. In a high-resistance environment, currents from action potentials are forced to flow through neighbouring neurons, changing their excitability.
It has been claimed that ephaptic interactions constitute a form of communication in the brain, in particular in the hippocampus (Krnjevic 1986). However, in most parts of the brain the extracellular space is comparatively large and myelin blocks such current flow, so even if ephaptic interactions play a role they do so only locally, e.g. in the olfactory system (Bokil et al. 2001), dense demyelinated nerve bundles (Reutskiy, Rossoni and Tirozzi 2003) or trigeminal pain syndromes (Love and Coakham 2001).

If ephaptic effects are important, the emulation needs to take the locally induced electromagnetic fields into account. This would plausibly involve dividing the extracellular space (possibly also the intracellular space) into finite elements where the field can be assumed to be constant, linear or otherwise easily approximable.

Quantum computation

While practically all neuroscientists subscribe to the dogma that neural activity is a phenomenon that occurs on a classical scale, there have been proposals (mainly from physicists) that quantum effects play an important role in the function of the brain (Hameroff 1987; Penrose 1989). So far there is no evidence for quantum effects in the brain beyond quantum chemistry or that they play an important role for intelligence or consciousness (Litt et al. 2006). There is no lack of possible computational primitives in neurobiology nor any phenomena that appear unexplainable in terms of classical computations (Koch and Hepp 2006). Quantitative estimates for decoherence times for ions during action potentials and microtubules suggest that they decohere on a timescale of 1e-20 – 1e-13 s, about ten orders of magnitude faster than the normal neural activity timescales. Hence quantum effects are unlikely to persist long enough to affect processing (Tegmark 2000). This has however not deterred supporters of quantum consciousness, arguing that there may be mechanisms protecting quantum superpositions over significant periods (Hagan, Hameroff and Tuszynski 2002; Rosa and Faber 2004).

If these proposals hold true, brain emulation will be significantly more complex but not impossible, given the right (quantum) computer. In (Hameroff 1987) mind emulation is considered in terms of quantum cellular automata based on the microtubule network that the author suggests underlies consciousness.

Assuming 7.1 microtubules per square micrometer and an average length of 768.9 micrometers (Cash et al. 2003), and that 1/30 of brain volume is neurons (although given that microtubule networks occur in all cells, glia – and any other cell type! – may count too), gives 10^16 microtubules. If each stores just a single quantum bit this would correspond to a 10^16 qubit system, requiring a physically intractable 2^10^16 bit classical computer to emulate. If only the microtubules inside a cell act as a quantum computing network, the emulation would have to include 10^11 connected 130,000-qubit quantum computers. Another calculation, this one assuming merely classical computation in microtubules, suggests 10^19 bytes per brain operating at 10^28 FLOPS (Tuszynski 2006). The main problem with these calculations is that they produce such a profoundly large computational capacity on a subneural level that a macroscopic brain seems unnecessary (especially since neurons are metabolically costly).

Analog computation and randomness

A surprisingly common doubt expressed about the possibility of even simulating simple neural systems is that they are analog rather than digital. The doubt is based on the assumption that there is an important qualitative difference between continuous and discrete variables. To some degree this is more a philosophical issue: if a continuous system is simulated by a discrete system, can it ever be an emulation? If computations in the brain make use of the full power of continuous variables the brain may essentially be able to achieve “hypercomputation”, enabling it to calculate things an ordinary Turing machine cannot (Siegelmann and Sontag 1995; Ord 2006). However, brains are made of imperfect structures in turn made of discrete atoms obeying quantum mechanical rules forcing them into discrete energy states, possibly also limited by a spacetime that is discrete on the Planck scale (as well as noise, see below) and so it is unlikely that the high precision required of hypercomputation can be physically realised (Eliasmith 2001).

A discrete approximation of an analog system can be made arbitrarily exact by refining the resolution. If an M-bit value is used to represent a continuous signal, the signal-to-noise ratio is approximately 20 log_10(2^M) dB (assuming a uniform distribution of discretization errors, which is likely for large M). This can relatively easily be made smaller than natural noise sources such as unreliable synapses, thermal or electrical noise. The thermal noise is on the order of 4.2e-21 J, which suggests that energy differences smaller than this can be ignored unless they occur in isolated subsystems or on timescales fast enough not to thermalize. Field potential recordings commonly have fluctuations on the order of millivolts due to neuron firing, and a background noise on the order of tens of microvolts. Again this suggests a limit to the necessary precision of simulation variables.
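A worked example of the discretization argument; the millivolt signal and tens-of-microvolt noise figures are simply the orders of magnitude mentioned above, used here for illustration:

import math

def quantization_snr_db(M):
    """Approximate signal-to-noise ratio of an M-bit uniform quantizer."""
    return 20.0 * math.log10(2.0 ** M)   # roughly 6.02 dB per bit

print(quantization_snr_db(8))    # ~48 dB
print(quantization_snr_db(16))   # ~96 dB

# A signal of order 1 mV against background noise of order 10 uV corresponds to
print(20.0 * math.log10(1e-3 / 10e-6))   # ~40 dB, i.e. about 7 bits of useful precision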

A somewhat related criticism is the assumed determinism of computers, while the brain is assumed either to contain true randomness or an indeterministic element (often declared to be “free will”).

The randomness version of the determinism criticism can be met by including sufficient noise in the simulation. Unless there are some important “hidden variables” in the noise of the brain the noise could be approximated using a suitably long-periodic random number generator (Tegmark 2000) or even an attached physical random number generator using quantum mechanics (Stefanov et al. 2000). Hidden variables or indeterministic free will appear to have the same status as quantum consciousness: while not ruled out by current observations there is no evidence that they occur or are necessary to explain observed phenomena.

Posted by Anders3 at 04:32 PM | Comments (23313)

Neural simulation

Neural simulation

The area of neural simulation began with the classic Hodgkin and Huxley model of the action potential (Hodgkin and Huxley 1952). At that point calculating a single action potential using a manually cranked calculator took 8 hours of hard manual work. Since then the ability to compute neural activity across large networks has developed enormously thanks to increases in computer power.

http://neuron.duke.edu/userman/2/pioneer.html

What is needed?

What information does it need for a given resolution?

It is known that the morphology of neurons affects their spiking behaviour (Ascoli 1999), which suggests that neurons cannot simply be simulated as featureless cell bodies. In some cases simplifications of morphology can be done based on electrical properties (REF: Rall etc).

One of the most important realisations of computational neuroscience in recent years is that neurons in themselves hold significant computational resources. “Dendritic computing” involves nonlinear interactions in the dendritic tree, allowing parts of neurons to act as ANNs on their own (Single and Borst 1998; London and Hausser 2005; Sidiropoulou, Pissadaki and Poirazi 2006). It appears possible that dendritic computation is a significant function that cannot be reduced to a whole-cell model but requires calculation of at least some neuron subsystems.

Brain emulation needs to take chemistry into account to a greater degree than is common in current computational models (Thagard 2002). Chemical processes inside neurons have computational power of their own and occur on a vast range of timescales (from sub-millisecond to weeks). Neuromodulators and hormones can change the causal structure of neural networks.

About 200 chemical species have been identified as involved in synaptic plasticity, forming a complex chemical network. However, much of the complexity may be redundant parallel implementations of a few core functions such as induction, pattern selectivity, expression of change, and maintenance of change (where the redundancy improves robustness and the possibility of fine-tuning) (Ajay and Bhalla 2006).

Proteomics methods are being applied to synapses, potentially identifying all present proteins (Li 2007).
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=17098931
synapse protein database

At the very low numbers of molecules found in synaptic spines chemical noise becomes a significant factor, making chemical networks that are bistable at larger volumes unstable below the femtoliter level and reducing pattern selection (Bhalla 2004b; Bhalla 2004a). It is likely that complex formation or activity constrained by membranes is essential for the reliability of synapses.

In many species there exist identifiable neurons, neurons that can be distinguished from other neurons in the same animal and identified across individuals, and sets of equivalent cells that are mutually indistinguishable (but may have different receptive fields) (Bullock 2000). While relatively common in small and simple animals, identifiable neurons appear to be a minority in larger brains. Early animal brain emulations may make use of the equivalence by using data from several individuals, but as the brains become larger it is likely that all neurons have to be treated as individual and unique.

Review of models signalling
http://www3.interscience.wiley.com/cgi-bin/fulltext/109659397/PDFSTART

model collections
http://senselab.med.yale.edu/BrainPharm/NeuronDB/ndbNeuronList.asp
http://senselab.med.yale.edu/senselab/NeuronDB/default.asp
http://www.ebi.ac.uk/compneur-srv/LGICdb/LGICdb.php
http://www.iuphar-db.org/iuphar-ic/index.html
http://www.ionchannels.org/
http://www.nimh.nih.gov/neuroinformatics/index.cfm

http://citeseer.ist.psu.edu/symanzik99visual.html
Visual Data Mining of Brain Cells
(effect of morphology on functional properties)

An issue that has been debated extensively is the nature of neural coding, and especially whether neurons mainly make use of a rate code (where the firing frequency carries the signal) or whether the exact timing of spikes matters (Rieke et al. 1996). While rate codes transmitting information have been observed, there exist fast cognitive processes (such as visual recognition) that occur on timescales shorter than the temporal averaging a rate code requires, and neural recordings have demonstrated both precise temporal correlations between neurons (Lestienne 1996) and stimulus-dependent synchronization (Gray et al. 1989). At present the evidence that spike timing is essential is incomplete, but there is no shortage of known neurophysiological phenomena that could be sensitive to it. In particular, spike timing dependent plasticity (STDP) allows synaptic connections to be strengthened or weakened depending on the exact order of spikes with a precision <5 ms (Markram et al. 1997; Bi and Poo 1998). Hence it is probably conservative to assume that brain emulation needs a time resolution smaller than 0.4–1.4 ms (Lestienne 1996) in order to fully capture spike timing.

Neural Models

The first neural model was the McCulloch-Pitts neuron, essentially binary units summing weighted inputs and firing (i.e. sending 1 rather than 0 as output) if the sum was larger than a threshold (McCulloch and Pitts 1943; Hayman 1999). This model and its successors form the basis of most artificial neural network models. They do not have any internal state except the firing level. Their link to real biology is somewhat tenuous, although as an abstraction they have been very fruitful.

More realistic models such as “integrate-and-fire” sum synaptic potentials and produce spikes.
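For concreteness, a minimal leaky integrate-and-fire sketch; the parameter values are generic textbook choices, not tied to any particular model cited here:

def simulate_lif(I_ext, dt=1e-4, tau_m=0.02, R=1e7, V_rest=-0.065,
                 V_thresh=-0.050, V_reset=-0.065):
    """Leaky integrate-and-fire: tau_m dV/dt = -(V - V_rest) + R*I.
    I_ext is a list of input currents (A), one per timestep; returns spike times (s)."""
    V, spikes = V_rest, []
    for i, I in enumerate(I_ext):
        V += dt / tau_m * (-(V - V_rest) + R * I)
        if V >= V_thresh:          # threshold crossed: emit a spike and reset
            spikes.append(i * dt)
            V = V_reset
    return spikes

# 200 ms of constant 2 nA input produces regular spiking.
print(simulate_lif([2e-9] * 2000))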

Conductance-based models are the simplest biophysical representation of neurons, representing the cell membrane as a capacitor and the different ion channels as (variable) resistances. Neurons or parts of neurons are replaced by their equivalent circuits, which are then simulated using ordinary differential equations. Beside the membrane potential they have (at least) two gating variables for each membrane current as dynamical variables.

The core assumptions of conductance-based models are that the different ion channels are independent of each other, that the gating variables are independent of each other and depend only on voltage (or other factors such as calcium), that the gating variables obey first-order kinetics, and that the region being simulated is isopotential.
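As an illustrative sketch of such a model, here are the classic Hodgkin-Huxley squid axon equations in a single isopotential compartment (standard textbook rate functions and parameters; a cortical neuron model would use a different set of channels):

import math

# Classic Hodgkin-Huxley single compartment: membrane as a capacitor in
# parallel with Na+, K+ and leak conductances (units: mV, ms, uA/cm^2, mS/cm^2).
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate_hh(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane potential trace (mV)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)   # each gating variable follows
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)   # first-order kinetics that
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)   # depend only on voltage
        trace.append(V)
    return trace

print(max(simulate_hh()))   # peaks near +40 mV: repetitive spiking at 10 uA/cm^2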

More complex ion channel models with internal states in the channels have been developed, as well as models including calcium dynamics (possibly with several forms of calcium buffering):
http://icwww.epfl.ch/~gerstner/SPNM/node11.html
Synapses can be modeled using exponential or beta functions (Destexhe 1994).
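For illustration, the simplest synaptic conductance kernels mentioned in the note above might look like this; the time constants are arbitrary illustrative values:

import math

def g_exp(t, g_max=1.0, tau=5.0):
    """Single-exponential synaptic conductance after a spike at t = 0 (ms)."""
    return g_max * math.exp(-t / tau) if t >= 0 else 0.0

def g_alpha(t, g_max=1.0, tau=5.0):
    """Alpha function: rises then decays, peaking at t = tau."""
    return g_max * (t / tau) * math.exp(1.0 - t / tau) if t >= 0 else 0.0

def g_beta(t, scale=1.0, tau_rise=0.5, tau_decay=5.0):
    """Unnormalised difference-of-exponentials ('beta') kernel with separate
    rise and decay time constants."""
    if t < 0:
        return 0.0
    return scale * (math.exp(-t / tau_decay) - math.exp(-t / tau_rise))

print([round(g_alpha(t), 3) for t in range(0, 25, 5)])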

Learning rules, synaptic adaptation
http://icwww.epfl.ch/~gerstner/SPNM/node69.html

In simple neuron models the “neuronic” (firing rate update) equations can be uncoupled from the “mnemonic” (synaptic weight update) equations, the “adiabatic learning hypothesis” (Caianiello 1961).
However, realistic models often include a complex interplay at synapses between membrane potential, calcium levels and conductances that makes this uncoupling hard.

Marom S and Abbott LF. "Modeling state-dependent inactivation of membrane currents." Biophys J. 1994 Aug;67(2):515-20.

Parameters used in conductance-based models are today derived from voltage-clamp experimental data, e.g. using fitting software:
Willms AR. "NEUROFIT: software for fitting Hodgkin-Huxley models to voltage-clamp data." J Neurosci Meth. 2002, 121:139-150.

Reducing models to simpler ones

(Izhikevich 2004) reviews both typical neural firing patterns and a number of computational models of spiking neurons.

(from (Izhikevich 2004), figure 2)

He estimates both the number of biological features different models can reproduce and how many floating point operations are needed per ms of simulation (assuming only a soma current, not taking the effects of dendrites and synapses into account):

Model                            # of biological features    FLOPS/ms
Integrate-and-fire               3                           5
Integrate-and-fire with adapt.   5                           10
Integrate-and-fire-or-burst      10                          13
Resonate-and-fire                12                          10
Quadratic integrate-and-fire     6                           7
Izhikevich (2003)                21                          13
FitzHugh-Nagumo                  11                          72
Hindmarsh-Rose                   18                          120
Morris-Lecar                     14*                         600
Wilson                           15                          180
Hodgkin-Huxley                   19*                         1200
* Only the Morris-Lecar and Hodgkin-Huxley models are “biophysically meaningful” in the sense that they attempt to actually model real biophysics; the others only aim for a correct phenomenology of spiking.

The (Izhikevich 2003) model is interesting since it demonstrates that it may be possible to improve the efficiency of calculations significantly (by two orders of magnitude) without losing too many features of the neuron activity. The model itself is a two-variable dynamical system with a small number of parameters. It was derived from the Hodgkin-Huxley equations using a bifurcation analysis methodology that keeps the geometry of phase space intact (Izhikevich 2007). While it is not directly biophysically meaningful, it, or similar reductions of full biophysical models, may provide computational shortcuts in brain emulation. Whether such reductions can be done depends on whether the details of internal neural biophysics are important for network-relevant properties such as exact spike timing. It may also be possible to apply reduction methods to sub-neural models, but the approach requires an understanding of the geometry of the phase space of the system.
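The reduced model itself is compact enough to sketch directly; the parameter set below is the commonly quoted "regular spiking" example from Izhikevich's papers, used here purely as an illustration:

def izhikevich(I_ext, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) two-variable neuron: v is the membrane potential (mV),
    u a recovery variable. I_ext is a list of inputs, one per timestep dt (ms)."""
    v, u, spikes = -65.0, -65.0 * b, []
    for i, I in enumerate(I_ext):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike cutoff: reset both variables
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes

# 1000 ms of constant input: regular spiking with the parameters above.
print(izhikevich([10.0] * 2000))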


Efficient computation of branched nerve equations.
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=6698635&dopt=Abstract
http://www.neurophys.biomedicale.univ-paris5.fr/~graham/pdf/BG-JCNS-v8_3-2000.pdf

Simulators

There exist numerous simulation systems at present. Some of the more common are GENESIS (GEneral NEural SImulation System) (Wilson et al. 1989; Bower and Beeman 1998) and Neuron (Carnevale and Hines 2006).


Key issues for neural simulators are numerical stability, extendability and parallelizability.

The numerical methods used to integrate conductance-based models need both to produce accurate approximations of the solutions of the governing equations and to run fast. This is made more problematic by the stiffness of some of the equations.

Most neural simulators have been designed to be easy to extend with new functions, often producing very complex software systems.

Neural simulators need to be able to run on parallel computers to reach high performance (see section below).

http://www.neuron.yale.edu/neuron/papers/jcns2006/parallel_nets_2006.pdf
data on performance of Neuron
http://www.neuron.yale.edu/neuron/papers/thensci/spacetime_rev1.pdf


Parallel Simulation

Networks, neurons and compartments are in general linked only to nearby entities and act simultaneously, making brain models naturally suited for parallel simulation. The main problem is finding the right granularity of the simulation (i.e. how many and which entities to put on each processing node) so that communication overhead is minimized, or finding communication methods that allow the nodes to communicate efficiently.

A PGENESIS simulation of 16 Purkinje cells with 4,500 compartments receiving input from 244,000 granule cells took 2.5 hours on a 128-processor Cray T3E (nominally 76.8 GFLOPS) to calculate 2 seconds of simulated activity (Howell et al. 2000). This implies around 0.3 MFLOPS per neuron and a slowdown factor of 4500.

The so far (2006) largest simulation of a full Hodgkin-Huxley neuron network was performed on the IBM Watson Research Blue Gene supercomputer using the simulator SPLIT (Hammarlund and Ekeberg 1998; Djurfeldt et al. 2005). It was a model of cortical minicolumns, consisting of 22 million 6-compartment neurons with 11 billion synapses, with spatial delays corresponding to a 16 cm2 cortex surface and a simulation length of one second real-time. Most of the computational load was due to the synapses, each holding 3 state variables. The overall nominal computational capacity used was 11.5 TFLOPS, giving 0.5 MFLOPS per neuron or 1045 FLOPS per synapse. Simulating one second of neural activity took 5942 s. The simulation showed linear scaling in performance with the number of processors up to 4096 but began to show some (23%) overhead for 8192 processors (Djurfeldt et al. 2006).

An even larger simulation with 1e11 neurons and 1e15 synapses was done in 2005 by Eugene M. Izhikevich on a Beowulf cluster with 27 3 GHz processors (Izhikevich 2005). This was achieved by not storing the synaptic connectivity but by generating it whenever it was needed, making this model rather ill suited for brain emulation. One second of simulation took 50 days, giving a slowdown factor of 4.2 million.
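The slowdown and per-neuron figures quoted above follow directly from the reported run times and hardware (arithmetic only, using the published numbers):

# Howell et al. 2000: 2.5 hours of wall-clock time for 2 s of simulated activity.
print(2.5 * 3600 / 2)              # slowdown factor ~4500
print(76.8e9 / (16 + 244000))      # ~0.3 MFLOPS per neuron

# Djurfeldt et al. 2005/2006: 11.5 TFLOPS nominal, 22e6 neurons, 11e9 synapses,
# 5942 s of wall-clock time per simulated second.
print(11.5e12 / 22e6)              # ~0.52 MFLOPS per neuron
print(11.5e12 / 11e9)              # ~1045 FLOPS per synapse

# Izhikevich 2005: 50 days of wall-clock time per simulated second.
print(50 * 24 * 3600)              # slowdown factor ~4.3 million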

SPLIT abstracts the underlying hardware:
Djurfeldt, M., Johansson, C., Ekeberg, Ö., Rehn, M., Lundqvist, M., and Lansner, A. (2005). Massively parallel simulation of brain-scale neuronal network models. Technical Report TRITA-NA-P0513, CSC, KTH, Stockholm.

Well implemented simulations tend to scale linearly with the number of processors, although various memory and communication bottlenecks may occur, and optimal use of caching can give even superlinear speedup for some problem sizes (Djurfeldt et al. 2005; Migliore et al. 2006). The main problem appears to be high connectivity, since inter-processor communication is a major bottleneck. Keeping communication to a minimum, for example by only sending information about when and where a spike has occurred, improves performance significantly. If brain emulation requires more information than this to flow between processing nodes, performance will be lower than in these examples.

Simulations can be time-driven or event-driven. A time-driven simulation advances one timestep at a time, while an event-driven simulation keeps a queue of future events (such as synaptic spikes arriving) and advances directly to the next one. For some neural network models such as integrate-and-fire, the dynamics between spikes can be calculated exactly, allowing the simulation to efficiently jump forward in time until the next spike occurs (Mattia and Giudice 2000). However, for highly connected networks the time between spike arrivals becomes very short, and time-driven simulations are then equally efficient. On the other hand, the timestep of a time-driven model must be short enough that the discretization of spike timing to particular timesteps does not disrupt timing patterns, or techniques for keeping sub-timestep timing information must be used in the simulation (Morrison et al. 2005).
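A minimal sketch of the event-driven style for leaky integrate-and-fire neurons, whose membrane state can be advanced analytically between spike arrivals; the toy network, weights and delays below are arbitrary:

import heapq, math

def event_driven_lif(connections, external_spikes, tau=20.0, V_th=1.0, t_max=200.0):
    """connections[i] = list of (target, weight, delay); external_spikes =
    list of (time, target, weight). The membrane decays exponentially between
    events, so neurons are only touched when a spike actually arrives."""
    n = len(connections)
    V = [0.0] * n
    last = [0.0] * n                       # time of each neuron's last update
    queue = list(external_spikes)          # (time, target, weight) delivery events
    heapq.heapify(queue)
    out = []
    while queue:
        t, j, w = heapq.heappop(queue)
        if t > t_max:
            break
        V[j] = V[j] * math.exp(-(t - last[j]) / tau) + w   # decay, then add the input
        last[j] = t
        if V[j] >= V_th:                   # neuron j fires: schedule its deliveries
            out.append((t, j))
            V[j] = 0.0
            for target, weight, delay in connections[j]:
                heapq.heappush(queue, (t + delay, target, weight))
    return out

# Toy 3-neuron chain driven by one strong external spike onto neuron 0.
conns = [[(1, 1.2, 5.0)], [(2, 1.2, 5.0)], []]
print(event_driven_lif(conns, [(10.0, 0, 1.5)]))   # spikes at t = 10, 15, 20 ms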

Computational Demands

A volume-based simulation, where the brain is divided into voxels of side r (in metres), would encompass 1.4e-3/r^3 voxels. Each voxel would contain information about which cells, compartments and other structures exist inside it, as well as a list of the dynamical variables (local electric fields, chemical concentrations) and local parameter values.

For 10 μm voxels there would be about 1.4e12 voxels in a human brain; at 0.1 μm resolution the number rises to about 1.4e18.

A compartment simulation of N neurons with C compartments each would have NC compartments, each storing a list of neighbouring compartments, dynamical variables and local parameters. Synapses can be treated as regular compartments with extra information about weight, neurotransmitters and internal chemical state.

A fine resolution compartment model of each neuron would at least have a compartment for each synapse, making C on the order of 10^3. That would imply 1e14 compartments.

Sizes of compartments in current simulations are usually set by taking the length constants of neuron membranes into account: simulating at a much finer resolution is not needed (except possibly to deal with branching). However, for spiny cells synaptic spines likely need to be compartments of their own.
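The voxel and compartment counts above follow from the brain volume (about 1.4e-3 m^3) and the assumed resolutions; a short order-of-magnitude calculation in Python:

# Order-of-magnitude storage element counts implied by the text above.
BRAIN_VOLUME = 1.4e-3                  # m^3 (~1.4 litres)

def voxel_count(side):
    """Voxels of a given side (in metres) needed to tile the brain volume."""
    return BRAIN_VOLUME / side**3

print(voxel_count(10e-6))              # 10 um voxels  -> ~1.4e12
print(voxel_count(0.1e-6))             # 0.1 um voxels -> ~1.4e18

N, C = 1e11, 1e3                       # neurons, compartments per neuron
print(N * C)                           # ~1e14 compartments in a fine model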

To add: plausible resolution, bits needed for the model, FLOPS, communication.

Posted by Anders3 at 04:33 PM | Comments (26957)

Scanning

Scanning, Non-destructive scanning, Destructive scanning


Scanning
• Hardware
o resolution
o only visual?
o scanning speed
o capacity
o reliability
• Reconstruction
o recognizing feature list
o 3-d reconstruction
o optimization

The first step in brain emulation is to acquire the necessary information from a physical brain, which we will call scanning.

Brain emulation based on compartment models of neuron activity needs to acquire both geometric data about the localisation, morphology and structure of the nervous connections and functional/chemical data about their nature, such as what ion channels, receptors and neurotransmitters are present, the presence of electrical synapses, electrical membrane properties, phosphorylation states of synapses and genetic expression states. This needs to be done at a sufficient resolution. It may be possible to infer functional properties such as whether a synapse is excitatory or inhibitory purely from geometry (e.g. a synapse from a smooth neuron with a particular morphology is likely inhibitory), but it is not clear how much information about synaptic strength can be inferred from geometry alone.

If emulation can be achieved using only functional properties of neurons then it may be enough to determine neuron type, connectivity and synaptic strengths.

There are several potential approaches to scanning. Scanning may be destructive, where the brain is destructively disassembled, or non-destructive, in which case the brain is left viable afterwards.

Scanning might also occur in the form of gradual replacement, as piece after piece of the brain is replaced by an artificial neural system that interfaces with the brain and maintains the same functional interactions as the lost pieces. Eventually only the artificial system remains, and the information stored in it can, if desired, be moved (Moravec 1988). While gradual replacement might assuage fears of loss of consciousness and identity, it appears technically very complex, as the scanning system has to not only scan a living, changing organism but also interface seamlessly with it, at least on the submicron scale, while working. The technology needed to achieve it could certainly also be used for scanning by disassembly. Gradual replacement is hence not likely to be the first form of brain emulation scanning.

Non-Destructive Scanning


Non-destructive scanning requires minimally invasive methods. The scanning needs to acquire the relevant information at the necessary 3D resolution. There are several limitations:

• The movement of biological tissue, which requires either imaging faster than the tissue moves or accurate tracking. In the cat, the arterial pulse produces 110–266 μm movements lasting 330–400 ms, and breathing produces larger (300–950 μm) movements (Britt and Rossi 1982). The stability time can be as short as 5-20 ms.
• Imaging has to occur over a distance of >150 mm (the width of an intact brain).
• The imaging must not deposit enough energy (or use dyes, tracers or contrast enhancers) to hurt the organism.

Of the possible candidates only MRI appears able to meet these three constraints even in principle. Optical imaging, even using first-arriving-light methods, would not work across such a large distance. X-ray tomography or holography at the intensity needed to image tissue would deposit harmful amounts of energy.

The resolution of MRI depends on the number of phase steps used, the gradient strength, the acquisition time and the desired signal-to-noise ratio. To record micron-scale features in a moving brain, very short acquisition times are needed, or a way of removing the movement artefacts. Each doubling of spatial resolution divides the signal-to-noise ratio by 8. Finally, there are also problems with tissue water self-diffusion, making resolutions finer than 7.7 µm impossible to achieve (Glover and Mansfield 2002).
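The factor of 8 comes from the MRI signal being roughly proportional to voxel volume; a trivial illustration of the scaling (assuming signal proportional to volume, noise constant):

# SNR relative to a reference voxel, assuming signal ~ voxel volume.
def relative_snr(side, reference_side=1.0):
    return (side / reference_side) ** 3

for side in (1.0, 0.5, 0.25):
    print(side, relative_snr(side))    # 1.0, 0.125 (1/8), 0.015625 (1/64)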

Given that brain emulation requires higher resolution, this probably rules out MRI as a non-destructive scanning method. However, if the brain is frozen, water diffusion and movement do not occur and very long acquisition times can be used. MRI might hence be a possible scanning method for frozen or fixed brains. Since it is not destructive it could also act as an adjunct to other, destructive, scanning methods.


Destructive Scanning


Destructive scanning allows greater freedom in the physical effects used and the energy levels involved, and permits fixing the brain through freezing and/or chemical fixation.

Candidates:
• MRI microscopy
• Optical microscopy
o Block-face imaging
o Knife-edge scanning
o All-optical histology
• Electron microscopy
o TEM
o SEM
• X-ray
o X-ray Fresnel/holographic diffraction
o X-ray or neutron beam tomography
• Atomic force microscopy
• Mass spectrometry
• Nanodisassembly

MRI Microscopy

Although MRI imaging may not be suitable for scanning entire brains at sufficient resolution, MRI microscopy might be suitable for scanning parts of them if water diffusion is stopped by freezing or fixation.


A combination of MRI and AFM is magnetic resonance force microscopy (MRFM), where a magnetic bead is placed on an ultrathin cantilever and moved over the sample (or vice versa). By applying RF magnetic fields, the spins of nuclei within the resonant slice beneath the bead can be flipped, producing forces on the cantilever that can be detected. This would enable identification of the molecules present near the surface. Resolutions achieved so far are 80 nm voxels in a scan volume of 0.5 µm3 (Chao et al. 2004) and single-spin detection with 25 nm resolution in one dimension (Rugar et al. 2004). Whether this can be scaled up to detecting e.g. the presence of cell membranes or particular neurotransmitters remains to be seen.


Optical Methods

Optical microscopy is limited by the need to stain tissues to make structures stand out, and by the wavelength of light. Its main benefit is that it combines well with various spectrometric methods (see below) for determining the composition of tissues.

Confocal microscopy suffers from having to scan through the entire region of interest, and image quality degrades away from the focal plane. Using inverse scattering methods, depth-independent focus can be achieved (Ralston et al. 2007).

To add: discussion of McCormick’s team’s work on automated slicing and scanning

Electron microscopy

Electron microscopy can resolve the fine details of axons and dendrites in dense neural tissue. Images can be created through transmission electron microscopy (TEM), where electrons are sent through tissue, or scanning electron microscopy (SEM) where electrons are scattered from the surface: both methods require fixing the sample by freezing and/or embedding it in polymer. However, the main challenge is to automate sectioning and acquisition of data. The three current main methods are serial section electron tomography (SSET), serial section transmission electron microscopy (SSTEM) and serial block-face scanning electron microscopy (SBFSEM) (Briggman and Denk 2006).

SSET: High-resolution 3D TEM images can be created using tilt-series-based tomography, where the preparation is tilted relative to the electron beam, enabling the recording of depth information (Frank 1992; Penczek et al. 1995). This method appears mainly suited for local scanning (such as imaging cellular organelles) and cannot penetrate very deep into the sample (around 1 µm) (Lučić, Förster and Baumeister 2005).

SSTEM: Creating ultrathin slices for TEM is another possibility. (Tsang 2005) created a three-dimensional model of the neuromuscular junction through serial TEM of 50 nm sections cut using an ultramicrotome. (White et al. 1986) used serial sections to reconstruct the C. elegans nervous system. However, sectioning is physically tricky and labour intensive.

SBFSEM: One way of reducing the problems of sectioning is to place the microtome inside the microscope chamber (Leighton 1981); for further contrast, plasma etching was used (Kuzirian and Leighton 1983). (Denk and Horstmann 2004) demonstrated that backscattering contrast could be used instead in a SEM, simplifying the technique. They produced stacks of 50-70 nm thick sections using an automated microtome in the microscope chamber, with lateral jitter less than 10 nm. The resolution and field size were limited by the commercially available system. They estimated that tracing of axons with 20 nm resolution and a S/N ratio of about 10 within a 200 μm cube would take about a day (while 10 nm x 10 nm x 50 nm voxels at S/N 100 would require a scan time on the order of a year).

http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0020329


Overall, the problem is not achieving high enough resolution but imaging a wide enough field.

Reconstructing volumes from ultrathin sections faces many practical issues. Current electron microscopes cannot handle sections wider than 1-2 mm; long series of sections are needed, but the risk of errors or damage increases with their number, and the number of specimen-holding grids becomes excessive (unless sectioning occurs inside the microscope (Kuzirian and Leighton 1983)). The current state of the art for practical reconstruction from tissue blocks is about 0.1 mm3, containing about 10^7-10^8 synapses (Fiala 2002).

Chemical Analysis

A key problem is to detect the chemical state and type of cellular components. Normally this is done by staining with dyes or quantum dots that bind to the right target, followed by readout using optical methods. Besides the need to diffuse dyes through the sample, each dye is only selective for a certain target or group of targets, necessitating multiple dyes to identify all relevant components. If the number of chemicals that have to be identified is large, this would make dyeing ineffective.

One possible approach is Raman microspectroscopy (Krafft et al. 2003; Krafft 2004), where near-infrared scattering is used to image the vibration spectrum of the chemical components (mainly macromolecules) of tissue. The resolution for near-infrared spectroscopy is about 1 μm (limited by diffraction), and confocal methods can be used for 3D imaging. Recording times are long, on the order of minutes for individual pixels. Using shorter wavelengths appears to induce tissue damage (Puppels et al. 1991), which may be of little concern for destructive scanning. Ultraviolet resonance microspectroscopy has also been used, enabling selective probing of certain macromolecules (Pajcini et al. 1997; Hanlon et al. 2000). In some cases native fluorescence triggered by UV light (laser-induced native fluorescence, LINF) can enable imaging, as in the case of serotonin (Tan et al. 1995; Parpura et al. 1998) and possibly dopamine (Mabuchi et al. 2001).

At present it is uncertain how much functionally relevant information can be determined from such spectra. If the number of neuron types is relatively low and the types are chemically distinct, it might be enough to recognize their individual profiles. Adding dyes tailored to disambiguate otherwise indistinguishable cases may also help.

Gamma-ray holography
Problems: gamma rays interact very weakly with the material unless it is heavily stained, and acceptable S/N ratios require very high energies, at which point the method is likely rather destructive.

Nanodisassembly

The most complete approach would be to pick the brain apart atom by atom or molecule by molecule, recording their positions and types for further analysis. The scenario in (Moravec 1988) can also be described as nanodisassembly (of an unfixed brain, with on-the-fly emulation) working on a slightly larger size scale. (Merkle 1994) describes a relatively detailed proposal where the brain is divided into 3.2e15 0.4 μm cubes, each of which would be disassembled atomically (with atom/molecule positions recorded) by a disassembler nanodevice (Drexler 1986) over a three-year period.
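Even without a device design, the aggregate throughput implied by Merkle's figures can be estimated; in the sketch below only the seconds-per-year constant and the example parallel device count are assumptions:

# Aggregate disassembly rate implied by 3.2e15 cubes over three years.
cubes      = 3.2e15
seconds    = 3 * 365.25 * 24 * 3600      # ~9.5e7 s in three years
rate       = cubes / seconds             # ~3.4e7 cubes processed per second overall
per_device = rate / 1e9                  # e.g. if 1e9 devices worked in parallel
print(rate, per_device)                  # ~3.4e7 total, ~0.03 cubes/s per device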

Given that no detailed proposal for a nanodisassembler has been made, it is hard to evaluate the chances of this method. It would have to act at a low temperature to prevent molecules in the sample from moving around, removing surface molecules one by one, identifying them and transmitting their position, orientation and type to second-line data aggregators. Clear challenges are the construction of tool tips that can extract arbitrary molecules, or detect molecular type for further handling with specialized tool tips, as well as the handling of macromolecules and fragile molecular structures. Atomic disassembly would avoid the complications of molecules in favour of the greater simplicity of a handful of atom types, at the price of needing to break molecular bonds and possibly dealing with the creation of free radicals.

Posted by Anders3 at 04:35 PM | Comments (63894)

Scan Interpretation

Scan Interpretation


The data from the scanning must be postprocessed and interpreted in order to become useful for brain emulation (or other research). Cell membranes must be traced, synapses identified, neuron volumes segmented, and the distribution of synapses, organelles, cell types and other anatomical details (blood vessels, glia) determined. Currently this is largely done manually: cellular membranes can be identified and hand-traced at a rate of 1-2 hours/μm3 (Fiala and Harris 2001), far too slow for even small cortical volumes.

Software needed includes:
• Geometric adjustment (aligning sections, handling shrinkage, distortions)
• Noise removal
• Data interpolation (replacing lost or corrupted scan data)
• Cell membrane tracing (segmentation, tracing in 2D and 3D)
• Synapse identification
• Identification of cell types
• Estimation of parameters for emulation
• Connectivity identification
• Databasing
(after (Fiala 2002))

Data handling is at present a bottleneck. 0.1 mm3 scanned at 400 pixels/μm resolution and 50 nm section thickness would contain about 73 terabytes of (compressed) image data. A full brain at this resolution would require about 10^9 terabytes (Fiala 2002).
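These figures are roughly consistent with a simple voxel count; the sketch below redoes the estimate, treating 1 byte per pixel and roughly 4:1 compression as assumptions for illustration:

# Rough reconstruction of the data-volume figures quoted above.
pixels_per_um  = 400
section_nm     = 50
block_mm3      = 0.1
brain_mm3      = 1.4e6

voxels_per_um3 = pixels_per_um**2 * (1000 / section_nm)   # 3.2e6 voxels per um^3
block_voxels   = block_mm3 * 1e9 * voxels_per_um3         # ~3.2e14 voxels
raw_terabytes  = block_voxels / 1e12                      # ~320 TB raw (1 byte/voxel)
compressed_tb  = raw_terabytes / 4.4                      # ~73 TB after compression
full_brain_tb  = compressed_tb * brain_mm3 / block_mm3    # ~1e9 TB for a whole brain
print(raw_terabytes, compressed_tb, full_brain_tb)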

Geometric adjustment
Various methods for automatic registration (correcting differences in alignment) of image stacks are being developed. At its simplest, registration involves finding the combination of translation, scaling and rotation that makes subsequent images match best. However, skewing and non-linear distortions can also occur, requiring more complex methods. Combining these with optimisation methods and an elastic model to correct for shape distortion has produced good results with macroscopic stacks of rat and human brains (Schmitt et al. 2007).
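As a concrete example of the simplest case, translation-only alignment of two adjacent sections can be done with phase correlation; the sketch below (Python/NumPy, synthetic test images) illustrates the idea, while real pipelines add rotation, scaling and elastic deformation:

# Translation-only registration of two sections via phase correlation (sketch).
import numpy as np

def register_translation(fixed, moving):
    """Return the (row, col) shift of `moving` relative to `fixed`."""
    F, M = np.fft.fft2(fixed), np.fft.fft2(moving)
    cross_power = np.conj(F) * M
    cross_power /= np.abs(cross_power) + 1e-12       # keep only phase information
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks beyond half the image size correspond to negative shifts (wrap-around)
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

fixed = np.zeros((128, 128)); fixed[40:60, 50:70] = 1.0   # synthetic section
moving = np.roll(fixed, (5, -3), axis=(0, 1))             # same section, shifted
print(register_translation(fixed, moving))                # -> [5, -3]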
Noise removal
Data interpolation
Lost or corrupted data must be replaced with probabilistic interpolations. This might require feedback from later stages to find the most likely interpretation or guess, constrained by what makes sense given the known data.
For large lost volumes generic neurons and connectivity might have to be generated based on models of morphology and connectivity.

Cell tracing
Automated tracing of neurons imaged using confocal microscopy has been attempted using a variety of methods. Even if the eventual scanning method is different, it seems likely that knowledge gained from these reconstruction methods will be useful.
One approach is to enhance edges and find the optimal joining of edge pixels/voxels to detect contours of objects. Another is skeletonization. For example, (Urban et al. 2006) thresholded neuron images (after image processing to remove noise and artefacts), extracting the medial axis tree. (Dima, Scholz and Obermayer 2002) employed a 3D wavelet transform to perform a multiscale validation of dendrite boundaries, in turn producing an estimate of a skeleton.
A third approach is exploratory algorithms, where the algorithm starts at a point and uses image coherency to trace the cell from there. This avoids having to process all voxels, at the price of risking the loss of parts of the neuron if the images are degraded or unclear. (Al-Kofahi et al. 2002) use directional kernels acting on the intensity data to follow cylindrical objects. (Uehara et al. 2004) calculate the probability of each voxel belonging to a cylindrical structure, and then propagate dendrite paths through it.
One weakness of these methods is that they assume cylindrical dendrite shapes and the absence of adjoining structures (such as dendritic spines). By using support-vector machines trained on real data, a more robust reconstruction can be achieved (Santamaría-Pang 2006).
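A minimal version of the threshold-and-skeletonise pipeline can be put together with standard image-processing tools; the sketch below uses scikit-image on a synthetic stack as a stand-in for real confocal data (the toy geometry and noise level are assumptions):

# Threshold a synthetic image stack and extract a medial-axis-like skeleton.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize_3d

stack = np.zeros((40, 64, 64))
stack[10:30, 30:34, 20:50] = 1.0               # a fake dendrite-like bright tube
stack += 0.1 * np.random.rand(*stack.shape)    # add imaging noise

binary = stack > threshold_otsu(stack)         # segment foreground voxels
skeleton = skeletonize_3d(binary)              # reduce to centreline voxels
print(int(skeleton.astype(bool).sum()), "skeleton voxels to trace")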

Synapse identification


One of the major unresolved issues is whether it is possible to identify the functional characteristics of synapses, in particular synaptic strength and neurotransmitter content, from their morphology.

Identification of cell types

Estimation of parameters for emulation


Connectivity identification
This step assigns synaptic connections between neurons.

Currently, statistical connectivity rules based on proximity are used: "Peters' rule", where synaptic connections are assumed wherever axons with boutons overlap with dendrites (Peters 1979; Braitenberg and Schuz 1998). This can be used to estimate the statistics of synaptic connectivity (Binzegger, Douglas and Martin 2004; Shepherd et al. 2005). But neural geometry cannot predict the strength of functional connections reliably, perhaps because synaptic plasticity changes the strength of geometrically given synapses (Shepherd et al. 2005).
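A toy version of this kind of proximity-based estimate is sketched below; the voxelised arbors and the one-voxel contact criterion are assumptions purely for illustration:

# Count potential synapses as voxels where an axonal and a dendritic arbor overlap.
import numpy as np

def potential_synapses(axon_grid, dendrite_grid):
    """Peters'-rule-style count of co-occupied voxels between two arbors."""
    return int(np.count_nonzero(axon_grid & dendrite_grid))

shape = (50, 50, 50)                      # voxelised tissue block
axon = np.zeros(shape, dtype=bool)
dendrite = np.zeros(shape, dtype=bool)
axon[25, :, 25] = True                    # an axon running along the y axis
dendrite[:, 25, 25] = True                # a dendrite running along the x axis

print(potential_synapses(axon, dendrite)) # -> 1 crossing, one potential synapse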

In electron micrographs synapses are currently recognized using the criteria that within a structure there are synaptic vesicles adjacent to a presynaptic density, a synaptic density with electron-dense material in the cleft and densities on the cytoplasmic faces in the pre- and postsynaptic membranes (Colonnier 1981; Peters and Palay 1996).

It is not possible to rely on synapses only occurring from axons onto dendrites; axo-somatic, axo-axonic, and dendro-dendritic (one-way or reciprocal) synapses have all been observed. Occasionally several synapses coincide, such as serial axo-axo-dendritic synapses and synaptic glomeruli, where an axon synapses onto two dendrites, one of which also synapses onto the other.

In general, cortical synapses tend to be either asymmetrical "type I" synapses (75-95%) or symmetrical "type II" synapses (5-25%), based on whether the postsynaptic density is prominent or thin. Type II synapses appear to be inhibitory, while type I synapses are mainly excitatory (with exceptions) (Peters and Palay 1996). This allows at least some inference of function from morphology.

The shape and type of vesicles may also provide clues about function. Small, clear vesicles appear to mainly contain small-molecule neurotransmitters, large vesicles (60 nm diameter) with dense cores appear to contain noradrenaline, dopamine or 5-HT and large vesicles (up to 100 nm) with 50-70 nm dense cores contain neuropeptides (Hokfelt et al. 2000; Salio et al. 2006). Unfortunately there does not appear to be any further distinctiveness of vesicle morphology to signal neurotransmitter type.

Gap junctions, where the pre- and postsynaptic cells are electrically linked by connexon channels through the cell membrane, can be identified by membranes remaining parallel with just 2 nm separation and a grid of connexons. They appear to be relatively rare in mammals, but occur in the retina, inferior olive and lateral vestibular nucleus (Peters and Palay 1996).

Posted by Anders3 at 04:36 PM | Comments (31229)

Other Requirements

Computer hardware requirements, Body simulation, Environment simulation and sense simulation

Computer hardware requirements

Computing
a. CPUs (single or multiple)
b. memory
c. internal bandwidth

storage of position, connectivity, states, cellular environment?
Other Requirements

Other
a. virtual reality
b. I/O
c. performance metrics
d. regulatory approval

Simulated bodies and worlds, and communication with the outside world, are not necessary per se for brain emulation, except insofar as they are needed to maintain the short-term function of the brain. For long-term function, especially of human mind emulations, embodiment and communication are likely important.

Body simulation

The body simulation translates between neural signals and the environment, as well as maintains a model of body state as it affects the brain emulation.

How detailed the body simulation needs to be depends on the goal. An "adequate" simulation produces enough information of the right kind for the emulation to function and act, while a convincing simulation is nearly or wholly indistinguishable from the "feel" of the original body.

A number of relatively simple biomechanical simulations of bodies connected to simulated nervous systems have been created to study locomotion. (Suzuki et al. 2005) simulated the C. elegans body as a multi-joint rigid link where the joints were controlled by motor neurons in a simulated motor control network. Örjan Ekeberg has simulated locomotion in the lamprey (Ekeberg and Grillner 1999), stick insects (Ekeberg, Blümel and Büschges 2004) and the hind legs of the cat (Ekeberg and Pearson 2005), where a rigid skeleton is moved by muscles modeled either as springs contracting linearly with neural signals or, in the case of the cat, by a model fitted to observed data relating neural stimulation, length and velocity to contraction force (Brown, Scott and Loeb 1996). These models also include sensory feedback from stretch receptors, enabling movements to adapt to environmental forces: locomotion involves an information loop between neural activity, motor response, body dynamics and sensory feedback (Pearson, Ekeberg and Buschges 2006).
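For concreteness, a muscle of the simple spring type used in these models can be sketched in a few lines; the constants and the damping term here are illustrative assumptions rather than values from the cited work:

# Spring-like muscle: rest length shortens linearly with neural drive, force
# follows Hooke's law plus damping (all constants are illustrative).
def muscle_force(length, velocity, activation,
                 rest_length=1.0, max_contraction=0.3, stiffness=50.0, damping=2.0):
    """Force pulling the attachment points together; activation in [0, 1]."""
    target_length = rest_length - max_contraction * activation
    return stiffness * (length - target_length) - damping * velocity

# A half-activated muscle stretched 10% beyond rest length, shortening slowly.
print(muscle_force(length=1.1, velocity=-0.05, activation=0.5))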

Today, biomechanical modelling software enables fairly detailed models of muscles, the skeleton and the joints, allowing the calculation of forces, torques and interactions with a simulated environment (Biomechanics Research Group Inc 2005). Such models tend to simplify muscles as lines and make use of pre-recorded movements or tensions to generate the kinematics.

A fairly detailed mechanical model of human walking has been constructed, with 23 degrees of freedom driven by 54 muscles. However, it was not controlled by a neural network but rather used to find an energy-optimizing gait (Anderson and Pandy 2001). Other biomechanical models are being explored for assessing musculoskeletal function in humans (Fernandez and Pandy 2006), and can be validated or individualized by use of MRI data (Arnold et al. 2000) or EMG (Lloyd and Besier 2003). It is expected that near-future models will be based on volumetric muscle and bone models acquired through MRI scanning (Blemker and Delp 2005; Blemker et al. 2007).

Environment simulation and sense simulation

The environment simulation provides a simulated physical environment for the body simulation. One can again make the distinction between an adequate environment simulation and a convincing simulation. An adequate environment produces enough input to activate the brain emulation and allow it to interact in such a way that its state and function can be evaluated. A convincing simulation is close enough to reality that the kinds of signals and interactions that occur are hard (or impossible) to distinguish from those of reality.

It seems likely that we already have the tools for making adequate environments in the form of e.g. 3D game rendering engines with physics models, or virtual environments such as Second Life. While not covering more than sight and sound, they might be enough for testing and development. For emulations of simpler brains such as C. elegans, simulations with simplified hydrodynamics (similar to (Ekeberg and Grillner 1999)) would likely suffice, possibly including simulations of chemical gradients to guide behavior.

Convincing environments are only necessary if the long-term mental state of emulated humans is at stake. While it is possible that a human could adapt to a merely adequate environment, it could very likely be experienced as confining or lacking in sensory stimulation. Note that even in a convincing environment simulation not all details have to fit physical reality perfectly. Plausible simulation is more important than accurate simulation in this domain and may actually improve the perceived realism (Barzel, Hughes and Wood 1996). In addition, humans accept surprisingly large distortions (20% length change of objects when not paying direct attention, 3% when paying attention) (Harrison, Rensink and van de Panne 2004), which allows a great deal of leeway in a convincing environment.

What quality of environment is needed to completely fool the senses? In the following we will assume that the brain emulation runs in real-time, that is one second of simulation time corresponds to one second of outside time. For slower emulations the environment model would be slowed comparably, and all computational demands divided by the scale factor.

At the core of the environment model would be a physics engine simulating the mechanical interactions between the objects in the environment and the simulated body. It would not only update object positions depending on movement and maintain a plausible physics, it would also provide collision and contact information needed for simulated touch. On top of this physics simulation and a database of object properties a series of rendering engines for different senses would produce the raw data for the senses in the body model.

Vision

Visual photorealism has been sought in computer graphics for about 30 years, and appears to be a fairly mature area at least for static images and scenes. Much effort is currently going into the area for use in computer games and movies.

(McGuigan 2006) proposes a “graphics Turing test” and estimates that for 30 Hz interactive visual updates 518.4-1036.8 TFLOPS would be enough for Monte Carlo global illumination. This might actually be an overestimate since he assumes generation of complete pictures. Generating only the signal needed for the retinal receptors (with higher resolution for the yellow spot than the periphery) could presumably reduce the demands. Similarly more efficient implementations of the illumination model (or a cheaper one) would also reduce demands significantly.

Hearing

The full acoustic field can be simulated over the frequency range of human hearing by solving the differential equations for air vibration (Garriga, Spa and Lopez 2005). While accurate, this method has a computational cost that scales with the volume simulated, up to 16 TFLOPS for a 2x2x2 m room. This can likely be reduced by the use of adaptive mesh methods, or ray- or beam-tracing of sound (Funkhouser et al. 2004).

Sound generation occurs not only from sound sources such as instruments, loudspeakers and people, but also from normal interactions between objects in the environment. By simulating surface vibrations, realistic sounds can be generated as objects collide and vibrate. A basic model with N surface nodes requires 0.5292 N GFLOPS, but this can be significantly reduced by taking perceptual shortcuts (Raghuvanshi and Lin 2006; Raghuvanshi and Lin 2007). This form of vibration generation could likely also be used to synthesize realistic vibrations for touch.

Smell and Taste

So far no work has been done on simulated smell and taste in virtual reality, mainly due to the lack of output devices. Some simulations of odorant diffusion have been done in underwater environments (Baird et al. 1996) and in the human and rat nasal cavity (Keyhani, Scherer and Mozell 1997; Zhao et al. 2006). In general, an odour simulation would involve modelling the diffusion and transport of chemicals through air flow; the relatively low temporal and spatial resolution of human olfaction would likely allow a fairly simple model. A far more involved issue is which odorant molecules to simulate: humans have 350 active olfactory receptor genes, but we can likely detect more variation due to different diffusion in the nasal cavity (Shepherd 2004).

Taste appears even simpler in principle to simulate since it only comes into play when objects are placed in the mouth and then only through a handful of receptor types. However, the taste sensation is a complex interplay between taste, smell and texture. It may be necessary to have particularly fine-grained physics models of the mouth contents in order to reproduce plausible eating experiences.

Haptics

The haptic senses of touch, proprioception and balance are crucial for performing skilled actions in real and virtual environments (Robles-De-La-Torre 2006).

Tactile sensation relates both to the forces affecting the skin (and hair) and to how they change as objects or the body move. To simulate touch stimuli, collision detection is needed to calculate the forces on the skin (and possibly its deformation), as well as the vibrations produced when it is moved over a surface or explores it with a hard object (Klatzky et al. 2003). To achieve realistic haptic rendering, updates in the kilohertz range may be necessary (Lin and Otaduy 2005).

Proprioception, the sense of how stretched muscles and tendons are (and, by inference, of limb location), is important for maintaining posture and orientation. Unlike the other senses, proprioceptive signals would be generated by the body model internally. Simulated Golgi organs, muscle spindles and pulmonary stretch receptors would then convert body states into nerve impulses.

The balance signals from the inner ear appear relatively simple to simulate, since they depend only on the fluid velocity and pressure in the semicircular canals (which can likely be assumed to be laminar and homogeneous) and on gravity effects on the utricle and saccule. Compared to the other senses, the computational demands are minuscule.

Thermoreception could presumably be simulated by giving each object in the virtual environment a temperature, activating the thermoreceptors in contact with the object. Nociception (pain) would be simulated by activating the receptors in the presence of excessive forces or temperatures; the ability to experience pain from simulated inflammatory responses may be unnecessary verisimilitude.

Conclusion

Rendering a convincing environment for all senses probably requires on the order of several hundred TFLOPS. While significant by today’s standards it represents a minuscule fraction of the computational resources needed for brain emulation, and is not necessary for meeting the success criteria of emulation.

Posted by Anders3 at 04:37 PM | Comments (72970)