BRAIN THEORIES

GLOSSARY

Bayes’ theorem A pillar of probability theory that provides a way of updating beliefs with new evidence. Named after the Reverend Thomas Bayes, Bayes’ theorem relates the probability of a belief given some evidence to the prior probabilities of the belief and the evidence, and the probability of the evidence given the belief. Because it depends on prior beliefs, Bayes’ theorem has endured a great deal of controversy, but it is now a mainstay within statistics and is increasingly influential as a metaphor or mechanism for how the brain might work.
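In standard notation, writing H for the belief (hypothesis) and E for the evidence, the theorem reads:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here P(H) is the prior probability of the belief, P(E | H) is the probability of the evidence given the belief, and P(H | E) is the updated (posterior) belief once the evidence has been taken into account.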

cerebellum The cauliflower-shaped ‘little brain’ at the bottom of the main brain. It is most often associated with controlling the precision, accuracy and fluency of movements but is now also known to play important roles in other cognitive processes. Surprisingly, the cerebellum contains more neurons than the rest of the brain.

conditioning A learning process by which an event becomes associated with an outcome leading to a change in behaviour. There are many kinds of conditioning – the best known are ‘classical’ and ‘instrumental’ (or ‘operant’). In classical conditioning, an environmental stimulus (such as a high-pitched sound) becomes associated with an outcome (such as the presentation of food) leading to a new behaviour (such as salivation and/or approach) with respect to the sound. In instrumental conditioning, an association is established between an action (such as pressing a lever) and an outcome (such as getting food) leading to an increased propensity to press the lever.

connectionism A theoretical approach that emphasizes how interconnected networks of neurons can learn via simple local rules applied between pairs of neurons. Connectionism is both a model of brain function and a method within artificial intelligence.

hypersynchrony A hypersynchronous state in the brain occurs when levels of neuronal synchrony exceed normal limits, so that large parts of the brain start switching on and off together. These ‘electrical storms’ are associated with epileptic seizures.

neurons The cellular building blocks of the brain. Neurons carry out the brain’s basic operations, taking inputs from other neurons via dendrites, and – depending on the pattern or strength of these inputs – either releasing or not releasing a nerve impulse as an output. Neurons come in different varieties but (almost) all have dendrites, a cell body (soma) and a single axon.

phrenology A now discredited theory proposing that individual variation in mental capabilities and personality characteristics can be inferred on the basis of differences in the shape of the skull.

predictive coding A popular implementation of the Bayesian Brain hypothesis, according to which the brain maintains predictive models of the external causes of sensory inputs and updates these models according to some version of Bayes’ theorem. Predictive coding has its roots in the ideas of Hermann von Helmholtz, who conceived of perception as a form of inference.

re-entry In terms of brain structure, re-entry describes a pattern of connections in which an area A connects to an area B, and B is reciprocally connected back to A. In terms of dynamics, re-entrant connectivity implies that neural signals flow in both directions between two areas. Re-entry should be distinguished from the term ‘feedback’, which is usually used to describe the processing of error signals.

synapses The junctions between neurons, linking the axon of one to a dendrite of another. Synapses ensure that neurons are physically separate from each other so that the brain is not one continuous mesh. Communication across synapses can happen either chemically via neurotransmitters, or electrically.

synchrony Applied to neuroscience, synchrony describes the correlated activity of individual neurons. Neurons in synchrony fire spikes (action potentials) at the same time, which can increase their impact on other neurons that they both project to. Neuronal synchrony has been proposed to underlie many processes related to perception and attention.

THE LOCALIZATION OF FUNCTION

the 30-second neuroscience

Like cartographers of the mind, 19th-century scientists began to map the terra incognita of the brain. Franz Gall, founder of the theory of phrenology, proposed that different parts of the brain produced bumps in the skull depending on individual variation in mental capability and personality. He was proven wrong, but the questions underlying his ideas endured: do memory, language, attention, emotion and perception depend on particular brain areas or are these cognitive functions distributed across the brain? The main way of testing this idea was ‘ablation’, the destruction of a specific area of the brain in an animal (or, in humans, studying people with specific brain lesions). While in humans this evidence seemed to support the localization of function, in animals the case was less clear. Karl Lashley trained rats to navigate around a maze and tested their behaviour after damaging specific parts of their brains. He found that the reduction in performance depended more on the amount of brain tissue damaged than on the specific location affected. This led him to defend the idea of ‘mass action in the brain’, which argued that the cerebral cortex acts ‘as a whole’ in many different types of learning. The modern consensus is that while many functions are indeed associated with particular brain areas, each function nonetheless depends on interactions in widely distributed networks involving many different areas.

3-SECOND BRAINWAVE

Cognitive functions are neither fully localized nor wholly distributed in the brain – each function depends on a complex but specific network of interacting brain regions.

3-MINUTE BRAINSTORM

The question is not where in the brain a cognitive function sits, but what underlying mechanisms and key interacting networks support it. Neuroscientists have discovered that, while there is clear functional specialization in the brain, these dedicated areas do not work alone – they are better thought of as hubs within complex interconnected networks. For example, fear in the brain depends on the amygdala. If this area is removed, a person becomes fearless, but it is the amygdala’s connections and extended network that allow us to really feel fear.

RELATED BRAINPOWER

THE BASIC ARCHITECTURE OF THE BRAIN

NEUROPSYCHOLOGY

BRAIN IMAGING

THE HUMAN CONNECTOME

3-SECOND BIOGRAPHIES

FRANZ GALL

1758–1828

Founder of phrenology

KARL LASHLEY

1890–1958

Defender of the ideas of ‘mass action’ and ‘equipotentiality’ in the brain

30-SECOND TEXT

Tristan Bekinschtein

HEBBIAN LEARNING

the 30-second neuroscience

What happens in the brain when we learn something new? How do changes in neurons and synapses lead to the formation of new memories? Back in 1949, Donald Hebb speculated that learning and memory might depend on a simple process in which ‘neurons that fire together, wire together’. Think of this as a trace, like footsteps left in snow, that deepens whenever two neurons communicate. Early on, Hebb’s theory was applied to experiments on conditioning that originated with Ivan Pavlov. For example, a particular neuron in a bee’s brain fires when the bee is given sugar, causing the bee to stick out her tongue (proboscis). If we introduce a lemon odour before giving the bee the sugar, and repeat this several times, the bee will start to stick her tongue out when she smells the lemon, even when no sugar is offered. This neuron now fires to the odour alone: Hebbian learning has strengthened the connections between this neuron and others responding to lemony smells. The power of Hebb’s idea lies in the evidence showing that learning changes the connections between two neurons at the molecular level. Now, imagine you learn a new word: ‘brain-numb’. You are creating new connections in the networks of your linguistic brain through Hebbian learning.
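The bee example can be sketched in a few lines of code. This is an illustrative toy version of a Hebbian update rule, with invented numbers, rather than a model of a real bee: a connection strengthens whenever the odour input and the sugar-driven response are active at the same time.

```python
# Minimal Hebbian-learning sketch (illustrative only): the odour-to-response
# connection strengthens whenever odour input and response firing co-occur.

learning_rate = 0.1
threshold = 1.0
w = 0.0  # strength of the odour -> proboscis-response connection

for trial in range(20):
    odour = 1.0     # lemon odour presented (pre-synaptic activity)
    response = 1.0  # sugar makes the response neuron fire (post-synaptic activity)
    w += learning_rate * odour * response  # 'fire together, wire together'

# After repeated pairings, the odour alone is enough to drive the response.
print(f"learned weight = {w:.1f}; odour alone triggers response: {w * 1.0 > threshold}")
```

In this toy version the weight grows without limit; real synapses, and most network models, add some form of decay or normalization to keep learning stable.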

3-SECOND BRAINWAVE

What happens when we learn something new? ‘Neurons that fire together, wire together.’

3-MINUTE BRAINSTORM

The space between two neurons is called the synaptic cleft. The neurotransmitters that use this little gap as a bridge may make the first neuron excite or inhibit the second. Learning works as a combination of these two types of communication. Evidence for this kind of learning has been found even in sea slugs. In this case, the molecular mechanism involves glutamate (a neurotransmitter) released by the first neuron; the glutamate crosses the gap and binds to receptors on the second neuron, which in turn makes more receptors available for the next time glutamate arrives.

RELATED BRAINPOWER

NEURAL NETWORKS

THE NEURAL CODE

THE REMEMBERING BRAIN

3-SECOND BIOGRAPHIES

IVAN PAVLOV

1849–1936

Developed the concept of conditioning, probably an underlying mechanism of learning in most of its forms

DONALD HEBB

1904–85

Proposed a mechanism for learning in neurons

30-SECOND TEXT

Tristan Bekinschtein

NEURAL NETWORKS

the 30-second neuroscience

Imagine a computer that can think like us. Now constrain its circuits to work like networks of neurons, giving rise to emergent processes that underlie perception, thoughts and actions. This is the aim of computational neuroscience, and although it remains out of reach, the perspective of neural networks has been highly influential in theories about how the brain works. In 1890, William James proposed that our thoughts are a product of interaction among neurons, but this idea was very hard to test at the time. Taking a more formal approach, in 1943 Warren McCulloch and Walter Pitts created a mathematical model of a single neuron with inputs and outputs that formed the basis of the first artificial neural networks. In the 1970s, mathematical arrays of artificial neurons were created that started to mimic biological brain mechanisms. These ‘connectionist networks’ had new learning algorithms at their core and could solve complex problems of pattern recognition. Not only did these devices help scientists understand how the brain might work, they also led – and are still leading – to new biologically inspired technologies. A key implication of this view is that information is not represented locally in the brain but is distributed across all the connections in a network. The hunt is now on to devise new neural network architectures that mimic the brain even more closely.
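The McCulloch–Pitts unit itself is simple enough to write down. The sketch below is a generic textbook illustration of such a unit: it sums its weighted inputs and fires only if the sum reaches a threshold, which is already enough to implement logical functions such as AND and OR.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: fires (returns 1) if the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With equal weights, the same unit computes AND or OR depending only on its threshold.
for x1 in (0, 1):
    for x2 in (0, 1):
        and_out = mcculloch_pitts((x1, x2), weights=(1, 1), threshold=2)
        or_out = mcculloch_pitts((x1, x2), weights=(1, 1), threshold=1)
        print(f"inputs = ({x1}, {x2})  AND -> {and_out}  OR -> {or_out}")
```

Chained together, units like this can implement the logical, arithmetic and symbolic functions mentioned in the biography below.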

3-SECOND BRAINWAVE

Artificial networks of neurons can achieve complex learning behaviour much like biological brains. But more research is needed before these networks can truly think, learn and maybe even feel.

3-MINUTE BRAINSTORM

Machine learning via neural networks has been a major achievement of artificial intelligence. Advanced forms of machine learning allow computers to identify patterns and classify data without being explicitly programmed. Recently, Geoff Hinton, a pioneer in this area, has shown that organizing multiple neural networks hierarchically (so-called ‘deep learning’) is instrumental in learning high-level concepts from unlabelled data. Something like deep learning may be happening in the brain.

RELATED BRAINPOWER

THE LOCALIZATION OF FUNCTION

HEBBIAN LEARNING

THE BAYESIAN BRAIN

3-SECOND BIOGRAPHIES

WARREN STURGIS MCCULLOCH & WALTER PITTS

1898–1969 & 1923–69

Showed mathematically that artificial neural networks can implement logical, arithmetic and symbolic functions

DAVID RUMELHART & JAMES MCCLELLAND

1942–2011 & 1948–

Created the first model with formal rules that became the basic framework for most connectionist models

30-SECOND TEXT

Tristan Bekinschtein

THE NEURAL CODE

the 30-second neuroscience

How can a group of neurons sense a change in the world, turn that into information, and pass it to other neural networks to generate perception and behaviour? Let us first be clear that the brain does not have one language but many, like a neuronal tower of Babel. Even worse, there may be different languages at different levels: neurons, ensembles (groups of neurons) and the whole brain. For example, simultaneously reading electrical activity from a cluster of neurons in a monkey’s motor cortex shows that the simple sum of the different neurons’ voices does not clearly explain the trajectory of the monkey’s arm movement. But the overall activity of the entire population of neurons does encode the trajectory, demonstrating that the integrated activity of tens of thousands of neurons is needed for a simple movement. How do distant groups of neurons co-ordinate their activity? Is the neural code a ‘rate code’, in other words, is it based on the speed at which neurons fire spikes? Or does the brain use a ‘timing code’, in which it is the precise firing pattern that matters? This is an old debate and the story seems to be more complex than ‘one or the other’. The modern neural code breakers are using new mathematical tools and cutting-edge theories to show how rate and timing codes might work together in mediating the electrical conversations of the brain.
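One textbook way of reading a movement direction from a whole population at once is the ‘population vector’: each neuron votes for its preferred direction, weighted by its firing rate. The sketch below is an illustrative toy example with invented numbers, not the actual analysis behind the monkey experiment described above, but it shows why the population succeeds where any single neuron’s firing rate is ambiguous.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population of 100 motor-cortex-like neurons, each with a preferred movement direction.
n_neurons = 100
preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred directions (radians)
true_direction = np.pi / 4                         # the arm movement actually being made

# Cosine tuning: a neuron fires faster the closer the movement is to its preferred direction.
rates = np.maximum(0, np.cos(preferred - true_direction)) + 0.1 * rng.standard_normal(n_neurons)

# Population vector: sum the preferred directions, each weighted by that neuron's firing rate.
pop_x = np.sum(rates * np.cos(preferred))
pop_y = np.sum(rates * np.sin(preferred))
decoded = np.arctan2(pop_y, pop_x)

print(f"true direction = {np.degrees(true_direction):.1f} deg, "
      f"decoded from population = {np.degrees(decoded):.1f} deg")
```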

3-SECOND BRAINWAVE

How does the brain talk to itself? To unravel the mystery of the brain’s operation, we will need to comprehend the language of neurons, individually and collectively.

3-MINUTE BRAINSTORM

One promising concept for neural coding is that of synchrony. Neurons that fire spikes together will probably have more impact on their targets than those that do not, assuming that the synchronously emitted spikes arrive at their destinations at the same time. Synchrony among large populations of neurons can be seen in the brain rhythms detectable using EEG and it has been suggested that these rhythms provide ‘windows of opportunity’ within which neurons can effectively communicate with each other.

RELATED BRAINPOWER

NEURONS & GLIAL CELLS

NEURAL NETWORKS

THE OSCILLATING BRAIN

3-SECOND BIOGRAPHIES

ALAN TURING

1912–54

Father of computer science; cracked the Enigma machine code

WILLIAM BIALEK

1960–

Formalized the problem of reading the neural code using information theory

30-SECOND TEXT

Tristan Bekinschtein

DONALD HEBB

Considering the impact he subsequently had on his chosen discipline, Donald Olding Hebb came somewhat late to psychology. His first ambition was to be a writer. Born in Nova Scotia, Canada, he began his academic life at Dalhousie University (after an indifferent school career), where he achieved a BA. His writing career did not take off, so he made the pragmatic move into education, teaching for some years in elementary and high schools. His reading of Freud and Pavlov engendered an interest in psychology, and he studied for an MA at McGill University, followed by a PhD from Harvard (under Karl Lashley), a stint at the Montreal Neurological Institute (with Wilder Penfield) and some time at Yerkes Laboratory of Primate Biology.

Hebb’s work spanned neurophysiology and psychology. His overriding interest was in the link between brain and mind – how neurons behave and arrange themselves to produce what we perceive as thoughts, feelings, memories, and emotions. He came at the conundrum from all directions. He studied sensory deprivation, brain damage, the effects of brain surgery, behaviour, experience, environment, stimulation and heredity, as well as the major theories in psychology of the time, including Gestalt and behaviourism, and the work of Freud, Skinner and Pavlov. His findings inspired him to formulate a kind of grand unified theory for neuroscience that he hoped would unite brain and mind.

In 1942, while at Yerkes, he began writing his seminal work, The Organization of Behaviour: A Neuropsychological Theory, setting out his ideas. It introduced the concept of the Hebbian synapse (broadly, the idea that ‘neurons that fire together, wire together’) and the cell assembly, the notion that Hebbian learning would lead to groups of neurons activating in particular sequences, enabling thought, perception, learning and memory. The work was published in 1949, a year after he was appointed Chair of Psychology at McGill University. Its influence and relevance were far-reaching and Hebb’s pioneering work remains the basis for many developments in robotics, computer science, artificial intelligence and engineering as well as neuroscience and developmental psychology.

Both neuroscientists and psychologists claimed Donald Hebb as one of their own. It is a tribute, maybe, to his latent writing skills that he was able to look forensically at the detail of his research and at the same time step back and show his readers the bigger picture – like all great novelists, a unifier of the general and the particular.

22 July 1904

Born in Chester, Nova Scotia, Canada

1925

Graduated with a BA from Dalhousie University

1932

MA in psychology from McGill University; his thesis entitled Conditioned and Unconditioned Reflexes and Inhibition

1933–34

Wrote Scientific Method in Psychology: A theory of epistemology based in Objective Psychology; unpublished, but a mine of ideas

1934

Worked under Karl Lashley at University of Chicago

1935

Followed Lashley to Harvard to continue his doctorate on the effects of early visual deprivation on perception in rats

1936

Awarded PhD

1937

Fellowship at the Montreal Neurological Institute under Wilder Penfield

1942

Began working at Yerkes Laboratory, studying emotional processes in the chimpanzee

1947

Appointed Professor of Psychology at McGill University, where he remained until he retired in 1972

1949

The Organization of Behaviour: A Neuropsychological Theory was published

1960

Appointed President of the American Psychological Association

1972

Retired from McGill University, but remained as Emeritus Professor of Psychology

1980

Returned to Dalhousie University as Emeritus Professor of Psychology

20 August 1985

Died in Nova Scotia

THE OSCILLATING BRAIN

the 30-second neuroscience

Just how often do you have a brainwave? If you are having a bad day you may think ‘hardly ever’, but the truth is that you are having brainwaves all the time, even while you are asleep. The electrical activity of the brain, when measured by techniques such as EEG, is characterized by strong oscillations – waves – which are thought to arise from synchronous activity in large populations of neurons. In fact, ‘alpha waves’ were about the first thing to be noticed by Hans Berger when he invented the EEG back in the 1920s. Alpha waves are relatively slow oscillations at about 10 Hz (10 cycles per second), which are observed predominantly across the back of the brain and are most pronounced during relaxed wakefulness with the eyes closed. This has led some researchers to suggest that the alpha rhythm reflects cortical ‘idling’, though this view is now being challenged. Other prominent brain oscillations include the delta (1–4 Hz), theta (4–8 Hz), beta (12–25 Hz) and gamma (25–70 Hz or higher) rhythms; however, these categories are a little arbitrary. The search is now on to discover what these different oscillations do for the brain and mind. For example, beta oscillations appear over cortical motor areas as the brain prepares for movement, and the gamma rhythm has long been argued to support the binding of perceptual features together in complex scenes.
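To make the bands concrete, here is a small sketch using synthetic data (illustrative only; the band edges, like those above, are somewhat arbitrary conventions). It builds an EEG-like signal dominated by a 10 Hz alpha rhythm and measures the relative power in each band with a Fourier transform.

```python
import numpy as np

fs = 250                       # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)   # 10 seconds of synthetic 'EEG'

# Synthetic signal: a dominant 10 Hz alpha oscillation plus broadband noise.
rng = np.random.default_rng(0)
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Power spectrum via the fast Fourier transform.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 25), "gamma": (25, 70)}
for name, (lo, hi) in bands.items():
    in_band = (freqs >= lo) & (freqs < hi)
    print(f"{name:5s} ({lo:2d}-{hi:2d} Hz): relative power = {power[in_band].sum() / power.sum():.2f}")
```

The alpha band dominates the output, just as it dominated Berger's recordings of relaxed, eyes-closed wakefulness.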

3-SECOND BRAINWAVE

Synchrony among large populations of neurons leads to characteristic brain oscillations that may underlie many perceptual, cognitive and motor functions.

3-MINUTE BRAINSTORM

Brainwaves are not always good for you. When neural oscillations get too strong, the brain can enter a state of so-called ‘hypersynchrony’, where vast swathes of neural real estate turn on and off together. This kind of electrical storm – literally a ‘brainstorm’ – is what happens during epileptic seizures. Being able to predict hypersynchronous episodes before they happen, perhaps in time to allow an intervention, is a major current challenge for clinical neuroscientists.

RELATED BRAINPOWER

BRAIN IMAGING

SLEEP & DREAMING

3-SECOND BIOGRAPHIES

HANS BERGER

1873–1941

Neurophysiologist who was the first to record EEG signals; alpha waves were originally called ‘Berger waves’

WILLIAM GREY WALTER

1910–77

Neurologist and cybernetician who was the first to measure delta waves during sleep

WOLF SINGER

1943–

Pioneered the idea that gamma oscillations may play key roles in perception

30-SECOND TEXT

Anil Seth

NEURAL DARWINISM

the 30-second neuroscience

Darwin’s theory of natural selection is one of science’s greatest accomplishments, explaining how complex life emerges from evolutionary variation and selection over very long time spans. Neural Darwinism, developed by Gerald Edelman, proposes that a similar process may occur in the brain, involving groups of neurons instead of genes or organisms. Also known as the theory of neuronal group selection (TNGS), it rests on three proposals. The first is that early brain development generates a highly diverse population of neuronal circuits. The second is that there is selection among these groups: those that are used survive and strengthen; those that aren’t wither away. Finally, the TNGS proposes the idea of ‘re-entry’ – a constant interchange of signals between widely separated neuronal populations. Francis Crick criticized the theory, pointing out that it lacked a mechanism for replication, a key property of natural selection alongside diversity and selection. However, Edelman has continued to develop his theory, providing new accounts of language and even consciousness. Although direct evidence is still lacking, the TNGS seems increasingly relevant as neuroscientists struggle to understand how very large populations of neurons behave and develop.
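The first two proposals can be caricatured in a toy simulation. The sketch below is not Edelman’s own model and uses invented numbers: it simply starts with a varied repertoire of neuronal groups, then lets the groups that experience happens to engage strengthen while the unused ones wither.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1. Developmental variation: a diverse repertoire of neuronal groups with random strengths.
n_groups = 10
strength = rng.uniform(0.5, 1.5, n_groups)

# Suppose experience happens to engage only the first three groups.
used = np.zeros(n_groups, dtype=bool)
used[:3] = True

# 2. Selection: groups that are used strengthen; those that are not slowly wither away.
for epoch in range(50):
    strength[used] *= 1.05
    strength[~used] *= 0.95

print("surviving (used) groups:  ", np.round(strength[used], 2))
print("withering (unused) groups:", np.round(strength[~used], 2))
```

Re-entry, the third proposal, would add continuous signalling between the surviving groups – something a sketch this small does not attempt to capture.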

3-SECOND BRAINWAVE

Population thinking, so fundamental for Darwin, may also be the key to understanding the brain.

3-MINUTE BRAINSTORM

Neural Darwinism draws a key distinction between ‘instruction’ and ‘selection’. Instructionist systems, like standard computers, rely on programs and algorithms, and they suppress variability and noise. Selectionist systems depend on large amounts of variation and involve the selection of specific states from very large repertoires. So the theory provides a sharp contrast to computer models of brain and mind, highlighting – as Darwin had beforehand – that variation is essential to biological function.

RELATED BRAINPOWER

THE DEVELOPING BRAIN

HEBBIAN LEARNING

NEURAL NETWORKS

3-SECOND BIOGRAPHIES

GERALD M. EDELMAN

1929–

Nobel Laureate in Physiology or Medicine for his work on selectionist principles in immunology; has been actively developing Neural Darwinism since the 1970s

JEAN-PIERRE CHANGEUX

1936–

Also a pioneer in developing selectionist theories of brain function; in other seminal work he discovered and described how nicotine acts within the brain

30-SECOND TEXT

Anil Seth

THE BAYESIAN BRAIN

the 30-second neuroscience

Imagine being a brain. You are stuck inside a bony skull, trying to figure out what’s out there in the world. All you have to go on are streams of electrical impulses from the senses, which vary depending on the structure of that world and, indirectly, on your own outputs to the body (move your eyes and sensory inputs change, too). In the 19th century, Hermann von Helmholtz realized that perception – the solution to the ‘what’s out there’ problem – must involve the brain inferring the external causes of its sensory signals. This suggests that brains perform something like ‘Bayesian inference’, a term that describes how beliefs are updated as new evidence comes in. In other words, incoming sensory data are combined with ‘prior beliefs’ to determine their most probable causes, which correspond to perceptions. At the same time, differences between predicted signals and actual sensory data – ‘prediction errors’ – are used to update the prior beliefs, ready for the next round of sensory inputs. One interpretation of this idea – predictive coding – argues that the architecture of the cortex is ideally suited for Bayesian perception. On this view, ‘bottom-up’ information flowing from the sensory areas carries prediction errors, while ‘top-down’ signals from higher brain regions convey predictions.
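A tiny numerical sketch of the update loop described above (illustrative only, with invented numbers): a Gaussian prior belief meets a noisy sensory observation, the prediction error is weighted by how reliable the senses are relative to the prior, and the result becomes the new belief for the next round.

```python
# Minimal Gaussian belief update, as a sketch of Bayesian perception.
# All numbers are invented for illustration.

prior_mean, prior_var = 0.0, 4.0      # prior belief about some feature of the world
sensory_obs, sensory_var = 2.0, 1.0   # noisy sensory evidence and its variance

prediction_error = sensory_obs - prior_mean
gain = prior_var / (prior_var + sensory_var)   # how much to trust the senses vs. the prior

posterior_mean = prior_mean + gain * prediction_error
posterior_var = (1 - gain) * prior_var

print(f"posterior belief: mean = {posterior_mean:.2f}, variance = {posterior_var:.2f}")
# The posterior (mean 1.60, variance 0.80) then serves as the prior for the next sensory sample.
```

In full predictive-coding schemes, this exchange of predictions and prediction errors is repeated at every level of the cortical hierarchy.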