The ability to read another person’s thoughts has always exerted an enormous fascination. Recently, new brain imaging technology has emerged that might make it possible to one day read a person’s thoughts directly from their brain activity. This novel approach is referred to as “brain reading” or the “decoding of mental states.” This article will first provide a general outline of the field, and will then proceed to discuss its limitations, its potential applications, and also certain ethical issues that brain reading raises.
The measurement of brain activity and brain structure has made considerable progress in recent decades. Computed tomography (CT) and magnetic resonance imaging (MRI) have vastly improved the ability to measure the structural composition of an individual’s brain non-invasively and in high detail. This provides a three-dimensional image of the human brain showing the distribution of gray and white matter, bone, and cerebrospinal fluid. Although these structural neuroimaging techniques are routinely used in neuroradiology to assess injuries of the central nervous system and to diagnose neurological diseases, they provide no information about a person’s current mental states (such as their current ideas, thoughts, intentions, and feelings). This is because they measure the structure of the brain rather than the brain activity that changes from moment to moment. In order to read out the current mental state of a person, a measurement of their current brain activity is required.
Brain activity can be measured using a number of techniques: electromagnetic brain activity signals can be measured using electroencephalography (EEG) and magnetoencephalography (MEG). These techniques map brain activity with high temporal resolution (in the millisecond range), but their spatial resolution is very low (several centimeters). Already in the 1960s researchers used EEG for brain-based spelling devices. Subjects learned to control the alpha oscillations of their EEG and were then able to transmit Morse code by sending short versus long bursts of alpha activity (Dewan 1967). Such techniques could potentially be useful in helping paralyzed people communicate their thoughts and wishes by deliberately changing their brain activity. The goal in this field of so-called non-invasive brain-computer interfaces (BCIs) is to develop techniques that allow users to control technical devices “with the power of thought.” Already today it is possible to control a prosthesis, spell a letter, or steer a wheelchair using EEG-based BCIs. Unfortunately the low spatial resolution of EEG means that it is limited to reading out simple commands, such as spelling texts or moving a computer cursor on a screen. It is difficult to read out more complex ideas (such as an intention or a memory) due to the lack of spatial resolution. The key problem is that the brain represents information in fine-grained columnar maps with a resolution of approximately 0.5mm (Tanaka 1997). These activation patterns are far too small to be resolved with EEG.
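The burst-length logic behind such early EEG spelling can be sketched in a few lines. The threshold and burst-length values below are illustrative assumptions, not parameters from Dewan’s study:

```python
# Minimal sketch of Dewan-style alpha-burst signalling: classify bursts of
# alpha power as short or long, yielding Morse dots and dashes.
# POWER_THRESHOLD and LONG_BURST are invented for illustration.

POWER_THRESHOLD = 1.0   # alpha power above this counts as a "burst" (arbitrary units)
LONG_BURST = 3          # bursts lasting >= 3 windows are read as dashes

def decode_bursts(alpha_power):
    """Turn a sequence of per-window alpha-power values into Morse symbols."""
    symbols, run = [], 0
    for p in alpha_power:
        if p > POWER_THRESHOLD:
            run += 1           # burst continues
        elif run:
            symbols.append('-' if run >= LONG_BURST else '.')
            run = 0            # burst ended: emit dot or dash
    if run:                    # handle a burst that runs to the end
        symbols.append('-' if run >= LONG_BURST else '.')
    return ''.join(symbols)

# Two short bursts followed by one long burst -> "..-"
print(decode_bursts([0.2, 1.5, 0.3, 1.4, 0.2, 1.6, 1.7, 1.8, 0.1]))
```

A real BCI would of course extract alpha power from raw EEG via band-pass filtering first; this sketch only shows the burst-to-symbol step.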
Complementary to EEG/MEG, functional magnetic resonance imaging (fMRI) allows measurement of brain activity with high spatial resolution (a few millimeters), but lower temporal resolution (a few seconds). Unlike EEG, fMRI signals are only an indirect marker of the activity of nerve cell clusters, because brain activity is estimated via its effects on the oxygen content of blood. However, fMRI is currently the only available non-invasive procedure that allows for a measurement of brain activity with high spatial resolution without having direct access to the brain through invasive surgical techniques. The resolution achievable with fMRI is just enough to extract some information from fine-grained activity patterns. For this reason, fMRI-based brain reading techniques allow reading out a person’s thoughts in much more detail than with EEG. In particular, the combination of fMRI with specialized statistical pattern recognition techniques has provided a new impetus to the field of brain reading.
Brain reading requires that every mental state (“thought”) is associated with a characteristic pattern of brain activity. Like a fingerprint, such a pattern serves as a unique and unmistakable signal of a specific thought. By learning to identify such brain activity patterns it is thus possible to infer what a person is thinking. A typical brain reading procedure starts by measuring the brain activity patterns that occur when a person has a specific thought. Then a computer is trained to recognize the specific patterns of brain activity that are associated with the different thoughts (Figure 1.1). This is done using so-called pattern-recognition algorithms that can classify brain activation patterns in a statistically optimal fashion. Similar algorithms are also used to detect fingerprints or identify faces from surveillance videos. Unlike traditional methods for analyzing brain imaging data, pattern recognition combines information from multiple brain locations and thus maximizes the information that can be read out. By combining fMRI with pattern recognition, the field of “brain reading” has made huge progress in the last few years. It has been possible to read out very detailed contents of a person’s thoughts, including detailed visual percepts and ideas, memories, and even intentions and emotions. It is possible to read out implicit and even unconscious mental states, such as unconscious percepts and decisions (for an overview see Haynes and Rees 2006; Norman et al. 2006).
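The training and classification steps can be illustrated with a minimal sketch. This assumes a simple nearest-centroid decoder and invented voxel values; real decoders use more sophisticated classifiers and thousands of voxels:

```python
# Toy illustration of decoding by pattern recognition: a nearest-centroid
# classifier is "trained" on brain activity patterns (lists of voxel values)
# labelled with the thought they accompanied, then classifies a new pattern.
# All voxel values below are invented for illustration.

def train_centroids(patterns, labels):
    """Average the training patterns for each mental state ("thought")."""
    sums, counts = {}, {}
    for pattern, label in zip(patterns, labels):
        acc = sums.setdefault(label, [0.0] * len(pattern))
        for i, v in enumerate(pattern):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, pattern):
    """Assign a new pattern to the closest learned centroid (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], pattern))

# Two voxels; "add" and "subtract" evoke different activity patterns.
train = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
labels = ["add", "add", "subtract", "subtract"]
centroids = train_centroids(train, labels)
print(classify(centroids, [0.85, 0.15]))  # -> add
```

The key point the sketch shares with real decoders is that the decision combines information across all voxels at once rather than testing each voxel separately.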
Take, for example, the read-out of subjects’ intentions from brain signals (Haynes et al. 2007). In this experiment we let subjects decide freely between two possible choices, i.e. adding or subtracting numbers. Importantly, participants made their choice covertly and so we initially did not know which choice they had made. Then, after a delay, we showed them the numbers and asked them to do the corresponding calculation. We were able to decode their intentions with 70% accuracy based only on their patterns of brain activity, even before they had seen the numbers and begun to calculate. Because of the delay between the choice and the presentation of the numbers we were able to rule out the possibility that other neural activity, such as the execution of the calculation itself or the preparation of the button presses used to indicate the solution, contributed to the prediction. In one area of the brain called the medial prefrontal cortex, we were able to read from fine-grained patterns of brain activity which intention a subject had chosen. In another experiment, we showed that such intentions could be partially predicted from brain activity even several seconds before a subject had consciously made up their mind (Soon et al. 2008).
FIG. 1.1 (Also see Plate 1). Decoding mental states from brain activity using statistical pattern recognition techniques. Left: Functional magnetic resonance imaging (fMRI) uses a measurement grid of small volumes (“voxels”) with a resolution of typically between 1–3mm. Each of these voxels measures changes of the oxygen content of small blood vessels that are an indicator of neural activity. In order to extract a maximum of information out of the fMRI signals, statistical pattern recognition algorithms or “decoders” are used. These decoders are given the brain activity patterns and the corresponding mental states and learn the mapping between the two. Then they are tested to see if they can correctly classify unknown mental states from their corresponding activity patterns. (a) This shows a hypothetical voxel that responds strongly when a person is lying and weakly when they are telling the truth. In this case, the decision whether the person is lying or not could be made based on the activity in this single voxel because the distributions are widely separated (far right). However, in real neuroimaging data, the responses in individual voxels are only slightly separated and thus individual voxels are not sufficient to tell truth from lie. (b) If the neurocognitive correlates of deception are specific, global patterns of brain activity, then pattern recognition can be used to recognize deception by training a classifier to identify the occurrence of this pattern of brain activity. The right-hand figure shows the principle of pattern recognition using the average activity in two different brain regions as an example. The average activation values of the two individual regions can be plotted on the x-axis and y-axis yielding a number of measurement points, one for each brain pattern. Red points correspond to global patterns acquired during truthful responding and blue points correspond to global patterns acquired during deception.
The decision cannot be made based on individual regions but can easily be made when taking into account the combined activation in both regions. Here the decision is made using a linear decision boundary (dashed line). A key question is, however, whether such unique signature patterns exist that are valid for different people, different situations, and different types of lies. (c) This shows a local decoding approach that can be used to reveal information stored in micro-patterns of local cortical maps (Haynes and Rees, 2006). The right-hand figure shows a case where the decision boundary is non-linear. New measurements of brain activity are classified as lie or truth depending on which side of the boundary they fall (black and white symbols). Reproduced from Detecting concealed information using brain-imaging technology, Bles and Haynes, Neurocase. Reprinted with permission of the publisher (Taylor and Francis Group http://www.informaworld.com).
These recent advances should not obscure the fact that scientific “mind reading” is still in its infancy. But is it only a matter of time until we can build a “universal mind reading machine”? Such a machine would need to be able to decode arbitrary thoughts of any person virtually in real time. It will presumably remain a pure fiction for the foreseeable future due to fundamental methodological challenges.
The brain imaging technology available today does not have a sufficient resolution to allow differentiating between subtly different brain activity patterns, or their corresponding mental states. This would require increasing the spatial resolution down to around 0.5mm which is the approximate size of cortical columns (see e.g. Tanaka 1997). A cortical column is the smallest topographic unit in the neocortex and contains cells coding for similar contents. Also there are severe limitations to real-time brain reading, such as the low temporal resolution of fMRI or the large computational power required for online decoding. Furthermore, fMRI and EEG signals are contaminated by strong noise originating from limitations of the measurement technology and from physiological background signals (such as heart beat and breathing rhythms). Taken together this severely limits the currently attainable accuracy of brain reading.
Coding of the details of mental states in the brain is substantially different from person to person. This is presumably due to the fact that the development of fine-grained cortical topographies is idiosyncratic and follows principles of self-organization. Individual experiences also play an important role in shaping each person’s brain topography, for example, the individual associations and connotations that are a vital component of most thoughts. For this reason it is currently very difficult to learn to read the fine-grained details of one person’s thoughts by training an algorithm on data from another subject.
In order to decode a person’s thoughts it would be necessary to know how they are encoded in that individual person’s brain. Currently, it is not possible to directly “read” the “language” of the brain, that is, to identify mental states based on a systematic interpretation of the corresponding brain states. For this reason the mapping from brain activity patterns to thoughts is learned for each specific subject using brute-force statistical pattern recognition techniques. This can be thought of as a dictionary that translates brain activity patterns into the corresponding thoughts (Figure 1.2, a). In order to read out a specific thought there has to be an entry in the dictionary, and each entry has to be painstakingly learned by getting a person to think the thought, after which the concurrent brain activity is measured. Obviously this is only possible for a very limited number of mental states. Initial approaches show how a few simple calibration measurements can be used to read out a large number of simple percepts and even concepts (Kay et al. 2008; Mitchell et al. 2008). If the brain activity patterns for several mental states are known, it is possible to partially recover other mental states as well, based on interpolation. For example, it might be possible to infer the pattern for “motorbike” by averaging the patterns for “car” and “bicycle” (Figure 1.2, b). Such interpolation can provide a surprisingly powerful approximation; however, it will break down where the relevant mental state violates principles of linearity and compositionality.
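The dictionary and interpolation ideas can be sketched as follows. The concept names, the three-voxel patterns, and the simple averaging rule are all illustrative assumptions, not measured data or the actual method of the cited studies:

```python
# Sketch of the "dictionary" idea and its interpolation extension: measured
# patterns for known concepts are stored in a look-up table, and a pattern
# for an unmeasured concept is approximated as the average of related entries.

dictionary = {
    "car":     [0.9, 0.2, 0.7],   # invented 3-voxel activity patterns
    "bicycle": [0.1, 0.8, 0.5],
}

def interpolate(dictionary, concepts):
    """Predict a pattern for a new concept by averaging known patterns."""
    patterns = [dictionary[c] for c in concepts]
    n = len(patterns)
    return [sum(vals) / n for vals in zip(*patterns)]

# Approximate "motorbike" as the mean of "car" and "bicycle".
motorbike = interpolate(dictionary, ["car", "bicycle"])
print(motorbike)  # -> [0.5, 0.5, 0.6]
```

As the text notes, such linear averaging is only an approximation; it fails for concepts whose neural representation is not a simple blend of the representations of related concepts.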
Currently, decoding approaches assume a static relationship between thoughts and brain activation patterns. So it remains unclear how to account for the continuous learning and the change of connotations that are likely to occur throughout the lifespan. For example, the associations that a child and an adult might have with the term “favorite movie” are likely to be quite different. Despite the large body of research on learning and plasticity, currently only little is known about how this affects the decodability of mental states.
FIG. 1.2 (Also see Plate 2). A major challenge in brain reading is to learn how to decode a possibly infinite number of arbitrary mental states despite only being able to measure the brain activity patterns corresponding to a few thoughts. a) The simplest approach is a look-up table where the brain activity pattern is listed for a number of mental states that have been measured. The problem is that it is virtually impossible to measure the patterns corresponding to all potential thoughts a person might have. b) The way out is to learn to exploit the systematic relationships between different thoughts. If the brain activity patterns for “cars” and “bicycles” are known, then decoding of a “motorcycle” might be possible based on the notion that it is a concept that is “half way” between a car and a bicycle and thus it might have a brain activity pattern that is the average between that of a car and a bicycle. It has been shown that similar basic principles can be extended to many mental states (Kay et al. 2008; Mitchell et al. 2008).
For the above-mentioned reasons, it is not likely that we can expect a “universal thought reading machine” in the near future, that is, a machine that reads out the mental states of an arbitrary person with high accuracy, online, and without requiring long calibration. However, it is important to note that very powerful commercial applications do not necessarily require such universal thought reading. For example, the identification of a lie requires telling whether a person is lying or not, which is a binary decision. A detailed reconstruction of a person’s thoughts (i.e. why they are lying, what they are thinking while they are lying) might be desirable, but is not essential for detection of deception. Importantly, it also seems to be possible to detect deception in one person by using a decoder trained on brain activation patterns from a group of other people. So it should be possible to develop a lie detector that can then be used on a large number of suspects without requiring calibration on each individual. For many similar applications it would be sufficient to classify a person’s mental states coarsely. The brain activity patterns for such coarse classification are also approximately similar from person to person. So specialized brain reading applications are likely to become available well before any universal thought reading machine in the distant future. The state of the art of two applications of brain reading is outlined in the following sections.
The classical approach to lie detection uses polygraphy, a technique that measures a number of physiological indicators of peripheral arousal in parallel, such as skin conductance, heart rate, and respiration rate. The idea is that a person who is lying is highly aroused and thus the peripheral indicators of this arousal will reveal on which question they are lying. Interestingly, polygraphy is indeed reliable when applied to naïve subjects. The problem of classical polygraphy is that it uses arousal as a physiological marker of deception, but arousal can be affected by other mental factors (such as general anxiety) or by deliberate manipulation. For example, it has been repeatedly shown that subjects can deliberately and selectively control their level of arousal in polygraph tests. Instructions on how to do this are freely available on the Internet. Therefore, manipulation of polygraphy results by trained subjects cannot be excluded and the validity of the tests remains doubtful.
An alternative to the measurement of peripheral arousal lies in brain-based lie detection (reviewed in Bles and Haynes 2008). The idea is to directly reveal the cognitive processes involved in the generation of a lie. FMRI (and possibly EEG) signals are measured while a test subject is lying in a scanner answering questions related to the crime. A similar approach is to use fMRI and EEG signals to reveal that a subject covertly recognizes crime-related material. Current research shows that EEG and fMRI can be used to detect deception accurately in artificial laboratory settings, where, for example, subjects are asked to lie about whether they have previously been exposed to specific playing cards. However, these laboratory experiments are still far from what would be required in real-world lie detection. Detection of artificial laboratory lies gives no clear indication as to whether a lie could be detected during a criminal investigation. The laboratory situations differ from the real world in a number of important parameters, such as the motivation of the subjects, the personality characteristics of the study sample, and the reward/punishment value of the anticipated consequences. So, although fMRI-based lie detection certainly represents a technical improvement and has considerable development potential, it still awaits clear validation in real-world settings under field conditions.
An important question is the degree to which polygraphy and brain-based lie detection can be manipulated by trained subjects. The brain-based approach is presumably more difficult to manipulate, because deliberately producing a specific brain activation pattern is much harder than deliberately achieving a specific level of arousal. This would suggest that brain-based lie detection is more reliable. On the other hand, brain-based lie detection requires cooperation by the subject because even the smallest movements inside the scanner make fMRI signals unusable. Thus, fMRI-based lie detection is promising, but still in development. It seems imperative to formulate clear standards of practice before commercial lie detection applications commence, given that hard scientific evidence from real-world applications is currently not available.
Another future application of brain reading technology is so-called “neuromarketing,” such as the prediction of consumer behavior from brain activity for optimization of products and advertising. In recent years this area has received tremendous interest and there have been repeated attempts to optimize marketing campaigns by adding brain-based sources of information. For neuromarketing applications it is also not necessary to await the development of a universal thought reading machine. Similar to lie detection, many powerful applications would be possible even with a simple binary decoding scheme, such as predicting whether a person is likely to purchase a product or not, or whether a product is experienced as pleasant or unpleasant.
Neuromarketing focuses mainly on reward-related brain regions, such as the nucleus accumbens or the orbitofrontal cortex, that are believed to play a key role in governing consumer choices. For example, if one product evokes a higher response in the nucleus accumbens this would be seen as an indicator of a desire (“craving”) for the product. Importantly, reward-related brain regions are anatomically easy to identify and thus are in predictable positions. This allows development of a technique on one group of subjects which can then be applied to another group of subjects. But although the link from activity in reward-related brain regions to preference is very plausible, further research is still needed to exclude other potential causes of increased brain responses. For example, responses in the nucleus accumbens are also increased by the prominence or “salience” of objects. This means that the activity in this region does not uniquely signify the valence of products. This highlights the pitfall of invalid “reverse inference” (Poldrack 2006). Just because a brain region B is always active during a specific mental state M, this doesn’t mean that the presence of activity in B implies the presence of M, simply because B could also be active during other mental processes (Figure 1.3).
In addition to the feasibility of brain reading applications, their usability is an important factor that will determine the degree to which neuroscientific technologies enter everyday life. Usability refers to how easy (or complicated) a technique is to use and how much joy (or frustration) arises when using it. Brain reading techniques still need considerable development before they are likely to enter any mass markets. One important usability factor is mobility. Only EEG and near-infrared spectroscopy are partially suitable for mobile applications. In contrast, in the foreseeable future fMRI will remain a stationary technology due to the high weight of scanners and the tight security restrictions (that are in turn due to the use of strong magnetic fields). Nevertheless, certain applications do not require mobility. For example, it is not necessary to perform lie detection in real-world situations; instead a test subject can be taken to a scanner. A further constraint on usability is that the present use of EEG and fMRI is still very cumbersome. For EEG, recording electrodes must be placed in contact with the scalp and attached with a special electrode paste. This requires a substantial set-up time of up to an hour (depending on the number of electrodes). For certain applications such as neuromarketing or lie detection such difficulties might be acceptable, but for everyday applications (such as the remote control of a TV or computer using EEG) they are certainly not. In contrast, MRI is contact-free but preparation here is tedious in other ways. It involves a number of safety procedures and several exclusion criteria need to be considered due to the use of strong magnetic fields. Subjects who suffer from claustrophobia or subjects with pacemakers, brain stimulators, or certain metals in their body (e.g. surgical screws) have to be excluded. Furthermore, the procedures are not very comfortable: they involve high noise levels and require the subject to remain still for measurement periods of up to an hour.
FIG. 1.3 Similarity between brain patterns characteristic for deception (a) and response inhibition (b). There are several individual brain regions in prefrontal and parietal cortex that are active in both cases. Thus, one has to be careful to avoid invalid “reverse inference” (Poldrack 2006) when inferring mental states from brain activity. Just because a brain region B is always active during a specific mental state M, this doesn’t mean that the presence of B implies the presence of M, simply because B could be active also during other mental processes. However, when considering the whole brain activation pattern using pattern recognition techniques the danger of a false inference is much lower. Schematically redrawn from Spence, S. A., Farrow, T. F., Herford, A. E., et al. (2001). Neuroreport, 12, 2849–53, Figure 1 and Blasi, G., Goldberg, T. E., Weickert, T., et al. (2006). European Journal of Neuroscience, 23, 1658–64, Figure 1b.
As outlined earlier, brain reading is an emerging technology that has strong limitations but which might allow for certain simple commercial applications within the coming years. But should we allow commercial technologies that read a person’s thoughts? As in many areas of biomedical research one is faced with a dilemma. On the one hand, new findings raise hope for improvement of clinical and technical applications. For example, BCIs can help identify residual mental processes in waking coma patients (Owen et al. 2006; Coleman et al. 2009), or can help paralyzed patients communicate with their environment or control artificial prostheses (Blankertz et al. 2008). Such clinical applications are likely to be uncontroversial. Other applications might, on the other hand, be viewed critically. This includes commercial applications, such as reading a product preference for marketing purposes, or measuring the attitude of job candidates towards a future employer. Several important ethical aspects are raised by brain reading, both in research and in applications.
Mental privacy It is fundamental to our self-model that our thoughts are private and cannot be read from the outside. Indeed, the belief that someone can read or control one’s thoughts is typically considered an indicator of a psychiatric condition. This means that any technical applications that can read a person’s mental states must be handled with particular sensitivity because they can be used to invade a person’s “mental privacy” (Farah 2005). It is important to discuss any potential consequences that would follow if a person were found to be deliberating a criminal action. We normally tend to consider it fully legal if a person thinks about committing a criminal act as long as they don’t put it into action. However, what if it were possible to decode that a person was truly committed to performing a criminal offense? Should the person be stopped before they commit the crime? Current research shows that it should be possible to tell whether a person is thinking about a specific intention, but it is not clear whether it is possible to tell whether they are truly committed to it.
Data security Most current neuroimaging research takes place in academic institutions that have strict data protection policies. With the progressive use of such technologies for commercial applications, it is foreseeable that large amounts of sensitive personal information will end up in the hands of private companies that could potentially extract critical personal information, even beyond the information for which a test was originally planned. For example, say a subject has consented to a neuromarketing study with a private company. With the advent of techniques that allow for the prediction of diseases from brain imaging recordings, the data obtained during the neuromarketing session could potentially be used to read out certain aspects of a person’s medical condition. The possibility of decoding such “collateral information” may not be apparent today, but might arise with further progress of techniques for decoding medical states from neuroimaging data (Kloppel et al. 2008).
Quality There are currently no guidelines that define quality standards for successful decoding of mental states. This is problematic because commercial companies are already marketing brain reading applications without a widely acknowledged scientific assessment of the validity of such techniques. There are published studies on the reliability of MRI lie detectors, but these relate to artificial laboratory situations and do not reveal how well the techniques perform in real-world scenarios. Thus, scientists need to begin defining guidelines and quality standards in this emerging field.
To summarize, modern neuroimaging techniques have made substantial progress over the last few years and have now shown that it is possible to decode a person’s mental states from their brain activity. There are still many technical and methodological challenges in this field that render it highly unlikely that a universal thought reading technology will be available in the near future. Nonetheless, the first applications are beginning to emerge, making it necessary to monitor and discuss their ethical implications.
Blankertz, B., Losch, F., Krauledat, M., Dornhege, G., Curio, G., and Müller, K.R. (2008). The Berlin Brain-Computer Interface: accurate performance from first-session in BCI-naïve subjects. IEEE Transactions on Biomedical Engineering, 55, 2452–62.
Bles, M. and Haynes, J.D. (2008). Detecting concealed information using brain-imaging technology. Neurocase, 14, 82–92.
Coleman, M.R., Davis, M.H., Rodd, J.M., et al. (2009). Towards the routine use of brain imaging to aid the clinical diagnosis of disorders of consciousness. Brain, 132, 2541–52.
Dewan, E.M. (1967). Occipital alpha rhythm, eye position and lens accommodation. Nature, 214, 975–7.
Edelman, S., Grill-Spector, K., Kushnir, T., and Malach, R. (1998). Toward direct visualization of the internal shape representation space by fMRI. Psychobiology, 26, 309–21.
Farah, M.J. (2005). Neuroethics: the practical and the philosophical. Trends in Cognitive Sciences, 9, 34–40.
Haynes, J.-D. (2008). Decoding the contents of visual consciousness from human brain signals. Trends in Cognitive Sciences, 13, 194–202.
Haynes, J.-D. and Rees, G. (2006). Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7, 523–34.
Haynes, J.-D., Sakai, K., Rees, G., Gilbert, S., Frith, C., and Passingham, R.E. (2007). Reading hidden intentions in the human brain. Current Biology, 17, 323–8.
Kay, K.N., Naselaris, T., Prenger, R.J., and Gallant, J.L. (2008). Identifying natural images from human brain activity. Nature, 452, 352–5.
Kloppel, S., Stonnington, C.M., Chu, C., et al. (2008). Automatic classification of MR scans in Alzheimer’s disease. Brain, 131, 681–9.
Mitchell, T.M., Shinkareva, S.V., Carlson, A., et al. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320, 1191–5.
Norman, K.A., Polyn, S.M., Detre, G.J., and Haxby, J.V. (2006). Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10, 424–30.
Owen, A.M., Coleman, M.R., Boly, M., Davis, M.H., Laureys, S., and Pickard, J.D. (2006). Detecting awareness in the vegetative state. Science, 313, 1402.
Poldrack, R.A. (2006). Can cognitive processes be inferred from neuroimaging data? Trends in Cognitive Sciences, 10, 59–63.
Soon, C.S., Brass, M., Heinze, H.J. and Haynes, J.-D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543–5.
Tanaka, K. (1997). Mechanisms of visual object recognition: monkey and human studies. Current Opinion in Neurobiology, 7, 523–9.