© The Author(s) 2020
L. Schlicht et al. (eds.), Mind Reading as a Cultural Practice, Palgrave Studies in Science and Popular Culture, https://doi.org/10.1007/978-3-030-39419-6_6

6. The Idea of Reading Someone’s Thoughts in Contemporary Lie Detection Techniques

Larissa Fischer
(1) RWTH Aachen University, Aachen, Germany

Introduction

When we talk about mind reading techniques, we refer to a socio-historical phenomenon that can be linked to a magic-like power, to a specific knowledge of human nature, and to the idea of a specific psychic medium. The underlying desire of these imagined techniques has always been to gain insight into the true nature of another person’s thoughts. This idea sparks great hopes, but it is also associated with great discontent because the mind is perceived as something very valuable that makes people what they are—especially for the modern subject.

The most prominent such ability, and the one that has held up best in popular culture, is telepathy, a gift that enables people to read the minds of others. In Perry Rhodan, one of the best-known science fiction series in German-speaking countries, which has been published for over 50 years, telepathy is one of the essential so-called psi-skills. The characters who have this ability use it to spy on others, but also to empathize with them. One of the very early stories describes how the mighty figure of the mouse beaver uses his power:

Gucky interfered with Bully’s mind. Then he could understand the other man’s bad mood and forgive him generously, which was not his style otherwise. ‘Well, that’s what it looks like up there. No wonder you’re already afraid, Mr. Bull,’ he said dryly. […] With this remark, however, the mouse beaver had openly admitted that he was involved in Bully’s thoughts, and that was strictly forbidden.1

In this Cold War-related and thus very technocratic science fiction universe, interfering with the thoughts of others is, following a popular topos widespread in the sci-fi world, a natural consequence of genetic mutation, but without consent it is forbidden. This allows a conclusion to be drawn about general social concerns relating to insight into or access to the world of others’ thoughts, especially during the Cold War, when secret service projects like the CIA’s MKUltra experimented with various techniques and substances to control consciousness. In this example from popular culture, thought is treated as personal property, something that becomes apparent in the face of a technology, even a fictional one, that can access the mind.

When we talk about the ideas and concepts people have about the mind and reading the mind, there are of course scientific approaches trying to grasp this phenomenon on an individual level. Concepts of Theory of Mind (ToM) and social cognition as core principles of social communication are constantly being investigated in psychology and anthropology.2

David Premack and Guy Woodruff used the term Theory of Mind for the first time in their 1978 study to describe the ability to empathize with the mental states of self and others. The term “theory” is used because these are states “that are not directly observable and one uses these states anticipatorily, to predict the behaviour of others as well as one’s own.”3 Humans are thus, to a certain extent, able to “read the minds” of others by theorizing about the minds of self and others.

Thus, detecting the deceptive behaviour of others in a “natural” way is part of these ToM skills as well. A recent study by Suzanne Stewart and colleagues shows, for example, that the capacity to detect deception correlates with both emotional intelligence (EI) and ToM, as certain people have the ability to perceive and decode the emotional states of others as well as to reason cognitively about others’ behaviour.4 Stewart et al. also argue that training in ToM skills could make it easier for certain professions to deal with potential liars.5 Coming at the question from the opposite perspective, a study by Xiao Pan Ding et al. on the ToM abilities of young children shows that ToM training produces a lasting ability to deceive even in children who have never shown deceptive behaviour before.6 ToM abilities thus appear to be central both for lying and for the ability to detect lies.

Because thoughts are an invisible phenomenon, it also seems understandable that imagining a medium or technology giving access to the mind of a person has been part of cultures of technical improvement. In fact, these ideations and expectations can be understood as future-oriented visions that materialize in texts, technology or other artefacts and help to structure their realization in advance.7 Visions of mind-reading materialized in and shaped by actual technologies have changed their form several times, starting with the brain mirror or encephaloscope imagined by the surgeon Eduard Albert in Vienna at the end of the nineteenth century.8 At that time, the possibility of taking a photograph of the human mind was discussed by those involved in parapsychological experiments.9 The beginnings of the polygraph, which later became popular as the lie detector, followed about twenty years later as a combination of individual medical instruments for measuring psychophysiological parameters. In 1924, the first measurements of human brain waves with electroencephalography (EEG) were made.10 And today, we can already look back at the implementation of modern neuroimaging technologies such as fMRI. In recent decades, knowledge about the structure and function of the human brain has grown rapidly, and the transfer of neurobiological findings to fundamental philosophical questions has also provided a variety of explanatory approaches. One of the paradigmatic questions that remains to be addressed in neurophilosophical debates is how to reveal the truthfulness of a person’s statement.

Inspired by the insightful analysis of Melissa Littlefield, this chapter is based on the thesis that persistent assumptions are associated with the various lie detection technologies. Taking the United States as an example, Littlefield emphasizes how “[…] brain-based lie detection technologies are not immune to nor have they resolved the practical and ideological conundrums that informed lie detection at the turn of the twentieth century.”11 In the following, I want to elaborate on how lie detection in Germany has always been associated with an obscure expectation of reading someone’s thoughts and how modern neuroimaging technologies such as fMRI led to a shift in the way lie detection is perceived. The imagined possibilities that come to life in the neurosciences are to some extent the result of expectations related to and raised by this discipline, like the prospect of generating new knowledge on the frontier of this scientific field. These expectations are also linked to a prospective knowledge of the neuronal characteristics of the “lying brain.” This in turn leads, I argue, to a more or less conscious awareness of the limitations of the actual knowledge available in the neuroscientific field, an awareness that also extends into the public perception of lie detection technology and shapes the desires and aspirations surrounding it.

Karin Knorr Cetina grasps the coexistence of knowledge and ignorance with the concept of “negative knowledge,”12 which gave the present analysis a fruitful impulse, since this concept is neither limited to a “strategic ignorance”13 of facts nor to their complete non-knowing,14 but addresses the very “knowledge of the limits of knowing.”15 The strength of this concept lies precisely in acknowledging that negative knowledge can potentially be turned into positive evidence, if, for example, in the research process the recognition of errors provokes new approaches or if previously rejected information suddenly becomes relevant.16 Taking into account the sociological perspective on the scientific production of knowledge, including what Knorr Cetina calls liminality or negative knowledge, I will borrow this science studies approach to examine and contrast both the jurisprudential and scientific hypotheses of lie detection.17

In this chapter, I seek to shed light on the interconnections between the concrete practice of and research on lie detection technology and the broader popular idea of getting access to someone’s thoughts and feelings. I first explore the genesis of the science of understanding lies and the development of lie detection in applied psychology research laboratories in the United States and Germany at the beginning of the twentieth century. Second, on the basis of an analysis of court decisions on the use of the polygraph in Germany and of the German legal foundations of lie detection, I will show how the German legal system has treated lie detection procedures as machine-supported access to the mind from the 1950s until today. Third, it will be part of my analysis to work out how the detection of lies has gained new relevance within neuroscientific procedures, even though the findings are hardly used in practice, and this despite the fact that some neuropsychologists still use the classical measuring parameters of the polygraph today. Thus, in the last section insights into the perspectives and aspirations of the researchers will complement those implied by judicial practice.18

On Understanding the Lie and Its Detection

What is the meaning of “lying”? First and foremost, lies are inextricably linked to moral judgement. In ancient Greece, however, lying was inseparable from the ideas of fiction, fallacy and poetic expression, all of which were covered by the term ψεῦδος (pseudos). Today, ψεῦδος is translated as “pseudo,” “lie” or “untrue,”19 but in the former sense it was not necessarily related to negative or immoral behaviour.20 It was only Plato’s critique of rhetoric and poetry that claimed to distinguish the true from the untrue, but the term nevertheless maintained an association with both falsehood and wisdom. With the birth of Christianity, the lie was then largely stylized by Augustine as the basic vice of human beings.21 Thus, lying out of self-interest became not only a culpable offense but also, according to Augustine’s teachings, something that would lead to the loss of eternal life.22 This moral condemnation continues into Kant’s Metaphysics of Morals (“Metaphysik der Sitten”) when he concludes that “by a lie a man throws away and, as it were, annihilates his dignity as a man.”23 In this Enlightenment judgement there is an additional element: the lie is not just reprehensible, but also harms the lying person as a subject responsible for itself and its actions.

The first efforts to measure lies and guilt were made by the ancient Egyptians, who, after a person’s death, calculated their guilt according to the weight of the heart.24 The central organ held responsible for dishonesty, and for thinking in general, was not the brain but the heart.25 From the Middle Ages to the early modern period, deception detection in Europe was based on the methods of the ordeal, which meant, for example, that the accused had to place his tongue on hot iron. If it burned, he was guilty and put to death, because the dryness of the tongue was interpreted as a sign of lying.26 Another physiological indicator that emerged in the Middle Ages was the measurement of the pulse. The Persian physician Ibn Sina described as early as the eleventh century that a change in the pulse could indicate that someone was concealing information. The detection of deception has thus always been a matter of the detection of physical reactions.27 This basic assumption that lying requires some physical effort, which can then be measured, is still valid today for methods and new technological developments for truth verification such as polygraph testing, thermographic or expressional facial reading, deception detection via PET or fMRI, and others.

Even if from a socio-psychological as well as a linguistic point of view lying constitutes an indispensable part of social intelligence,28 the understanding of lying as a counter-principle to truthfulness in Western societies is to this day shaped by a Christian worldview. Lying as the “conscious utterance of an untruth with the intention to deceive”29 violates our more or less implicit social dictum of truth, can even be punishable, and is the object of every interrogation practice. The classic device for lie detection is the polygraph, which has been used as a lie detector since the beginning of the twentieth century.

The first so-called lie detection tests emerged in the United States within instrument-based experiments on the cognitive and emotional states of others. This subject was a key interest in the laboratories for applied psychology in the early twentieth century.30 The idea that a mental state is observable in bodily reactions was the major focus of the laboratory work of William Moulton Marston, who is known as the inventor of the first lie detection test. Working in the environment of Harvard’s experimental psychology department established by Hugo Münsterberg, Marston pursued Münsterberg’s ideas about applied psychology and followed up on his theory “that psychological states and emotions manifest themselves in and through the body.”31

This period of the general exploration of consciousness and the human psyche coincides with the development of the polygraph machine. Marston created a basis for experiments on lie detection with his theory of “deceptive consciousness” (1917, 1920). He was particularly interested in exploring the correlation between blood pressure and deception,32 which, some years later, was taken up and technically incorporated into the “Polygraph-Machine” by the police officer John Larson.33 Today, various physiological parameters, such as depth of breathing, skin conductivity, pulse or blood pressure as indicators of an increased level of stress, together with delayed responses to certain questions, are associated with the assumption that the truth unconsciously makes itself apparent in the body even when it is consciously suppressed. This already presupposes that telling the truth is a “natural” human behaviour and that lying is a deviation, a fundamental paradigm that is still valid today for various approaches to lie detection.34 While in the United States at the beginning of the twentieth century research into the detection of lies was carried out in technically equipped laboratories, in the German-speaking countries only isolated experiments were undertaken. The Italian psychologist Vittorio Benussi (1913) experimented with psychophysiological features of lying at the University of Graz, and Carl Gustav Jung and Max Wertheimer also discussed the usefulness of such measurements for the forensic field.35 While neither the experiments of European psychologists such as Jung, coming from the field of psychoanalysis,36 nor experimental psychology, nor the lie detection techniques of criminologists such as Cesare Lombroso37 were ever taken up in a sustained way in forensic practice, the field of “applied” psychology newly founded in the United States appeared to be much more practice-oriented. The lie detector was ultimately invented not in scientific labs but in US police investigation practice, driven by John Larson and Leonarde Keeler.38 It was accompanied by media attention, which led to a discussion about the abolition of the violent enforcement of confessions, the so-called third degree in police work.39 As part of the professionalization programme of the police, several senior police officers embraced “the ‘science’ of lie detection to overcome problems with the third degree.”40

However, although claims were made about a professionalization of interrogation techniques, a curious development can be seen in the professional circles of well-known polygraphists. For one thing, Keeler was very interested in magic tricks and incorporated his findings into the use of the already established number test, which is still the pre-test for the polygraph procedure today.41 This process, also known as a stimulus test, is intended to familiarize the polygraphist with the person’s reactions, to adjust the instrument, to tune the test person into the procedure and, above all, to create confidence in the functioning of the test procedure.42 Even though this pre-test in Germany is not based on trickery, it gains its reliability through the high comparability of the responses in this test, which is not the case in the much more fragile actual test procedure afterwards. Since this difference between the procedures is not communicated to the suspect, it can be identified as a specific construction of belief in the device.

Beyond this overlap between the play with deception and polygraph testing, it must also be mentioned that the study of polygraph testing took place in a scientific context in which researchers played with the boundaries of what was deemed scientifically possible. Marston, for example, offered a scientific definition of the basic emotions with regard to his specific feminist theory of the superiority of female traits. As Geoffrey Bunn shows, Marston’s belief in the positive effects of dominance and submission was just as integral to his conviction of female authority as his belief that justice could be done by the polygraph.43 Another interesting and more recent example from the field of polygraph testing is the founder of the CIA’s Lie Detector School, Cleve Backster. In addition to developing the standard method of polygraph testing for the US authorities, Backster eventually transferred the polygraph into the field of parapsychology and telepathy, following his apparent findings from experiments with his dragon tree: a plant connected to the polygraph’s electrodes for measuring skin conductivity produced some noisy recordings when a “(plant-)murderer” was present. For Backster, the plant’s increase in electrical resistance therefore had to signal some special primary sensory abilities, as its “emotional stress” was supposedly registered when the person did no more than think of an evil act.44 From a scientific view, the “totally unscientific discontinuity of logic” of Backster’s conclusion can be seen as part of anti-intellectual tendencies in the United States of the late 1960s.45 Even though Backster’s studies lack scientific evidence, the idea that plants react to thoughts is taken up by the media even to this day.46

Until the early 1950s, the polygraph did not appear in German courts or before any other German authority. During the Weimar Republic, there had been constant criticism of violent police practice and of the partisanship of the judiciary.47 However, in the course of the political escalation leading up to 1933, every development towards a more humanistic interrogation practice was turned into its opposite. During National Socialism, developments in legal psychology came to a halt; many scientists emigrated, were murdered or imprisoned.48

The introduction of polygraph testing to German courts in the form of a published judgement took place in 1953 at the Regional Court [Landgericht] Zweibrücken.49 Back then the polygraph was seen by the court as a machine that produced inadmissible evidence; the Federal Court of Justice (BGH) stated that polygraph testing violated man’s freedom of will and was thus a violation of human rights. This harsh criticism contrasts with the perception of the polygraph in the United States as a more humane instrument, and can be seen as a rejection of instruments of power that recalled the assaults and medical crimes of the National Socialists. It shows both a criticism of torture methods and a more general scepticism about technology and its empowerment over man. Locating the device in the context of a police interrogation highlights the polygraph’s power to act and the intimidating arrangement and materiality of the testing procedure: the test subject is connected to an apparatus by means of sensors on his hands and arms and straps around his chest, and his body is completely observed by machine and expert during the interrogation. Keeping in mind this notion of the polygraph as it was understood by the German courts of that time, we now turn to court decisions on the polygraph from 1954 onwards.

Who May Read Mind and Soul? Polygraphs in German Courtrooms

In the post-war period, the question of the admissibility of polygraph expert opinions in German courts was part of the question of the democratization of the legal system. The polygraph, as an apparatus for assessing statements, was the subject of much debate over the question of what principles of human dignity apply to individuals within a judicial framework.50 In 1954, the Federal Court of Justice explained in a groundbreaking judicial decision:

The polygraph aims to obtain more and different “statements” from the accused than those he would have obtained during the usual interrogation, including those which he makes unwillingly and which he cannot make without the device. In addition to the conscious and deliberate answers to the questions, the unconscious also “answers,” without the accused being able to prevent it.51

This means that the emergence of inner human processes must be judged with “caution,”52 which is not guaranteed by the indiscriminate and inherently repressive nature of the polygraph machine. In this view, the polygraph is not able to proceed with caution because the measurable processes are declared to be deeper processes originating from the unconscious, and thus give an indication of what is going on in the mind. These phenomena are not accessible through bare human observation, and because only the machine brings them to light, such evidence is inadmissible. Ensuring the protection of the autonomous subject is a central concern in the German legal debate on human dignity in the 1950s, and finds its way into the jurisprudence of the Federal Constitutional Court a few years later with the so-called Objektformel.53 This formula states that the human being must not be degraded to an object, since its status as a subject would then be threatened. In 1954, the individual was already conceived as a subject that has to be protected from technology—and thus from the polygraph—that measures its inner life in terms of the soul or mind:

Such insight into the soul of the accused and his or her unconscious emotions violates the freedom of decision and execution of the will [Willensentschließung und -betätigung] (§ 136 a StPO) and is inadmissible in criminal proceedings.54

The attribution of the subject’s agency is less central than the emphasis on the soul that comes to light by means of the body. This is frequently claimed in court decisions up to the 1990s and is echoed in later decisions in the same terms. This notion of the soul as an object worth protecting and as an expressive object, one that is veiled by the human body, implies the existence of a distinct language of the unconscious that gives insight into the soul. In this court decision of 1954, the court also treats the apparatus as having its own agency, one that can reveal movements of the human soul and thus draw conclusions about the subject’s thoughts in a manner beyond bare human capacity. Even if the test procedure itself is a cooperation of expert, test subject and polygraph forming a socio-technical ensemble,55 for the court the autonomy and mind-controlling abilities of the apparatus are the main focus and are thus problematized. What happens here is an attribution of autonomy to the machine, an autonomy that does not exist within the individual elements that contribute to the realization of the test procedure.56 We find the same attribution of agency when the court evaluates the judge’s task of finding the truth. Thus, in contrast to the machine, the judge is permitted to do more:

with prudence, restraint and knowledge of human nature the court can take into account the conscious and unconscious expressions, which usually emerge at the trial. These rather rough sensory impressions of daily life are not equal to those which are gained by measuring unconscious and hidden bodily processes and are then used for the interpretation of the soul.57

Along the separation between the inadmissible mechanical readout and the permitted physical analysis by the judge,58 a distinction is drawn between (1) the expressions recognizable by humans and (2) those that only emerge with the aid of the machine. These attempts to differentiate between the registration of expressions by judges and by machines have repeatedly been the subject of dispute, right up to more recent court decisions. As an entity representing the court, the judge is trusted to evaluate the truthfulness of an answer by his knowledge of human nature alone, on the basis of the physical signals sent by an interviewee. Remarkably, here the judge’s natural ability to read the truthfulness of others is directly compared to the agency of a machine, without questioning the extent of either of the two abilities. This “illusion” of attributing all agency to one actor59 is part of the judicial construction of the polygraph as a mind-reading machine that goes beyond the “natural” mind-reading techniques of judges. That the polygraph is able to give insight into the soul of a person, and thus to draw conclusions about the formation of his thoughts, is not doubted. Whether the procedure is valid and scientifically tenable is not called into question here. Rather, it is the authorization for the production and interpretation of the impulses of the unconscious that is up for debate and that determines who may produce and use certain knowledge.

The polygraph is indeed the executive element in the interpretation of the inner impulses of the unconscious, but the medium is first and foremost the human body, measured by the machine. In 1954, the relationship of the human body to the soul is determined along a classical Cartesian separation between the physical and the spiritual, whereby the machine has access to the latter. It is not just the soul that is sacralized by the subject’s right to hide its innermost being; this legalistic sacralization also determines the physical condition of the accused with regard to an inner core of humanity that is the secret of every human. The definition of the subject here accords with an essential definition of the natural person with his body and mind, and we see an interesting reference, regarding the constitution of the juristic subject, to its position as a natural person. This constitution presupposes a specific subjectivation of the individual in the legal context.60 The individual here is not to be submitted to a technology as an external object, but is to be subjected to his or her own unconsciously confessing body. The attribution of the subject’s agency is less central here than the emphasis on the inner space that has to express itself through the body. Thus, as Margaret Gibson points out in her analysis of the polygraph as a truth machine, once connected to the polygraph apparatus the body becomes “a specific site of knowledge and information processing” which is monitored, imaged, translated and recorded by the machine.61 This notion of the translational power of the machine with respect to the body’s interior becomes effective in the early jurisprudence on polygraph testing and lasts until 1998. The body is thus declared a place “of secret, private and invisible thoughts,”62 which must be protected from the truth-reading machine. Moreover, not merely is it impossible for the body to lie, “the body also tells the ‘truth.’”63

The idea of the confessing body, which is already established in the court decision of 1954, attains increased significance from the 1980s onwards. In the course of a number of trials related to suspected abuse in the 1990s, the discussion about the admissibility of polygraph tests in court proceedings was revived and received attention again in jurisprudence.64 In these court decisions, the rather philosophical arguments made on the basis of the protection of the human soul and the unconscious are enriched with more and more mechanical descriptions of the polygraph procedure and the function of the psychological expert.65 Finally, in 1998 the BGH announced a revision of the 1954 court decision and dispelled the concern that the polygraph would unveil the soul:

Furthermore, a voluntary test does not allow insight into the soul of the accused. It records and registers physical side effects like a kind of “magnifying glass” and makes these unconscious utterances visible. It is not the machine that decides about truth and untruth, but the expert through his interpretation of the reactions.66

On the one hand, this statement shows that the polygraph becomes an integral part of the practice of expert assessment in court, which consists of an ensemble of machine, interviewee, expert and judge. That the psychologist is explicitly emphasized here as an expert may be connected with the fact that in the course of the 1980s, legal psychology was able to establish itself as a separate field of expertise, meaning that its protagonists finally gained more acknowledgement as experts in court.67

On the other hand, from now on the polygraph no longer unveils the secrets of the human soul; instead, more than ever before, the body is declared to be a medium of self-articulation whose codes can be captured by a decoding machine acting like a technological “magnifying glass.” This goes hand in hand with the judicial construction of a primarily biopsychological object of research that can be detected, rather than a morally charged question of honesty. Since the court decision of 1998, there has been increasing reference to the cognitive state of the subject.68 Even though one court stated in 2000 that you still “cannot really look into people’s minds,”69 henceforth insight into the human mind is thought of as access through “body data” or “biosignals,” with the prospect that it will be possible to “measur[e] brain waves.”70 This shift from the notion of the human soul talking through the body to the idea of the human mind being expressed by the body has to be seen in relation to the idea of the brain as an information processing system, which became relevant with the emergence of modern neuroscience during the late 1990s.71

Even if the polygraph remains contested after 1998, its threat to human rights has vanished. From now on, it is the scientific foundations of the procedure that are called into question, rather than its ethical parameters. This opens up new possibilities of practical application for polygraphs, but also for other methods of lie detection. By making its reactions magnifiable, countable and legible, the body becomes an object of science. Furthermore, the polygraph, declared a magnifying glass, becomes a paradigmatic model for further technological developments in the neurosciences of the near future.

Mind Reading as Science? Lie Detection in the Neuropsychological Laboratory

As mentioned at the beginning, the judicial debate refers regularly to the scientific immaturity of truth assessment procedures such as polygraph testing. At the same time, there is no active exchange between (neuro)psychological research and the legal field. However, since research on deception detection in Germany takes place almost exclusively in the laboratories of experimental neuropsychology, it is indispensable to take a look at the further development of neurotechnological access to lying. The inclusion of deception detection in the research agenda of the neurosciences is mainly due to the fact that neuroscience is the first discipline to see itself as reaching beyond the traditionally defined terrain of the natural sciences and to make the investigation of consciousness processes one of its tasks.72 Since these processes have until now been the subject of research in the humanities and social sciences, translation difficulties have arisen between the humanities and modern neuroscientific approaches to the subject, which are also apparent in deception detection research and in the knowledge communication of those working in the field. One interviewee explains this using the example of morality:

If a brain area is actually active within the framework of a controlled experiment, I usually already know what this area is usually used for. But neuropsychologists often lack the language to explain in simple terms what this network function is. […] In fact, we know that in experiments that also involve the moral evaluation of images, a certain area is active. But that is not the “moral area.” That is simply the difficulty of the verbalization of network processes.73

As has already been stated from epistemological and linguistic perspectives, neuroscience faces the problem of a missing mentalistic terminology with which to correctly describe its phenomena.74 But neuropsychologists identify both the difficulty of translating within the scientific field and a huge barrier to transferring knowledge from research into practical fields because of differences in language and approaches. Another researcher interviewed put the problem this way: “I sometimes find it difficult to communicate between the different disciplines within science […]. But I feel that when you talk to practitioners, it becomes very difficult.”75

Even if none of our interview partners was convinced that their test methods were ethically harmless and should be introduced without fail, they speak of an untapped potential in the methodology.76 Several researchers emphasized that precisely because the methods currently in use are not very advanced, scientific progress in this regard will be important; this would at least make it possible to replace older and inferior procedures like polygraph testing. At the same time, they have great doubts about the future of the subject matter and the practical use of their research results, because the topic is sometimes seen as unscientific within the scientific community and there is a lot of popular mystification about it. The following observation is a good example:

There is a certain scepticism from science. I also think that certain disciplines sometimes feel that you are dealing with a field that at first seems unscientific, and this scepticism continues, for example, in terms of financing possibilities, i.e. third-party funding, that you would not get for something like that.77

A strategy to overcome what experts in the field see as misunderstandings of the polygraph is shown by their consistent avoidance of the terms “lie detection” and “lie detector.” For example, one neuroscientist who works with the polygraph decided to actively escape the unscientific associations such phrases evoke with a policy on terminology:

I’m trying to avoid that word [lie detection], and I’m trying to get it out of people’s heads, too. Because what is associated with it in people’s minds usually comes from television or from some abusive pseudo-detection method that has no quality at all and is really more about being spectacular than being scientific.78

At the same time, however, another neuroscientist who has also experimented with the polygraph mentioned that he himself had, apart from the polygraph, a passion for magic tricks, also in connection with his use of the number test: “It may have a non-psychological origin, but I’m not sure. Because I also used to do magic tricks - you’ll laugh - in the past.”79 Even though convincing the test subject of the instrument’s efficacy was always part of the psychological experiment, the proximity of the magical to the detection of lies seems to appear with some regularity in the interests of its protagonists. At least the impression remains that, as in Marston’s work on deception detection, the subject does not detach itself from a connection to “fringe” or “junk science.”80 This, as Ken Alder points out, might have to do with the fact that the polygraph, even if used in science, was not born of science81 but developed right from the start as an application for police interrogation in the United States and became a subject of fiction. However, studies of brain imaging-based approaches to lie detection at the international level are seen as very promising, both with regard to their future use in court82 and as an intra-disciplinary argument for the implications of new methods such as brain-pattern recognition analysis.83 Current experiments on neuroscientific methods of lie detection are based on the assumption, in the tradition of psychophysiological credibility assessment, that conscious lying also means additional effort in the brain,84 since the true answer must be suppressed. In an fMRI measurement, the visualization of a change in blood flow indicates increased activity, and thus effort, in specific brain regions, which in turn are distinguished from the characteristics associated with true statements. This kind of brain imaging-based lie detection promises a scientifically valid procedure that is supposed to make its specific understanding of a “lie” visible within the imaged brain.

What you see, then, is that the false statement leads to increased activity, especially in the prefrontal and parietal cortex. That is to say, the brain is more active when you lie than when it tells the truth. And I think you can make that a rule of thumb.85

Neuroimaging lie detection is thus based on the same assumption found in polygraph measurements, namely that telling the truth provides the baseline, the “normal,” and that lying is the exception. The concise formulation in this statement draws a line between the subject that is lying and the brain that is telling the truth. Thus, we see the continuation of the idea of the “body as confessional,”86 that is, committed to truth, even if in this case that formulation is transferred onto the brain. Neuroscientists assume that the conscious hiding of information can be seen in deviant activity in certain brain areas. It is then defined and interpreted as “lying” when “there are always longer response times with lies compared with the truth, and stronger prefrontal activity with lies.”87 Another basic assumption that has continuity in the transition to cerebral methods is the notion of the remembering body as something that can be hooked up to a technological application to extract the truth. One interviewee said: “there’s something the body remembers that’s inaccessible to us. And that would need to be verified.”88 During the legal discussion about the use of polygraphs in court, it had already become apparent that the aim of the procedure was to read the truth of the body as directly as possible in order to circumvent the subjective answer of the interviewee. This goal is even more noticeable in the neuroscientific approaches, because they offer more direct access to the brain as the control centre of the body. It is striking that neuroscientists address the brain as an acting entity. Thus, they refer either to a body as a “remembering medium” or to a brain as a “storage” device that lies or talks, that “tells the truth” or that “says: I recognize.”89 The agency that is granted to the body in the psychophysiological access of the polygraph is given instead to the brain by neuroscience. Thus, the brain emerges as the representative of subjective truth and becomes the actor of the scientific ensemble.

As already mentioned, there is another approach to fMRI deception detection that works on the recognition of objects and should therefore be independent of physiological reactions during lying. Advocates of so-called multivariate pattern analysis (MVPA), or simply ‘brain decoding,’ promote this approach, which is based on complex algorithmic evaluations. They argue, for example, that their decoding methods offer the prospect of being able to convict criminals based on their brain activity while looking at images relevant to the crime, and this without specifically inducing psychophysiological stress reactions: “Instead of identifying physiological markers for deception, one requires physiological markers that indicate whether a particular piece of information is known to the subject.”90 Thus, behind the various neuroscientific imaging methods for the recognition of lies stands the hope that it will no longer be necessary to fall back on the physiological reactions of the vegetative system as indicators, but that it will instead become possible to measure lies where they arise and make them visible.91 Here it is a matter of a direct exposure of the lie via the recognizing brain, regardless of the stress or exertion level of the person, but nevertheless by way of increased activity in a certain area of the visual cortex.
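
To make the idea of "training a decoder algorithmically" more concrete, the following minimal sketch, written in Python with NumPy and scikit-learn and using entirely synthetic "voxel" data, illustrates the bare logic of such pattern classification: a linear classifier is fitted to labelled activation patterns (here, trials in which a hypothetical image is "familiar" versus "unknown" to the subject), and cross-validation estimates how reliably it recognizes the trial type of unseen patterns. All names and numbers are illustrative assumptions; this is not the procedure of any research group cited in this chapter.

```python
# Minimal, purely illustrative sketch of MVPA-style "brain decoding" on
# synthetic data (no real fMRI data is involved).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500
# Hypothetical trial labels: 1 = image "familiar" to the subject, 0 = "unknown".
labels = rng.integers(0, 2, size=n_trials)
# Synthetic "voxel patterns" with a weak, distributed difference between classes,
# mimicking the kind of signal a multivariate decoder is meant to pick up.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :50] += 0.4

# Fit a linear decoder and estimate, via 5-fold cross-validation, how well it
# classifies trials it has not seen during training.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```

In actual studies the patterns would come from preprocessed fMRI voxel time series and the labels from the experimental design, but the underlying logic of the decoding step remains this kind of supervised classification.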

The neuroscientists we spoke to who work on and with decoding methods do not show any inhibitions about the term “mind reading,” or “reading” in general, although in the literature some neuroscientists seek to avoid or differentiate the terminology of “brain reading” and “mind reading.”92 In fact, the very advanced approaches to multivariate pattern analysis, presented as a way to (re)construct the language of the brain out of fMRI data, are expressed in terms of “reading thoughts from brain activity.”93 In particular, neuropsychological studies dealing with the decoding of thoughts refer to this method as “brain reading” or “brain decoding,” not only in the public representation of research but also within the scientific community. Strictly speaking, the term refers to a technique in which a decoder is trained algorithmically so that it can recognize certain patterns in the brain scan and even reconstruct them visually.94 Moreover, the decoding method is one possible procedure for detecting whether somebody is telling the truth or not. The prospect of being able to detect concealed information in the liar’s brain is promoted by neuroscientific research groups as something that will provide a method that does not need any conscious reaction or attempt to lie on the part of the suspect. This “brain reading” approach reads the mental signals directly from the brain:

Nobody has to lie. Of course, you can… “Did you see that gun, yes or no?” Of course, you can also ask. But you don’t really know that. You just have to show the picture. And the crucial point is then, […] actually, if you show the picture, you would expect that the pure memory signal that is retrieved is independent of the fact that you ask the person a question.95

More than any other neurotechnology, the MRI scanner gives the impression of enabling direct access to the human brain. After all, the activation processes of the neuronal system can be made directly visible here, which allegedly guarantees the directness of what Lorraine Daston and Peter Galison call “mechanical objectivity.”96 Even though the fMRI image is produced via a database logic rather than mechanically,97 the statement above referring to a pure memory signal is characterized by the idea that the scanner generates a truth that is free of subjective human influence. The subject thereby stays completely passive, and the technology makes the brain “speak” for itself. Returning to the metaphor of the magnifying glass from jurisprudence, here the connection can be made with a unifying vision of the objective machine that reaches into both the courtroom and the neuroscience laboratory. One of the judges we spoke to emphasized the advantages of the polygraph in the following way, especially for modern judicial evidence production, where not just physical damage plays a role but also the will itself:

[In former times in these kind of cases] it used to be violence […]; [a] leg was gone, I had scratches, injuries, it was clear. But today I only have the will. The head is against it, and I would like to be able to measure that. I would also like to be able to prove that. And not only with statements. Because what the mouth speaks does not have to be what the brain thinks. And I want to intervene where the brain thinks. At the source.98

The notion of the brain as a source of truth, which is also evident in the polygraph procedure, is a motif that spans all the lie detection technologies discussed here. Even though the “source” that is accessed with either the polygraph or fMRI is fundamentally different, there is the common attribution of entering into an inner core. In the psychological polygraph test, the behaviour of the body reveals what the person thinks, and if he or she lies the thought remains invisible but is still surmised by putative experts. In the case of neuronal access, on the other hand, it is a matter of attributing behaviour like deception to a particular activity pattern of the brain.

But despite the different notions of behaviour, the “source” of a lie, and the scope of its interpretation, the unifying motif is the machine as a “magnifying glass” and provider of access to either the body or the brain as a quantifiable object. The basic assumption is the same: “The dream of a machine that stands outside culture, history and society, and can grasp the subjective processes of truthful or lying speech is an ideal form of objective science.”99

And in both cases the machine itself becomes a detective and a knowing authority that, in a way that cannot be entirely explained, compensates for the limitations of knowledge. The imagination continues to treat loyalty or truthfulness as something that is ultimately readable as a physical manifestation and graphically detectable by the machines.100 At this stage, when neuropsychology promises to provide direct access to a pure sign in the brain, an important step of translation is omitted: by equating the scientifically graspable with the essential process of thinking thoughts,101 two essentially different things, brain activity and a mental act, are ontologically connected. In the words of Michael Hagner, this is accompanied by an element of the uncanny:

The uncanny is to be sought neither in our daily experience - to which we have immediate access - nor in the activity of the brain itself - which we can measure - but rather in the space between them, the logic of which is hidden from us and from the measuring devices.102

Since its early days the polygraph has been understood as a “spectacular science.”103 Compared to the polygraph, the neuroimaging methods appear as scientific and materialistic methods that deliver objective results. However, it is precisely the claim to realize mind reading that presupposes a process which exceeds the limits of knowledge in terms of its production and logic. But whereas polygraphs are based on the assumption of a confessing body and a knowing machine, neuroscientific approaches to mind reading start at a higher level. The MRI scanner does not merely magnify: rather than amplifying the body’s reactions in order to read lies, it is supposed to make the thought itself appear, whereupon active lying becomes obsolete.

Conclusion

One could hear thoughts as little as one could smell pictures, feel sweetness and bitterness, taste distance and closeness.104

As has been shown in the analysis of court decisions, German jurisprudence exhibits a traditional idea of the polygraph as a truth machine. The machine is not only exaggerated in its autonomy but is also demonized to the exact extent that the judge is granted a natural ability and knowledge to assess the truthfulness of a statement; the machine does not have this implicit or intuitive ability to know what is true and what is not. In the early court decisions, the soul and the unconscious are the subject of the debate about the detection of lies, but the subject is still granted a sacral space in which even its thoughts remain its property. With the increasing determination of the human body as a calculable object of science, however, the polygraph becomes integrated into a socio-technical ensemble shared with the psychological experts and the judge. The polygraph, seen as a magnifying glass, is now in the hands of the human expert. Even if it is still viewed critically, the possibility of its further technological transformation is opened up.

Included in the neuroscience research programme, lie detection has surfed on the wave of innovation in this new discipline. The concept of the lie and its characteristics in the body have remained the same: the lie is still seen to require an extra effort, but the confessing body is replaced by the confessing brain. As Nikolas Rose concisely notes: “while psychology’s proxies for mind reading […] were so often criticized and even parodied […] the proxies used by the brain imagers […] largely slipped unnoticed into the background. An objective and materialist technology for ‘reading the mind’ now seemed to be possible.”105 Even if neuroscientists are strategically trying to detach deception detection research from its connection to the spectacle of the lie detection machine, the magical is still present and profits from the limits of knowledge about the actual procedure of the machine. Instead of the vision of an autonomous lie detector, the MRI scanner becomes a mind reading machine reaching a higher level of neurological obscurity. Due to a lack of “mentalistic terminology”106 that would allow an adequate scientific description of phenomena formerly shaped by philosophy, neuroscience seems to be limited to a reductionist usage of metaphors from the semantic field of mind reading. Furthermore, because the notion of a mental act, such as lying or recognizing, is ontologically linked with that of the brain’s activation pattern, the linkage to the magical and the uncanny is difficult to recognize. In the court decisions, the soul was exposed through the body, while in the neuroscientific laboratory, the brain steps out as a thing that speaks and confesses. This reductionism and the simultaneous concealment of the technological processes that lead to a result about truthfulness seem to nurture the belief in a mind reading machine.

In relation to lie detection procedures, both jurisprudence and neuroscientific research are united by a common belief that is only possible through the acceptance of limited knowledge: in principle, although not yet completely, it is possible to read the human mind. The technologies for lie detection are not seen as mere scientific instruments. In particular, the continuity of the idea of mechanical objectivity leads to a mystifying vision of a psychic machine that can also be suspected of controlling thoughts. It is thus connected equally with feelings of fascination and of the uncanny. However, as has been shown, the vision of reading others’ minds is not per se a technological one, as one might assume with respect to modern neuroscience. Rather, this vision is based on an attribution of natural human sensory skills, such as the judge’s capacity to evaluate the truth. Even though the polygraph has already been declared a magnifying glass, this ability to read minds finds an actual technological equivalent in the MRI scanner, which can legitimize its access to body and brain through scientific objectivity.

As mentioned at the beginning, mind reading, understood as a technique that gives access to another’s thoughts, is imagined as a particular ability of either a machine or a gifted person. There are concepts in social psychology that can be understood as mind reading processes to a certain extent. ToM approaches describe the ability to access others’ feelings and thoughts by imitating or empathizing with others. In these approaches, however, a natural empathy with one’s counterpart is just as inherent as the natural limits of this human ability and knowledge. Mind reading in terms of an extra-natural sense, like the telepathy imagined in Perry Rhodan, remains an extremely attractive topos but impedes an adequate description of mental processes.

Notes
  1. 1.

    Kurt Brand, Perry Rhodan 137: Sturm auf die Galaxis, 137th ed. (Perry Rhodan, 1975), 48f., author’s translation.

     
  2. 2.

    Theories about these abilities today are divided into two main approaches. On the one hand, there is the theory-theory, where Theory of Mind is the cognitive ability to predict and explain behaviour in a manner similar to a scientific theory. The simulation perspective, on the other hand, represents a more biological perspective emphasizing the socializing processes of empathy and the individual’s mirroring of others’ behaviour, convictions and feelings. Several studies have used neuroimaging technologies in attempts to resolve these opposite approaches and show that both cognitive and affective processes are involved in ToM abilities; see, for example, Carruthers 1996; Churchland 1998; A. I. Goldman, “In Defense of the Simulation Theory,” Mind & Language 7, no. 1–2 (1992): 104–119; Suzanne L. K. Stewart, Clea Wright, and Catherine Atherton, “Deception Detection and Truth Detection Are Dependent on Different Cognitive and Emotional Traits: An Investigation of Emotional Intelligence, Theory of Mind, and Attention,” Personality and Social Psychology Bulletin 45, no. 5 (2019): 796.

     
  3. 3.

    David Premack and Guy Woodruff, “Does the Chimpanzee Have a Theory of Mind?” Behavioral and Brain Sciences 1, no. 4 (1978): 515–526, p. 525.

     
  4. 4.

    Xiao Pan Ding et al., “Theory of Mind Training Causes Honest Young Children to Lie,” Psychological Science 26, no. 11 (2015): 1812–1821.

     
  5. 5.

    Stewart et al., “Deception Detection,” 805.

     
  6. 6.

    Ding et al., “Theory of Mind,” 1820.

     
  7. 7.

    For the notion of vision and expectation, see Mads Borup et al., “The Sociology of Expectations in Science and Technology,” Technology Analysis & Strategic Management 18, no. 3/4 (2006): 286; Nik Brown, Brian Rappert, and Andrew Webster, Contested Futures: A Sociology of Prospective Techno-Science (Aldershot, UK and Burlington, VT: Ashgate Publishing Limited, 2000).

     
  8. 8.

    See Michael Hagner, “Mind Reading, Brain Mirror, Neuroimaging: Insight into the Brain or the Mind?” in Psychology’s Territories: Historical and Contemporary Perspectives from Different Disciplines, eds. Mitchell Ash and Thomas Sturm (Psychology Press, 2007), 291.

     
  9. 9.

    Michael Hagner, Der Geist bei der Arbeit. Historische Untersuchungen zur Hirnforschung (Göttingen: Wallstein, 2007), 233.

     
  10. 10.

    See Cornelius Borck, Hirnströme. Eine Kulturgeschichte der Elektroenzephalographie (Göttingen: Wallstein Verlag, 2005), 197.

     
  11. 11.

    Melissa Littlefield, The Lying Brain. Lie Detection in Science and Science Fiction (Ann Arbor: University of Michigan Press, 2011), 142.

     
  12. 12.

    Karin Knorr Cetina, Epistemic Cultures: How the Sciences Make Knowledge (Cambridge, MA: Harvard University Press, 1999), 63ff.

     
  13. 13.

    For further concepts of the non-knowledge in terms of strategic ignorance, there are various approaches within Ignorance Studies, see, e.g., Linsey McGoey, “Strategic Unknowns: Towards a Sociology of Ignorance,” Economy and Society 41, no. 1 (2012): 1–16.

     
  14. 14.

    Ann Kerwin calls this complete lack of knowledge the “unknown unknowns” which means that one does not know about the “unknown,” see Ann Kerwin, “None Too Solid: Medical Ignorance,” Knowledge 15, no. 2 (1993): 179.

     
  15. 15.

    Korr Cetina, Epistemic Cultures, 64.

     
  16. 16.

    Cf. Ibid., 63.

     
  17. 17.

    By “borrowing” I mean not just the use of the concept out of the scientific into legal context. It should also be noted that in her analysis of the epistemic culture in high-energy physics, Karin Knorr Cetina defines her concept of “negative knowledge” not as a deficiency or non-knowledge, but more as a requirement. In my analysis, however, the concept of limited knowledge is alienated to the extent that it also problematizes the actual limitation of knowledge production in the case of lie detection technologies.

     
  18. 18.

    The reason for the historically wide range and heterogeneity of the material lies in the fact that the field of lie detection in Germany has a marginal status; the legal principles have not been fundamentally renewed since 1954 and are therefore still referred to in jurisprudence today. Since the field is a common subject area but is strictly separated by discipline, it was of particular value for the present analysis to contrast the scientific field with the legal field.

     
  19.

    This is according to Google Translate: https://translate.google.com. For a comprehensive elaboration of the interconnection of fiction and lying and its transformation through its translation into the Latin “mendacium,” see Martin Hose, “Fiktionalität und Lüge,” Poetica 28 (1996): 260ff.

     
  20.

    See Dietzsch in his “small cultural history of the lie”: Steffen Dietzsch, Kleine Kulturgeschichte der Lüge (Leipzig: Reclam, 1998), 24–26.

     
  21.

    See ibid., 32.

     
  22.

    Ibid., 35.

     
  23.

    See Immanuel Kant, The Metaphysics of Morals, ed. Mary J. Gregor (Original work published 1797; Cambridge University Press, 1991), 225–226.

     
  24.

    Jan Assmann, “Das Herz auf der Waage. Schuld und Sünde im Alten Ägypten,” in Schuld, eds. Tilo Schabert and Detlev Clemens (München: Verlag Wilhelm Fink, 2002), 123.

     
  25.

    See Ahmed A. Karim et al., “Zur Neurobiologie des Lügens,” in Neurobiologie forensisch-relevanter Störungen: Grundlagen, Störungsbilder, Perspektiven, ed. Jürgen Müller (Kohlhammer Verlag, 2009), 139–140.

     
  26.

    Paul V. Trovillo, “A History of Lie Detection,” Journal of Criminal Law and Criminology 29, no. 6 (1939): 850–851.

     
  27.

    Karim et al., “Zur Neurobiologie des Lügens,” 140.

     
  28.

    See, among others, Victoria Talwar and Kang Lee, “Social and Cognitive Correlates of Children’s Lying Behavior,” Child Development 79, no. 4 (2008): 866–881.

     
  29.

    See Hans Rott, “Der Wert der Wahrheit,” in Kulturen der Lüge, ed. Mathias Mayer (Weimar: Böhlau Verlag, 2003), 9; author’s translation.

     
  30.

    See Ken Alder, “America’s Two Gadgets: Of Bombs and Polygraphs,” Isis 98, no. 1 (2007): 6.

     
  31.

    Littlefield, The Lying Brain, 52.

     
  32.

    William M. Marston, “Systolic Blood Pressure Symptoms of Deception,” Journal of Experimental Psychology 2, no. 2 (1917): 117–163.

     
  33.

    John Larson, “Modification of the Marston Deception Test,” Journal of Criminal Law and Criminology 12, no. 3 (1922): 390. For further explanations see also Littlefield, The Lying Brain, 36f.

     
  34.

    For a comprehensive historical overview of lie detection, see the classic work of Paul V. Trovillo, “A History of Lie Detection,” Journal of Criminal Law and Criminology (1931–1951) 29, no. 6 (1939): 848–881.

     
  35.

    See Stephan Schleim, Gedankenlesen: Pionierarbeit der Hirnforschung (Hannover: Heise, 2007), 23f., or Denis Köhler and Katrin Scharmach, “Zur Geschichte der Rechtspsychologie in Deutschland unter besonderer Betrachtung der Sektion Rechtspsychologie des BDP” (2013), 457.

     
  36.

    Silvan Niedermeier, “‘The Only Torture Involved Is Self-Induced’: Zur Geschichte des Lügendetektors in den USA,” in Wahrheit und Gewalt: Der Diskurs der Folter in Europa und den USA, ed. Thomas Weitin (Bielefeld: Transcript, 2010), 227–229.

     
  37.

    Margaret Gibson, “The Truth Machine: Polygraphs, Popular Culture and the Confessing Body,” Social Semiotics 11, no. 1 (2001): 65.

     
  38.

    Ken Alder, “America’s Two Gadgets: Of Bombs and Polygraphs,” Isis 98, no. 1 (2007): 127.

     
  39.

    Niedermeier, “‘The Only Torture Involved’,” 223ff.

     
  40.

    Richard A. Leo, Police Interrogation and American Justice (Harvard University Press, 2009), 82. See also the analysis of the polygraph as a “third-degree machine” by Geoffrey C. Bunn, “Spectacular Science: The Lie Detector’s Ambivalent Powers,” History of Psychology 10, no. 2 (2007): 159ff.

     
  41.

    As Ken Alder writes, “[…] all his life, Keeler continued to practice sleight-of-hand tricks and magical acts.” See Ken Alder, The Lie Detectors: The History of an American Obsession (New York: Free Press, 2007), 56.

     
  42.

    In this test, the person connected to the apparatus is asked to select a number within a given range. When the individual numbers are subsequently queried, the test person is instructed to deny in each case that it is the number he or she selected.

     
  43.

    See Geoffrey C. Bunn, “The Lie Detector, Wonder Woman and Liberty: The Life and Work of William Moulton Marston,” History of the Human Sciences 10, no. 1 (1997): 95; see also: Bunn, “Spectacular Science,” 166.

     
  44.

    Arthur W. Galston and Clifford L. Slayman, “The Not-So-Secret Life of Plants: In Which the Historical and Experimental Myths About Emotional Communication Between Animal and Vegetable Are Put to Rest,” American Scientist 67, no. 3 (1979): 338–341.

     
  45.

    See ibid., 337. The experiments of Backster could not be replicated for decades. Arthur Galston and Clifford Slayman describe Backster’s erroneous conclusion as follows: “At this point there took place a totally unscientific discontinuity of logic. Without investigating the recording conditions to identify the sources of unexpected noise and drift, Backster jumped to the conclusion that because the plant record resembled in a single respect human records obtained during emotional reaction, the plant must have been experiencing something like human emotion. This is a classical semantic confusion of identity, roughly equivalent to arguing that because the face of the full moon displays dark patches resembling a human face, there must be a real man in the moon.” Ibid., 340.

     
  46.

    For some media reports, see Xaver Frühbeis, “Die Gefühle des Drachenbaums,” Deutschlandfunk (2 February 2006). https://www.deutschlandfunk.de/die-gefuehle-des-drachenbaums.871.de.html?dram:article_id=125409; Michael Pollan, “The Intelligent Plant,” The New Yorker (16 December 2013). https://www.newyorker.com/magazine/2013/12/23/the-intelligent-plant; “Cleve Backster Talked to Plants. And They Talked Back.” The New York Times (21 December 2013). https://www.nytimes.com/news/the-lives-they-lived/2013/12/21/cleve-backster/.

     
  47.

    Robert Zagolla, Im Namen der Wahrheit: Folter in Deutschland vom Mittelalter bis heute, 1. Aufl. (Berlin-Brandenburg: be.bra, 2006), 129.

     
  48.

    Köhler and Scharmach, “Zur Geschichte der Rechtspsychologie in Deutschland,” 457.

     
  49.

    BGH, Urteil vom 16.2.1954 – 1 StR 578/53 (LG Zweibrücken), NJW 1954, 649 (beck-online).

     
  50.

    Carolin Stenz, “‘Demokratisierung’ des Strafprozesses? Der ‘Lügendetektor,’ der Entlastungsbeweis und die Reform des Strafverfahrensrechts zwischen Demokratisierung und Liberalisierung (1975-1983),” Rechtskultur 3 (2014): 48.

     
  51.

    BGH, Urteil vom 16.2.1954 – 1 StR 578/53 (LG Zweibrücken), NJW 1954, 649 (beck-online); author’s translation and emphasis.

     
  52.

    Ibid., author’s translation.

     
  53.

    The formula taken up by the 1959 judgement (in BVerfGE 9, 89 (95)) is based on the sentence, “Human dignity is affected when the human being is degraded to an object, to a mere means, to a fungible quantity,” from Günter Dürig, “Der Grundrechtssatz von der Menschenwürde: Entwurf eines praktikablen Wertsystems der Grundrechte aus Art. 1 Abs. I in Verbindung mit Art. 19 Abs. II des Grundgesetzes,” Archiv des Öffentlichen Rechts 81 (N.F. 42), no. 2 (1956): 127; author’s translation.

     
  54.

    BGH, Urteil vom 16.2.1954 – 1 StR 578/53 (LG Zweibrücken), NJW 1954, 649 (beck-online); author’s translation and emphasis.

     
  55.

    Here, I refer to a concept introduced by Wiebe Bijker, who points out that “The technical is socially constructed, and the social is technically constructed - all stable ensembles are bound together as much by the technical as by the social.” Wiebe E. Bijker, “Do Not Despair: There Is Life After Constructivism,” Science, Technology & Human Values 18, no. 1 (1993): 125.

     
  56.

    For a very constructive and applicable approach to the analysis of distributed and gradualized agency, see Werner Rammert, “Distributed Agency and Advanced Technology,” in Agency Without Actors? New Approaches to Collective Action, 1st ed. (London: Routledge, 2012), 98–112; Werner Rammert and Ingo Schulz-Schaeffer, “Technik und Handeln: Wenn soziales Handeln sich auf menschliches Verhalten und technische Artefakte verteilt,” in Können Maschinen handeln?: Soziologische Beiträge zum Verhältnis von Mensch und Technik (Frankfurt am Main: Campus, 2002), 11–64.

     
  57.

    BGH, Urteil vom 16.2.1954 – 1 StR 578/53 (LG Zweibrücken), NJW 1954, 649 (beck-online); author’s translation and emphasis.

     
  58.

    Ralf Kölbel, “Zur Problematik der strafprozessualen Körperhermeneutik,” Goltdammers Archiv für Strafrecht 153 (2006): 483 ff.

     
  59.

    Rammert, “Distributed Agency and Advanced Technology,” 90.

     
  60.

    Doris Schweitzer offers a fruitful proposal for how to use subjectivation theory for the analysis of the legal person. See Doris Schweitzer, “Die Subjektwerdungen der juristischen Person. Subjektivierungstheoretische Überlegungen zur rechtlichen Personalisierung von Kollektiven,” in Jenseits der Person, eds. Thomas Alkemeyer, Ulrich Bröckling, and Tobias Peter (Bielefeld: transcript, 2018), 175–194.

     
  61.

    Gibson, “The Truth Machine,” 66.

     
  62.

    Ibid., 67.

     
  63.

    Katja Franko Aas, “‘The Body Does Not Lie’: Identity, Risk and Trust in Technoculture,” Crime, Media, Culture 2, no. 2 (2006): 145.

     
  64.

    Max Steller, “Psychologische Diagnostik – Menschenkenntnis oder angewandte Wissenschaft?” in Psychologische Begutachtung im Strafverfahren: Indikationen, Methoden und Qualitätsstandards (Darmstadt: Steinkopff, 2005), 9.

     
  65.

    Considerations of and explanations for the measurement emerged in the course of the 1990s and were partly incorporated into subsequent judgements of 1998, such as LG Düsseldorf, Beschluss vom 09.10.1998 – VI 14/98, 648, in StV 1998, 647–649 (Heft 12).

     
  66.

    AG Demmin, Zweigstelle Malchin, Urteil vom 07.09.1998 (94 Ls 182/98), JurPC Web-Dok. 176/1998, Par. 40; author’s translation and emphasis.

     
  67.

    The Section of Legal Psychology [“Sektion Rechtspsychologie”] was founded in 1978 within the Association of German Psychologists (BDP); in 1985, the Section was finally also admitted to the German Society of Psychology. See Köhler and Scharmach, “Zur Geschichte der Rechtspsychologie in Deutschland,” 458.

     
  68.

    BGH, Urteil vom 17.12.1998 – 1 StR 258/98 –, juris, Rd. 26; BVerwG, Beschluss vom 31.07.2014 – 2 B 20/14 –, juris, Rd. 2; AG Bautzen, Urteil vom 26.10.2017 – 42 Ds 610 Js 411/15 jug –, juris Rd. 45.

     
  69.

    AG Bremen, Beschluss vom 25.04.2000 – 61 F 0734/99, abgedruckt in STREIT 4/2000, S. 177.

     
  70.

    BGH, Urteil vom 17.12.1998 – 1 StR 258/98 –, juris, Rd. 26; AG Bautzen, Urteil vom 26.10.2017 – 42 Ds 610 Js 411/15 jug –, juris Rd. 51; LG Düsseldorf, Beschluss vom 09.10.1998 – VI 14/98, 648 (in StV 1998, 647–649 (Heft 12)).

     
  71.

    As Cornelius Borck points out, in the course of his EEG studies in the 1950s Norbert Wiener had already cultivated a techno-centric view of the relationship between the working processes of the computer and the brain. See Cornelius Borck, Hirnströme. Eine Kulturgeschichte der Elektroenzephalographie (Göttingen: Wallstein, 2005), 298.

     
  72.

    See Peter Hucklenbroich, “Gedankenlesen mittels Neuroimaging? – Zur Wissenschaftstheorie bildgebender Verfahren in Medizin und Neurowissenschaft,” in Visualisierung und Erkenntnis. Bildverstehen und Bildverwenden in Natur- und Geisteswissenschaften, eds. Dimitri Liebsch and Nikola Mößner (Köln: Herbert von Halem Verlag, 2012), 270–271.

     
  73.

    Researcher, Int07, 231; author’s translation and emphasis.

     
  74.

    Hucklenbroich, “Gedankenlesen mittels Neuroimaging?”; Frederic Gilbert, Lawrence Burns, and Timothy Krahn, “The Inheritance, Power and Predicaments of the ‘Brain-Reading’ Metaphor,” Medicine Studies 2, no. 4 (2011): 229–44.

     
  75.

    Researcher/b, Int03, 227; 10; author’s translation and emphasis.

     
  76.

    Researcher, Int03, 172; author’s translation and emphasis.

     
  77.

    Researcher/a, Int03, 226; author’s translation and emphasis.

     
  78.

    Researcher, Int02, 5; author’s translation and emphasis.

     
  79.

    Researcher, Int07, 36; author’s translation and emphasis.

     
  80.

    See both Geoffrey C. Bunn, “The Lie Detector, Wonder Woman and Liberty: The Life and Work of William Moulton Marston,” History of the Human Sciences 10, no. 1 (1997): 114, and Littlefield, The Lying Brain, 10.

     
  81.

    Alder, “America’s Two Gadgets,” 135.

     
  82.

    Daniel D. Langleben and Jane Campbell Moriarty, “Using Brain Imaging for Lie Detection: Where Science, Law, and Policy Collide,” Psychology, Public Policy, and Law 19, no. 2 (2013): 222–234.

     
  83.

    See, for example, John-Dylan Haynes and Geraint Rees, “Decoding Mental States from Brain Activity in Humans,” Nature Reviews Neuroscience 7, no. 7 (2006): 523; Kendrick N. Kay et al., “Identifying Natural Images from Human Brain Activity,” Nature 452, no. 7185 (2008): 352–355; Mart Bles and John-Dylan Haynes, “Detecting Concealed Information Using Brain-Imaging Technology,” Neurocase 14, no. 1 (2008): 82–92.

     
  84.

    Sean A. Spence et al., “Speaking of Secrets and Lies: The Contribution of Ventrolateral Prefrontal Cortex to Vocal Deception,” NeuroImage 40, no. 3 (2008): 1411–1418.

     
  85.

    Researcher, Int17, 15; author’s translation and emphasis.

     
  86.

    Gibson, “The Truth Machine,” 72.

     
  87.

    Researcher, Int16, 114; author’s translation and emphasis.

     
  88.

    Researcher, Int02, 246; author’s translation and emphasis.

     
  89.

    For example: Researcher, Int02, 246; Researcher, Int17, 15; 23; 86; 87; author’s translation and emphasis.

     
  90.

    Mart Bles and John-Dylan Haynes, “Detecting Concealed Information Using Brain-Imaging Technology,” Neurocase 14, no. 1 (2008): 87.

     
  91.

    Schleim, “Gedankenlesen,” 110.

     
  92.

    John-Dylan Haynes, “Brain Reading: Decoding Mental States from Brain Activity in Humans,” in Oxford Handbook of Neuroethics, eds. J. Illes and B. J. Sahakian (Oxford: Oxford University Press, 2011). For further information see also Nikolas Rose, “Reading the Human Brain: How the Mind Became Legible,” Body & Society 22, no. 2 (2016): 13.

     
  93.

    Researcher, Int17, 49; author’s translation and emphasis.

     
  94.

    See, for example, Kendrick N. Kay et al., “Identifying Natural Images from Human Brain Activity,” Nature 452, no. 7185 (2008): 352–355; Yoichi Miyawaki et al., “Visual Image Reconstruction from Human Brain Activity Using a Combination of Multiscale Local Image Decoders,” Neuron 60, no. 5 (2008): 915–929.

     
  95.

    Researcher, Int17, 43; author’s translation and emphasis.

     
  96.

    Lorraine Daston and Peter Galison define mechanical objectivity as a scientific virtue that emerged in the late nineteenth century with the advent of photographic and autoprint images in the natural sciences. The idea here was that these methods “let nature speak for itself” and are free of human subjectivity. See Lorraine Daston and Peter Galison, Objectivity (New York: Zone Books, 2008), 120. Even though in the twentieth century mechanical objectivity was supplemented by the “trained judgement” (ibid., 311) of experts interpreting data and images, mechanical objectivity still plays a central role when it comes to the explanation of brain-imaging methods or results.

     
  97.

    Sarah de Rijcke and Anne Beaulieu, “Networked Neuroscience: Brain Scans and Visual Knowing at the Intersection of Atlases and Databases,” in Representation in Scientific Practice Revisited, eds. Catelijne Coopmans et al. (Cambridge: MIT Press, 2014), 132.

     
  98.

    Judge, Int26, 169; author’s translation and emphasis.

     
  99.

    Gibson, “The Truth Machine,” 65.

     
  100.

    Ibid., 63–64; author’s emphasis.

     
  101.

    See Hucklenbroich, “Gedankenlesen mittels Neuroimaging?” 270.

     
  102.

    Hagner, “Mind Reading, Brain Mirror, Neuroimaging,” 298.

     
  103.

    Bunn, “Spectacular Science,” 156–178.

     
  104.

    Wim Vandemaan, Perry Rhodan 2533: Reise in die Niemandswelt: Perry Rhodan-Zyklus ‘Stardust’ (Perry Rhodan, 2010), 27; author’s translation.

     
  105.

    Rose, “Reading the Human Brain,” 9f.

     
  106.

    Hucklenbroich, “Gedankenlesen mittels Neuroimaging?” 282.

     
Acknowledgements

This research was supported by the German Research Foundation (grant no. 320725678). In particular, the analysis of jurisprudence was developed in close cooperation with my colleague Bettina Paul, for which I express my deepest gratitude. I would also like to thank Torsten Voigt for his constant and constructive support.