We have come to regard Theory of Mind (ToM) as a natural process through which readers interpret and comprehend characters in fiction. Authors such as Jane Austen or Virginia Woolf, writing books about human characters like Emma or Mrs. Dalloway, expect their readers to infer characters’ motivations and feelings by assuming a similarity to the readers’ own thinking and feeling bodies. The authors often construct their characters as human beings with ToM skills similar to the reader’s. However, writers may also undermine readers’ and characters’ assumptions that they correctly perceive others’ intentions, by deliberately depicting mistakes in their construction of other people’s minds—or misconstruction—thereby creating complex webs of misunderstanding and misinterpretation between people of different cultures or even those growing up in the same home (as in Shakespeare’s Othello or A Midsummer Night’s Dream). My interest in this essay is with authors who go so far as to construct test cases of Theory of Mind by creating interactions between characters that are clearly human and characters that are artificial constructs, harboring alien, uncharted minds. Robots, golems, moving statues, speaking pictures, and dynamic chess pieces (objects visually and cognitively marked as different from humans and thus as more than normally unpredictable) may be presented in narratives as reliable resources for absolute “truth” and reflectors of the human soul, or, at other times, as completely opaque. By describing the interaction between human and nonhuman bodies, and the clash of cultural and biological expectations with alien neural systems, the texts probe the limits of humanity and human expectations, and force the reader to reconsider reactions that seem “natural.” Fantastical worlds and artificial minds provide a perfect laboratory for examining how human characters contend with, adapt to, and overcome—or fail to overcome—obstacles to their own survival and success. Such worlds allow for “extreme case analysis” and provide readers with optional cognitive “playgrounds” for their own thinking and feeling brains.
My examples here are taken from Isaac Asimov and Charles Lutwidge Dodgson (also known as Lewis Carroll), in their engagements with science fiction and fantasy. These authors probe the question of how humans, when confronted with an alien mode of thinking or behavior, can understand the reasoning inside such brains. Stanislaw Lem in Solaris maintains that such feats are impossible: there are entities too great and too obscure for human minds to understand or even to approach (namely the ocean).1 Humans hardly understand their own hidden desires and motives, their obscure biology—how can they comprehend others? But authors like Asimov and Dodgson take the approach that even a partial success is worthwhile, and they optimistically create human characters that learn not to rely on intuitive human ToM skills, but rather to deploy a “Theory of Artificial Minds” (ToAM) in order to succeed or even survive in complex fictional environments. Their protagonists acquire ToAM skills as players acquire a game’s rules and strategy, adopting the method by which logicians and scientists approach a problem: they study the environment and its constraints, they observe the alien creatures’ behaviors, collect as much empirical and background (or historical) information as possible, and then they deduce a set of premises, or game rules. These rules are used in turn to predict the creatures’ responses, and must be modified according to the success or failure of these predictions. Of course in a simple game scenario there is an underlying assumption that the alien behavior will be consistent, but occasionally inconsistency is also taken into account, and the human characters must live with uncertainty and develop heuristics to reduce this instability. By endowing their characters with an approach that is similar to Dodgson’s description of formal logic—that is, take whatever “rules” or premises you observe (whether they seem true or false in the “real world”) and follow them to their logical conclusion2—the authors reduce the emotional and even intuitive aspects of character-to-character interactions. The human character retains the propensity to react emotionally, but manages to control that spontaneous reaction (which may be very negative) and to use inference and logic instead to comprehend the alien one. A nonemotional reaction, however, is not an inhumane one. Frequently the protagonists that are most logical and least emotional, like Powell in I, Robot, also prove to be the most open and accepting of alternative ways of thinking. The highly popular character Spock in the Gene Roddenberry Star Trek series refrains from emotional reactions and uses logical inference to understand the needs of other species; he has become a model of respect for diversity and difference in both the television series and the Star Trek books. Alice in Dodgson’s books for children is just a child, easily moved to tears or anger, but she often manages to control her emotional behavior and apply logical thinking in order to interact with and learn from alien beings. Dodgson’s and Asimov’s human protagonists mediate for the reader rational access to alien minds and thus encourage readers to adjust to new ways of thinking and alternative perspectives.
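The interpretive loop described above (observe, infer candidate rules, predict, and revise on failure) can be sketched schematically. The short Python fragment below is only an illustrative toy of that paraphrase, not anything found in Asimov or Dodgson; every name and value in it (learn_alien_rules, the sample situations, and so on) is an invented assumption for the example.

```python
# A schematic "Theory of Artificial Minds" loop: observe behavior, keep only the
# candidate rules whose predictions survive testing. Purely illustrative; all
# names and data are invented for this sketch.

def learn_alien_rules(observations, candidate_rules, alien_behavior):
    """Return the candidate rules that correctly predict every observed reaction."""
    surviving = list(candidate_rules)
    for situation in observations:
        actual = alien_behavior(situation)           # what the alien actually does
        surviving = [rule for rule in surviving
                     if rule(situation) == actual]   # discard falsified rules
    return surviving

# Toy "alien" that flees only when threatened, and three competing hypotheses.
alien = lambda situation: "flee" if situation == "threat" else "ignore"
hypotheses = [
    lambda s: "flee",                                # always flees
    lambda s: "flee" if s == "threat" else "ignore", # flees only under threat
    lambda s: "ignore",                              # never reacts
]

remaining = learn_alien_rules(["threat", "calm"], hypotheses, alien)
print(len(remaining))  # 1: only the hypothesis consistent with all observations survives
```

The elimination of falsified hypotheses crudely mirrors how the protagonists discard readings of alien behavior that fail in practice, though, as noted above, the characters must also sometimes tolerate inconsistency rather than resolve it.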
Isaac Asimov’s I, Robot, published in 1950, was groundbreaking in its inventiveness. Asimov introduced the robot with a positronic brain, the forefather of Data in Star Trek: The Next Generation. Science fiction differs from fantasy, explains Larry Niven,3 in that (at the time it is written) you cannot prove that it cannot happen: the premises upon which the sci-fi worlds are constructed must seem plausible and believable within the current scientific community (readers often argue about the details, and authors like Niven respond to their concerns, as with Niven’s Ring World series). Fantastical worlds, however, while they need to be consistent within their own narrative schema, do not have to seem scientifically plausible (like Terry Pratchett’s utterly flat Discworld, which rides on the backs of four elephants standing on the back of Great A’Tuin the turtle). Asimov’s artificial, human-constructed, positronic robot brain is consistent with a science fiction approach, and a hierarchy of three basic rules built into its logic prescribes the robot’s behavior (or “motivation”). A robot may not injure a human being or, through inaction, allow a human being to come to harm; it must obey orders given it by human beings except where such orders would conflict with the first law; and it must protect its own existence as long as such protection does not conflict with the first or second law. However, how these rules are construed by the robot’s electronic mind is often surprising, and the human characters face the problem of what happens “when one is face to face with an inscrutable positronic brain, which the slide-rule geniuses say should work thus and so. Except that it doesn’t” (56). In “Runaround,” Isaac Asimov’s story of a drunken robot, two human robot specialists, Mike Donovan and Greg Powell, find themselves in dire need of a robot’s services, but the robot, named Speedy, does not function as planned. Rather than bringing back selenium from a certain pool on Mercury’s surface, the robot circles the pool again and again, swaying drunkenly and quoting lines from Gilbert and Sullivan. The narrator describes the desperation of the two men: unless they understand why Speedy behaves as it does, and motivate it to act differently, they will burn to death within a few hours. Having a Theory of Artificial Mind becomes here a matter of life or death: thus the story provides for the reader an extreme, nearly bare-bones testing ground of human versus machine thought. Asimov’s narrative is constructed as a conundrum or dilemma similar to the detective story. As in detective stories, there is a riddle or problem that must be resolved, and the basic “rules of the game” are already known ahead of time: the given physical realities and the possible motivations for the culprit’s behavior. Using all the relevant facts, human characters combine Theory of Mind skills with deductive reasoning to construct a solution. In I, Robot, the protagonists know that the three laws of robotics determine Speedy’s behavior (rather than the love or power or money found in human detective stories), and they must use the same rules to devise a solution to their problem. Lisa Zunshine suggests that detective stories exercise readers’ metarepresentational ability, or the ability to keep track of sources of representations (who said what, when), to store such data under advisement, and to reevaluate their truth-value once more information becomes available.
Detective stories provide pleasure in the cognitive training of this ability (123). In addition, however, these “whodunits” provide the reader with the enormous satisfaction of solving riddles and bringing open questions to a neat and logical conclusion—a response shared by mathematicians, scientists, crossword puzzle solvers, and novel readers alike. While Asimov’s stories exercise readers’ metarepresentational skills to some extent, they emphasize more the gratification of riddle resolution—provided by detective-like scientists and robopsychologists with unusual ToM skills.
In “Runaround,” Asimov’s human characters model two distinct approaches to interpreting artificial behavior. The narrator employs language patterns, action, and thought descriptions to stress the difference. Mike Donovan represents the average human point of view: he reacts very emotionally to the stressful situation, and becomes frustrated and angry, attributing human agency to the robot’s mind. Donovan construes the robot’s behavior as deliberately antagonistic and menacing. When he calls Speedy drunk, he evokes the negative connotations of tales of self-induced inebriation (e.g., the biblical Noah and Lot), and when he applies the word “thinks” he implies that Speedy is an independent, free-willed thinker. He thus attributes a malevolent intention to Speedy’s refusal to obey orders. By suggesting that Speedy is playing a game, Donovan conjures an intentionally irresponsible, even teasing, and aggravating attitude: “‘To me, he’s drunk,’ stated Donovan, emphatically, ‘and all I know is that he thinks we’re playing games. And we’re not. It’s a matter of life and very gruesome death’” (43). Greg Powell, on the other hand, is more rational and inclined to proceed logically, and he does not easily fall back on intuitive human ToM. Powell neither treats Speedy as a human nor assumes deliberate insubordination. Refuting the negative implication of self-induced inebriation, he replies, “Speedy isn’t drunk—not in the human sense, because he’s a robot, and robots don’t get drunk. However, there’s something wrong with him which is the robotic equivalent of drunkenness” (43).
When readers first read Donovan’s allegations, they may construct a certain view of Speedy’s agency and motivation. However, once Powell reacts, Donovan’s view will come under advisement. Powell’s approach asks the reader to revise an earlier opinion formed from Donovan’s highly emotional message. He refutes Donovan’s humanization of Speedy’s intentions by objectifying Speedy, saying: “A robot’s only a robot. Once we find out what’s wrong with him, we can fix it and go on” (43). As do investigators in detective stories, Powell attempts to reconstruct the scenario before Speedy “went bad,” so as to imagine what could have brought about Speedy’s bizarre behavior. Powell thus acts for the reader as a mediator to the artificial mind. He analyzes the environment around the pool, pointing out that while Speedy is perfectly adapted to the normal Mercurian environment, the particular region they are in is abnormal and particularly toxic to the robot’s iron body (“There’s our clue”). He then has Donovan reiterate exactly what he asked of Speedy and realizes that Donovan failed to emphasize the urgency of the request. Powell, using the basic laws of robotics upon which Speedy’s positronic brain is constructed, analyzes the miscommunication and the electronic conflict that occurred in the machine brain. Speedy’s Rule 3 (saving himself) was strengthened as a result of his costly construction, given that he is “expensive as a battleship.” Although obeying Rule 2—human commands—is in general of higher priority than Rule 3, the orders given by Donovan were casual and without special emphasis, so the Rule 2 potential was rather weak. Powell concludes that the conflict between potentials is what drove Speedy to the pool and away from it, causing him to circle it; the point where the potentials strike an equilibrium is the distance from the pool that Speedy maintains. This explains the drunken attitude, says Powell:
And that, by the way, is what makes him drunk. At potential equilibrium, half the positronic paths of his brain are out of kilter. I’m not a robot specialist, but that seems obvious. Probably he’s lost control of just those parts of his voluntary mechanism that a human drunk has. (46)
Thus using inference and logic, Powell shows the reader how a specific combination of decision states, or algorithmic possibilities, caused a conflict of interest that confused the machine brain. The lack of urgency in Donovan’s command, given a low priority level by the machine brain, is what triggered the conflict with Speedy’s unusually high-priority instruction to protect his expensive iron body. Powell alerts the reader to a more rational representation of the particular reality of machine “thought”—a type of short-circuiting of metal neurons. This human character thus acts like a detective, psychologist, and computer analyst in one. He probes the source of conflict that could cause a robot to appear malevolent and confrontational, without making the mistake of thinking that the robot has an autonomous, self-governed inner self to which his actions are clues. Powell’s deductions prove correct and allow him to devise a solution that brings Speedy out of his confused state and back to normal functioning, saving the lives of the two humans. The scientist puts himself at risk (effectively triggering a Rule 1 response), overriding Speedy’s circuit malfunction with a higher priority command that forces Speedy into positive action. Once Speedy returns to normal and realizes how poorly he performed, he apologizes profusely, thereby disproving Donovan’s insinuation of deliberate robot insubordination and reinforcing Powell’s approach.
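Powell’s reconstruction can be read, loosely, as a balance of two opposing potentials. The sketch below is my own toy rendering of that reading under invented assumptions (the weights, the danger_radius, and the functional forms are not Asimov’s); it merely shows how a weak “approach” drive and an abnormally strong “retreat” drive can cancel at a fixed distance from the hazard.

```python
# Toy model of the Rule 2 / Rule 3 "potential equilibrium" Powell describes.
# All weights and functional forms are invented for illustration only.

def rule2_pull(distance, order_urgency=0.3):
    """Drive toward the pool: stronger when the human order was urgent."""
    return order_urgency

def rule3_push(distance, danger_radius=10.0, self_preservation=3.0):
    """Drive away from the pool: grows as Speedy nears the dangerous region."""
    return self_preservation / (1.0 + distance / danger_radius)

def net_drive(distance):
    """Positive: move toward the pool; negative: retreat from it."""
    return rule2_pull(distance) - rule3_push(distance)

# Find the distance at which the two potentials cancel (bisection search).
lo, hi = 0.0, 1000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if net_drive(mid) < 0:   # retreat still dominates: equilibrium lies farther out
        lo = mid
    else:
        hi = mid

print(f"Speedy circles at roughly {lo:.1f} units from the pool")
```

In the toy, raising order_urgency pulls the equilibrium closer to the pool; in the story, the stalemate is finally broken by an imperative of still higher priority, Rule 1, when Powell endangers himself.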
Many of the stories in I, Robot repeat this pattern of strange robot behavior followed by detective-like logical analysis, or ToAM, that explains it. The author thus encourages readers to adopt analytic and deductive thinking when dealing with initially incomprehensible brains. Zunshine suggests that high-intensity detective stories pay a price for their demanding cognitive exercise. Such stories must be more focused and emphasize a certain plotline above all others. They do not, for example, give equal time to romance and mystery, since that may create competition with the “metarepresentational framing required to process the detecting elements of the story” (143, 147). Emphasizing aspects of romance or other genres may weaken the mystery plot by compromising readers’ ability to keep track of characters’ states of mind, especially when they are under advisement. Asimov’s stories, too, focus on specific dilemmas, and they rarely create multiple plot lines or complex intentionality levels. Rather, they concentrate on the “artificial mind” problem at hand. Readers need to concentrate on the state of mind of the people trying to understand the machines, the current artificial world rules, and the possible options for robot thought processes. Readers (along with human characters) are working within a type of symbolic logic framework, in which each new premise that the author introduces is used in constructing or testing the next stage.
An exception to this focus on logic is Asimov’s “Liar!,” a story in which the schema is reversed: instead of humans trying to read the minds of robots, a mind-reading robot attempts to exercise a theory of human minds. This strange creation has on the one hand direct access to human thoughts (literally “reading” them), but on the other it has no human judgment or evaluative skills, nor any human intuition or common sense. The robot’s behavior is structured upon the three rules embedded into its logic and its positronic brain, the first of which is not to hurt people, and the second of which is to obey orders. His interpretation of these commands is wish-fulfillment: when he perceives a certain desire or wish within the human characters, for example a romantic affection for someone or an ambition to take over another’s position, the robot attempts to fulfill this desire through stories, or lies. Because the robot lacks the ability to generate embedded narratives of the minds he reads, he cannot imagine the full story of the humans’ reactions, and thus he cannot predict the consequences of any actions he takes based on his privileged knowledge. Disregarding the misery that will result from feeding humans on fantasies, he misleads the human characters with invalid “facts,” promising them what they cannot have. They, in turn, assume that a machine cannot lie, fail to verify his input, and act foolishly on false assumptions. When he tells the robopsychologist Susan Calvin that her affections for one of the scientists are reciprocated by that person, even though the man in fact loves another woman, Susan acts on his information and makes futile overtures. The robot does not understand the relative magnitudes of human pain, or that the shame of being openly rejected and losing one’s pride may be far worse than living with unrequited love. The tenuous balance between the fear of being made to look a fool and the wish to obtain one’s desire is not apparent to a robot brain. “Liar!” thus provides a test case depicting what happens when one’s mind is bound by certain fixed rules, and is privileged with special knowledge, but does not have access to human cultural and biological requirements and expectations. While the robot fails to develop a correct theory of human mind, Susan does figure out how his artificial mind works. She realizes that her assumption that a machine is unbiased and therefore provides true facts was wrong, and that all her coworkers fell for the same illusion, eager to have their fantasies fulfilled. Incensed by being made to look a pathetic fool, she destroys the robot by charging him with causing pain to humans and thus driving him mad.
About a century before Asimov, Charles Lutwidge Dodgson also created worlds full of artificial beings. Dodgson was a mathematics lecturer at Oxford who loved math and logic, observing a unique type of truth and beauty in these subjects. He believed everyone should be trained in logic theory and mathematics, as they provided enjoyment and an excellent way to develop the mind. Using various visual aids he developed teaching methods suited to different audiences, including those not expected to study logic in Victorian England—young children, high school girls, and women teachers. In his first volume of Symbolic Logic, which appeared in February 1896, he addressed schoolteachers in a separately printed prospectus, where he presented symbolic logic as a healthy mental recreation in the education of young people between twelve and twenty:
I claim, for Symbolic Logic, a very high place among recreations that have the nature of games or puzzles; and I believe that any one, who will really try to understand it, will find it more interesting and more absorbing than most of the games or puzzles yet invented. (45)
Dodgson claimed that people harbor three false ideas about the study of logic: “that it is too hard for average intellects, that it is dry and uninteresting, and that it is useless.” He rebutted all three, and concluded that this fascinating subject would be “of service to the young” and should be taken up in high schools and by families.4
Much as Asimov constructed robotic mind-riddles for youth and adults, Dodgson invented for children fascinating games, puzzles, and riddles that engaged their attention and encouraged them to solve problems and to experience the satisfaction of coming up with clever solutions. I suggest that Dodgson’s Alice books themselves (Alice in Wonderland and Through the Looking Glass), the first originally written for Alice Liddell,5 were a powerful means of expressing to thousands of children and adults, across generations, the beauty and wit and power of logic and math. In these books, Dodgson creates a new model protagonist who acts out his ideas, a heroic paradigm that boldly seeks new and unknown regions—not just physical regions, but also regions of the mind.6 Dodgson’s Alice models a logician’s approach to alternative ways of thinking and to the understanding of alien minds.
Dodgson’s imaginative worlds, created for a young audience, are different from Asimov’s more serious, scientific-sounding constructs, and they do not claim to be technologically possible. The inhabitants are purely fantastical, a combination of mythological creatures, characters out of fables, political cartoons, and humanobs (humanlike, man-made animated objects). While the stories seem like a wild collage of children’s memories and scenes from pretend games, they display Dodgson’s remarkable acuity as a teacher of children. He uses objects and concepts that are familiar to many Victorian children (e.g., poetry, puns, and visual art) to introduce new ideas, and thus he provides a type of learning lab for the character Alice—and for the children who read her story. Unlike Powell and Donovan, Alice does not come equipped with a predetermined set of premises through which she can interpret alien behaviors. There are no “three laws of Wonderland creatures.” Like a child in a bewildering and alien adult world, she is unprepared and must undergo a complex process of learning to extrapolate world rules and acquire meaning. Many of the Wonderland or Looking Glass world creatures with which Alice interacts, especially the playing cards and chess game pieces, are comparable to elements of formulas and sets that Dodgson presented in his mathematics lectures and game theories. They have values and attributes that can be used to categorize them as elements of sets, and they are bound by specific game or world principles. Pawns, for example, can only move in specific directions, and in the Looking Glass world creatures must sometimes walk backward in order to move forward. When Alice approaches them with her Victorian child’s conceptions, she often finds that they cannot be understood simply through her own embodied ToM. Her critical, reasoning mind is the bridge that enables her to close the gap between human expectations and alien modes of thought, allowing her to construct a ToAM. Alice develops her ToAM gradually, by using—as Powell did—empirical information, inference, and deduction. At every new encounter she gains more knowledge. As she compiles a set of rules of behavior by which the strange creatures abide, she improves her ability to predict their reactions, and she devises adaptive strategies to deal with them and to maneuver toward her goals. Dodgson, through the narrator character (a type of “fond uncle”), constructs Alice as a curious, independent, and humorous person with deductive powers, as well as frustrations and concerns—someone with whom children can identify and whose learning methods they may imitate. Alice’s learning process thus becomes a model through which Dodgson promotes logic and inference as tools for dealing with new ideas and unconventional minds.
How does Dodgson convey this process to the reader? Like Asimov, he uses his protagonist to mediate the access to alien minds. Such mediation may not be necessary when alien creatures exhibit humanlike emotions and behaviors, such as fear or anguish or anger. The narrator uses what Alan Palmer terms a thought-action (or action-disposition) type of description—for example, the gardener Five “had been anxiously looking across the garden,” and when the Queen appears “the three gardeners instantly threw themselves flat upon their faces” (Palmer 108). These behaviors are easily interpreted by both Alice and readers, and evoke immediate responses: Alice pities the frightened gardeners when they are threatened with decapitation, and she hides them in a flowerpot. But behaviors that are remarkably bizarre and cannot be understood intuitively, such as physical reactions based on prediction of the future or disembodied responses, require mediation and cognitive analysis. For example, many of the characters seem to ignore physical risk to themselves and to others, and they misinterpret each other’s (and Alice’s) needs. The Red Chess Queen, after an exhausting race, offers Alice dry biscuits to “quench her thirst” rather than water. The footman nearly gets his nose smashed, but utterly ignores the event.7 And the Duchess rocks her wailing baby under a barrage of flying pots and pans, insensitive to the blows and callously ignoring the danger to the baby’s well-being—in contrast with Alice, who (in keeping with her humane character) becomes greatly anxious (“jumping up and down in an agony of terror”) for the poor child’s welfare (84). In addition, the physical world rules of the fantastical worlds shape the physical reactions and assumptions of the inhabitants. They must run extremely fast to stay in the same place, or move backward to advance. In the Looking Glass world, where future events are known in advance (that is, creatures “live backwards”), the characters frequently act on the knowledge of a future or potential event rather than in response to a direct physical cause. Cakes must be served before they are cut. People are punished before they are found guilty, and injuries are tended to before they occur. The White Chess Queen cries in pain in anticipation of a pinprick from her brooch, but remains smiling when the event actually takes place, and refrains from screaming again, having “done all the screaming already.” The Queen thus displays a disembodied emotional response to a physical event occurring on her body, as if she were insensitive to pain resulting from the prick.8 The earthly empirical concept of direct “cause and event,” with which Alice is familiar, becomes distorted by the effect of “living backwards.”9 At this more complex level of behavior, Dodgson’s artificial creatures seem quite inscrutable, and Alice’s ToM inference falls short of predicting their reactions or guiding her as to how she should best respond.
The narrator has several ways of revealing the internal workings of alien minds to readers. He can give authorial commentary, or expose the alien mind to the reader directly, or use the protagonist, Alice, to figure out the ToAM for the reader. As it turns out, as in many other science fiction and fantasy narratives, including Asimov’s robot stories, the narrator creates an asymmetric mode of mind exposure. Unlike Alice’s broadly exposed contemplations and embedded narratives, the alien characters’ minds are opaque: the narrator does not expose them for viewing and there are no descriptions of their thought processes, nor of their embedded narratives. That is, there are no unequivocal authoritative statements such as “the Gardener felt sure that his end was near . . .” or “he hoped that Alice would save his life, despite her youth.” Nor does the text describe the cards’ mental plan to escape their plight, or the Queen’s attempt to prevent them. This mental opaqueness dehumanizes the artificial characters, and keeps the reader from “seeing through” their eyes and from adopting their point of view, maneuvering the reader to focus on Alice and look through her perspective. Alan Palmer describes a similar narrative technique in Evelyn Waugh’s Vile Bodies, where the narrator dehumanizes even human characters by hiding their thoughts and embedded narratives from the readers.10 It is Alice’s analytical and probing deliberations, revealed to the reader, that allow, as do Powell’s inferences, access to the characters.
But because Alice cannot rely on intuitive (or previously developed) understanding, she needs to adopt a more formal analytical approach. Alice needs to collect empirical data by observing the characters’ conduct (including their reaction time and cause-and-effect schema), by listening to their input, and by repeatedly questioning them. She needs to acquire new terms and language skills to better comprehend their world and to understand their obscure language and its implications (just as logicians and mathematicians require a definition of terms). Alice must then extrapolate new world rules from the creatures’ speech and body motions, test these rules by experimentation, draw conclusions, and finally apply them—both to social situations and to physical maneuvers en route to her goal.11 Thus fantastical worlds such as these encourage Alice to exercise a logician’s type of ToAM. Since Alice’s thoughts are described by the narrator, the reader becomes privy to the deductive processes, inferences, and conclusions she generates as she constructs a schema of new world rules and behaviors.
One example of the process of deliberation and new schema construction is the remarkable question-and-answer dialogue between the White Queen and Alice. In this discussion Alice studies the idea of futuristic causality, a phenomenon that explains alien creatures’ disembodied behavior as well as their ability to conceive “impossible things.” In a world where potential events are treated as if they had already occurred, empirical data is not necessary for something to be proven true and believable; the expectation or potential that it will occur suffices. Thus, explains the White Queen, the King’s Messenger is already sitting in prison, being punished, “and the trial doesn’t even begin till next Wednesday, and of course the crime comes last of all.” Alice expresses her confusion about such a system of causal rules: what if the event does not occur—that is, it becomes a failed alternative? The Queen insists that it may be so much the better, but Alice has a hard time accepting this alien notion, and especially its ethical implications. Not only will there be effect without cause, but conventions of justice based on the principle that only the guilty are punished may be altogether eradicated. Alice’s encounter with alien world rules thus makes her aware of cultural conventions she (and readers) have taken for granted:
“Suppose he never commits the crime?” said Alice.
“That would be all the better, wouldn’t it?” the Queen said . . .
Alice felt there was no denying that. “Of course it would be all the better,” she said: “but it wouldn’t be all the better his being punished.”
“You’re wrong there, at any rate,” said the Queen. “Were you ever punished?”
“Only for faults,” said Alice.
“And you were all the better for it, I know!” the Queen said triumphantly.
“Yes, but then I had done the things I was punished for,” said Alice: “that makes all the difference.”
“But if you hadn’t done them,” the Queen said, “that would have been better still; better, and better, and better!” . . .
Alice was just beginning to say “There’s a mistake somewhere—”, when the Queen began screaming . . . (248)
Alice’s analytical acquisition of a theory of alien minds seems to be highly conscious and goal-oriented, perhaps similar to the descriptions of autistic savants’ attempts at learning to read human faces and intentions.12 It seems to differ from the automatic or nonconscious manner in which normal human children acquire ToM skills. However, it may be that the basic cognitive/neurological structures underlying both automatic and self-aware ToM acquisition are alike or even common to both. The manner in which children acquire ToM skills is still debated by researchers, a crucial question being the relative weights of innate capabilities versus social training. Some researchers emphasize a more modular and “innate” approach, while others, such as Jay L. Garfield, Candida C. Peterson, and Tricia Perry, argue that the acquisition of ToM is “dependent as well on social and linguistic accomplishment, and that it is modular in only a weak sense” (505). Garfield et al. present a developmental model that is “social and ecological as opposed to being individualistic.” I suggest that underlying any model of automatic ToM acquisition in normal children is a basic biological capacity for rule acquisition, or positive-negative value acquisition (perhaps using neurological constructs such as those found in the conditioning described by Larry Squire and Eric Kandel, 57). That is, infant brains, on the basis of social and environmental interaction, must be able to construct certain cause-and-effect rules—albeit automatically and not in a language form. For example: “Given milk, the pain in my stomach disappears,” or “if the light is turned on, someone will come to me,” and eventually “if I hit someone, they will feel pain, shout at me, or hit me back.” Children’s ability to read visual body cues and to comprehend others’ intentions may develop automatically (or without awareness) through social interaction, but it relies on a neurological logical-comparative mechanism, which enables the child, unconsciously, to test inferences and to discern and remember causal rules. I believe this comparative mechanism comes into play, and becomes more noticeable to the interpreting person, when an obscure or alien mind crops up and the interpreter finds that he or she has to consciously extract information and generate new causal rules to comprehend that mind. Authors like Dodgson and Asimov, a logician and a scientist who use analytical thinking in their work, externalize and expose this logical process in their narratives by depicting the interaction of human and alien characters. They seem to suggest that humans should recognize the limitations of their normal human intuition and, when confronted with seemingly alien modes of behavior, adopt a more formal logical approach. Like their fictional protagonists, readers should observe others, collect empirical data, and extrapolate new world—or cultural—rules. This approach can encourage readers to adopt a more flexible, and perhaps sympathetic, approach to different modes of thinking. Many science fiction and fantasy writers (including Star Trek producer Gene Roddenberry) believed that their narratives could thus encourage a more pluralistic society.13
1 Stanislaw Lem, 1921-2006, was a science fiction writer whose works explore philosophical themes such as the impossibility of mutual communication and understanding. The official Lem site is Solaris, ed. Stanislaw and Tomasz Lem, 2000-2007, http://www.lem.pl/.
2 Morton N. Cohen, Lewis Carroll, a Biography (New York: Alfred A. Knopf, 1995) 446.
In 1886 Dodgson wrote The Game of Logic, in which he explains that it does not matter what the beginning premises are, as long as they are followed logically to the end:
I don’t guarantee the Premises to be facts . . . It isn’t of the slightest consequence to us, as Logicians, whether our Premises are true or false: all we have to make out is whether they lead logically to the Conclusion, so that, if they were true, it would be true also.
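A simple illustration of this principle (my own example, not Dodgson’s): the syllogism below is logically valid even though its premises are plainly false, which is exactly the separation of validity from truth that Dodgson describes.

```latex
% Illustrative syllogism: false premises, valid inference (my example, not Dodgson's).
\begin{align*}
&\text{Premise 1: All cats are reptiles.} && \forall x\,\bigl(C(x) \rightarrow R(x)\bigr)\\
&\text{Premise 2: All reptiles can fly.}  && \forall x\,\bigl(R(x) \rightarrow F(x)\bigr)\\
&\text{Conclusion: All cats can fly.}     && \forall x\,\bigl(C(x) \rightarrow F(x)\bigr)
\end{align*}
```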
3 The prolific science fiction writer Larry Niven has written over sixty books, among them the Ring World series, and has created the unique alien species the Pierson’s Puppeteers; personal interview, March 18, 2008, Bar Ilan University Creative Writing Visitor Lecture.
4 In the introduction to his Symbolic Logic, Charles Dodgson (a.k.a. Lewis Carroll) writes:
this is, I believe, the very first attempt . . . that has been made to popularize this fascinating subject . . . But if it should prove, as I hope it may, to be of real service to the young, and to be taken up, in High Schools and in private families, as a valuable addition to their stock of healthful mental recreation, such a result would more than repay ten times the labor that I have expended on it. (47)
5 Alice Liddell was one of Dodgson’s favorite child-friends. Morton Cohen, in his biography of Dodgson, describes the famous trip that took place on July 4, 1862, in which Charles Dodgson, Robinson Duckworth (Fellow of Trinity College), and the three Liddell sisters set off for Godstow for one of their picnics. On the way, Charles told Alice her story; Alice petitioned that the tale might be written out for her, and Alice’s Adventures Underground (later “in Wonderland”) was created (89).
6 For discussion of the new paradigm in Dodgson’s books, see Orley Marron, “Fictions Beyond Fiction: A Cognitive-Literary Study of Human-like Animated Objects in the Fiction of Lewis Carroll and Nathaniel Hawthorne,” diss., Bar-Ilan University, 2008.
7 The footman in Alice’s Adventures in Wonderland does not react as a normal human would to a near catastrophe:
At this moment the door of the house opened, and a large plate came skimming out, straight at the Footman’s head: it just grazed his nose, and broke to pieces against one of the trees behind him. “—or next day, maybe,” the Footman continued in the same tone, exactly as if nothing had happened. (81)
8 In Through the Looking Glass (hereafter TLG), the White Queen displays anticipatory responses before she is injured, crying out loudly, and then explaining to the worried Alice that “I haven’t pricked it yet, . . . but I soon shall—oh, oh, oh!” (249).
9 The essence of “living backwards” as explained by the White Queen in TLG 247 is that “one’s memory works both ways.” Alice explains that her memory works “only one way” and that she “can’t remember things before they happen.”
“It’s a poor sort of memory that only works backwards,” the Queen remarked.
“What sort of things do you remember best?” Alice ventured to ask.
“Oh, things that happened the week after next,” the Queen replied in a careless tone. (247)
10 Palmer discusses presentation of consciousness in Vile Bodies by Evelyn Waugh:
The key to the fictional minds in Vile Bodies is that there is a good deal of intermental thinking but very little evidence of doubly embedded narratives. There is very little indication of the existence of one character in the mind of another . . . The lack of doubly embedded narratives demonstrates some very solipsistic states of mind . . . [it] contributes substantially to the callous and unfeeling quality of the novel. (233-34)
11 For example, by conversing with the talking flowers in TLG, Alice learns to walk backward in order to move forward toward the Queen.
12 Autistic savant Temple Grandin wrote the book Animals in Translation: Using the Mysteries of Autism to Decode Animal Behavior. Grandin attributes her success as a humane livestock facility designer to her ability to recall detail, a characteristic of her visual memory. She has become aware of details to which animals are particularly sensitive, giving her insight into their needs and allowing her to design humane animal-handling equipment. Her site is http://www.grandin.com/.
13 Gene Roddenberry’s Star Trek scripts promoted many forms of interracial relations, exemplified by the first interracial kiss broadcast on television. Actress Nichelle Nichols, who played Lieutenant Uhura, discussed the importance of this symbolic event as well as Martin Luther King’s conviction that her role in the series provided an immensely important non-stereotypical model for African-Americans. She was presented on an equal basis, on a level of dignity and authority, and with the highest of qualifications. Interview and text can be found in the “Trek Nation” news site, “Nichols Talks First Inter-Racial Kiss,” Trek, ed. Christian Hohne Sparborth, 5 Sept. 2001, http://www.trektoday.com/news/050901_05.shtml.
Asimov, Isaac. I, Robot. New York: Bantam Dell Books, 2004. Print.
Bartley, William Warren. “Introduction.” Lewis Carroll’s Symbolic Logic, Part I & Part II. 1896. By Lewis Carroll. Comp. and ed. William W. Bartley. New York: Potter, 1977. Print.
Carroll, Lewis. The Annotated Alice: Alice’s Adventures in Wonderland & Through the Looking Glass. Comp. and ed. Martin Gardner. London: Penguin Books, 1970. Print.
---. Lewis Carroll’s Symbolic Logic, Part I & Part II. 1896. Ed. William Warren Bartley, III. New York: Potter, 1977. Print.
Cohen, Morton N. Lewis Carroll, a Biography. New York: Alfred A. Knopf, 1995. Print.
Dodgson, Charles Lutwidge. The Game of Logic. London: Macmillan & Company, 1887. Print.
Gardner, Martin. “Introduction and Notes.” The Annotated Alice: Alice’s Adventures in Wonderland & Through the Looking Glass. By Lewis Carroll. London: Penguin Books, 1970. Print.
Garfield, Jay L., Candida C. Peterson, and Tricia Perry. “Social Cognition, Language Acquisition and the Development of Theory of Mind.” Mind and Language 16.5 (2001): 494-541. Print.
Hambly, Barbara. Ishmael (Star Trek No. 23). New York: Pocket Books, 1985. Print.
Palmer, Alan. Fictional Minds. Lincoln: University of Nebraska Press, 2004. Print.
Squire, Larry R., and Eric R. Kandel. Memory: From Mind to Molecules. New York: Henry Holt and Company, 2003. Print.
Waugh, Evelyn. Vile Bodies. Boston: Little, Brown and Company, 1930. Print.
Zunshine, Lisa. Why We Read Fiction: Theory of Mind and the Novel. Columbus: The Ohio State University Press, 2006. Print.