3

minds and machines

 

 

I thought of what I called an ‘automatic sweetheart’, meaning a soulless body which should be absolutely indistinguishable from a spiritually animated maiden, laughing, talking, blushing, nursing us, and performing all feminine offices as tactfully and sweetly as if a soul were in her. Would anyone regard her as a full equivalent? Certainly not.

William James

At the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Alan Turing

Computer models of the mind no more imply that the mind is a computer than computer models of the economy imply that the economy is.

Hugh Mellor

Since the earliest days of philosophy, the latest technology has often been used as a model for understanding the mind. As the American philosopher John Searle observes:

In all the metaphors listed above, it is the mechanisms of the objects and not their materials that are important. If the Greeks ever did think that the mind worked like a catapult, this was not because they believed it to be made of wood and rope. Aristotle believed the mind to be the organizing principle of the body: ‘That is why we can wholly dismiss as unnecessary the question whether the soul and the body are one: it is as though we were to ask whether the wax and its shape are one.’2 Aristotle’s teacher Plato had thought, on the contrary, that the mind was composed of a different kind of substance from the body, and philosophers from St Thomas Aquinas to René Descartes inherited his vision. However, it is Aristotle who holds the floor today, with a view of the mind as a process rather than a separate object. The modern project of artificial intelligence research contends that this process is computation.

The first cinematic depiction of artificial intelligence was in Fritz Lang’s 1927 classic Metropolis. The film is set in a dystopian industrial society of the future and features a female android fashioned to resemble a workers’ leader. As the android’s face takes shape and its eyes open to reveal a perfect likeness, its proud creator remarks that ‘All she lacks is a soul’. Yet this disadvantage does not prevent the android from being taken for a normal woman. The lack of a soul would not bother AI researchers today, if only they could get one of their designs to hold a proper conversation with a person. The ability to be convincing in everyday chat is the condition of the ‘Turing Test’, proposed by the British mathematician and pioneer of the digital computer, Alan Turing, in 1950.3 In what he called the ‘imitation game’, an interrogator sits in a room connected to two other rooms via a terminal. In each of the other rooms is a test subject, one a human and the other a computer. The interrogator asks the human and the computer questions through his terminal, and if he cannot tell which is the artificial mind and which the ‘real’ mind, then it makes sense to count them both as real. Strictly speaking, it is not the computer that would be the mind – the mind would be the software program that runs on the computer. If all minds, including those of humans, function in this way, then human consciousness is what happens when a certain kind of software is running on the hardware of the brain.
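The shape of the protocol is simple enough to caricature in a few lines of code. The sketch below is only a cartoon: the two respondents, their canned answers and the interrogator’s guess are all stand-ins invented here, and only the sample questions are borrowed from Turing’s 1950 paper. What it shows is the essential point – the judge sees nothing but two anonymous streams of text.

```python
import random

# A cartoon of the imitation game. Both respondents and the interrogator's
# guess are invented stand-ins; only the questions come from Turing's paper.

def human_respondent(question: str) -> str:
    return "Count me out on this one. I never could write poetry."

def machine_respondent(question: str) -> str:
    if "Add" in question:
        return "105621"   # a deliberately fallible, human-sounding answer
    return "I would rather talk about the weather."

def imitation_game(questions):
    # Hide the respondents behind anonymous channels A and B.
    channels = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        channels = {"A": machine_respondent, "B": human_respondent}
    transcripts = {label: [(q, answer(q)) for q in questions]
                   for label, answer in channels.items()}
    guess = random.choice(["A", "B"])        # a placeholder interrogator
    return transcripts, guess, channels[guess] is machine_respondent

questions = ["Please write me a sonnet on the subject of the Forth Bridge.",
             "Add 34957 to 70764."]
transcripts, guess, caught_the_machine = imitation_game(questions)
print(guess, "-> machine" if caught_the_machine else "-> human")
```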

Turing himself believed that his test would be passed by the year 2000. However, the turn of the millennium has come and gone, and we are still waiting. AI researchers clearly have some explaining to do. They might try blaming Hollywood for raising our expectations. While the movies have given us armies of super-intelligent robots from C3PO to the Terminator, the technology of the real world has lagged far behind. Computer hardware may have been around for half a century now, but it has never been up to the task in hand. For most of computing history, researchers have had to contend with machinery that can barely reproduce the brainpower of an insect. Even if the hardware were up to the job, there would still be the problem of designing the right kind of software to produce humanlike intelligence. The brain would have faced a similar problem during its own evolution since, physiologically, our brains are the same today as they were thousands of years ago, before we developed religion, mathematics, art and literature.

Some philosophers and scientists believe that brain architecture and linguistic culture were mutually reinforcing and that this drove the evolution of human intelligence – just as happens in the computer industry, where new software applications require faster machines to run them better, which in turn make better applications possible. This process is beginning to pay dividends. If AI’s enthusiasts have been over-optimistic in the past, its critics have been too pessimistic. For example, the American philosopher Hubert Dreyfus once promised that a computer would never be able to beat him at chess, only for Richard Greenblatt’s ‘MacHack’ program to do so shortly afterwards in 1967. In his 1986 book Mind Over Machine, co-written with his brother Stuart, Dreyfus modestly explained that although computers could beat players at his level, it was unlikely that they would ever be able to overcome a true master. In 1997, however, IBM’s ‘Deep Blue’ defeated the reigning world champion Garry Kasparov, regarded by chess authorities as the best human player ever to have lived. Few would say that Deep Blue ‘thinks’ in the fullest sense of the word. It is a very fast but ultimately mindless number-crunching machine. However, as computers become more adept at chess, face-recognition and many other tasks that were once the sole province of human minds, to disqualify their achievements may begin to look more and more like a prejudice against silicon.

The human brain possesses a hundred billion neurons which process information at an estimated rate of between a hundred million and a hundred billion MIPS (millions of instructions per second). By contrast, the first Apple Macintosh, introduced in 1984, ran at around 0.5 MIPS – comparable with a bacterium. The best desktop machines today can manage a thousand MIPS, while the fastest supercomputer weighs in at ten million MIPS. As for memory, the brain’s one hundred trillion synapses are estimated to hold the equivalent of one hundred million gigabytes of information, whereas the humble 1984 Mac possessed a mere eighth of one megabyte.4 Over the past fifty years, however, advances in computing power have observed ‘Moore’s Law’ – proposed in 1965 by Gordon Moore, the cofounder of Intel – according to which the power of computer chips doubles every eighteen months to two years. If Moore’s Law continues to hold, then on conservative estimates computers will have reached a level of performance comparable to the human brain some time before 2019.5 In all likelihood, the main protagonists in the debate over whether a computer can have a mind will have an answer one way or another within their lifetimes.
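The arithmetic behind that projection is easy to set out. The sketch below uses the figures quoted above – ten million MIPS for the fastest supercomputer and a conservative hundred million MIPS for the brain – and simply counts doublings under Moore’s Law; the starting year is my own assumption, added purely to anchor the projection, not a figure from the text.

```python
import math

def years_to_parity(start_mips: float, target_mips: float,
                    doubling_years: float) -> float:
    """Years until capability reaches the target under steady doubling."""
    doublings_needed = math.log2(target_mips / start_mips)
    return doublings_needed * doubling_years

SUPERCOMPUTER_MIPS = 1e7   # "ten million MIPS", as quoted above
BRAIN_LOW_ESTIMATE = 1e8   # the conservative "hundred million MIPS" figure
START_YEAR = 2005          # assumed date for the supercomputer figure

for doubling in (1.5, 2.0):  # Moore's Law: every eighteen months to two years
    years = years_to_parity(SUPERCOMPUTER_MIPS, BRAIN_LOW_ESTIMATE, doubling)
    print(f"doubling every {doubling} years -> parity around "
          f"{START_YEAR + years:.0f}")
```

On these assumptions parity arrives within roughly five to seven years of the starting date – comfortably inside the ‘before 2019’ window, which is why the estimate is described as conservative.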

John Searle is certain that he already has the answer, for he believes that no computer could possess a mind even if it passed the Turing Test. I met Searle in the place he describes as ‘paradise’ – the neo-Roman idyll of the University of California, Berkeley, where he has worked since 1959. Searle has a well-earned reputation as a philosophical bruiser. With a dense, compact frame, he swaggered backwards and forwards across his office with hips thrust out, emphasizing each point with a jab of his fist. He produces a stream of intricate logical reasoning to attack his bugbears, but really comes to life when loudly denouncing them as ‘bullshit’ at the beginning and end of each diatribe. He is exhilarating company, which is no doubt why he receives repeat invitations from such social luminaries as the Getty family. As a young assistant professor at Berkeley, Searle became involved in the Free Speech Movement, joining a student protest in 1964. His interest was prompted only because the university authorities had banned him from delivering a speech criticizing McCarthyism. For the rest of the time he found the constant protests an annoying disruption to his philosophy classes, and he dismissed the political left as evil and the right as stupid. When the students succeeded in overthrowing the Berkeley authorities, Searle joined the new leadership for a while until they found he was resistant to socialist dogma and he tired of making enemies. However, he has never tired of making enemies within his own field. Among them are the late French thinker Jacques Derrida – in the 1970s they clashed over an abstruse interpretation of the English philosopher J. L. Austin’s work – and Daniel Dennett, with whom he conducted a feud that sprawled across the pages of the New York Review of Books. Readers were treated to a volley of letters in which the two philosophers progressively questioned each other’s integrity, sanity, hearing and eyesight.

Searle’s most famous contribution to philosophy is the Chinese Room thought experiment.6 This has been so influential that the computer scientist Patrick Hayes once defined cognitive science as ‘the ongoing research program of showing Searle’s Chinese Room Argument to be false’.7 Searle imagined a native English speaker locked in a room with a number of boxes containing Chinese characters. People outside the room can pass questions to him by posting a string of Chinese characters through the letter box. The man also possesses a very long instruction manual containing tables that enable him to cross-reference the characters and post back the correct answers by using the symbols stacked in the boxes. By this means he could conduct a conversation, albeit an extremely slow one, with a native Chinese speaker. However, it is clear that the man locked in the room does not understand Chinese. He does not know what the symbols stand for and cannot understand the questions he is asked, nor the answers he gives. His activity consists not in intelligently conversing, but in mindlessly manipulating a database of symbols according to a program of rules. Since this process is essentially what AI programs use to interact with questioners, it follows that computers do not understand English no matter how quickly and efficiently they search through their databases and produce their outputs.
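Stripped to its bones, the man’s procedure is nothing more exotic than a table lookup. The sketch below is a toy version of the room: the ‘rule book’ is a dictionary of my own invention mapping a couple of Chinese strings to scripted replies, but the crucial feature survives – no step in the procedure consults what any of the characters mean.

```python
# A toy Chinese Room. The rule book is an invented stand-in for the manual:
# it maps incoming strings of characters to outgoing ones, and nothing in
# the procedure requires knowing what any character stands for.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # a question and its scripted reply
    "今天天气怎么样？": "今天天气很好。",
}

def man_in_the_room(slip_of_paper: str) -> str:
    # Match the incoming string against the manual and copy out the answer.
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # default: 'please repeat'

print(man_in_the_room("你好吗？"))
```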

One response known as the ‘Systems Reply’ is that the involvement of the man is a red herring. The man may not understand Chinese, but the room taken as a whole does. The man is merely the implementer of the rules, akin to a computer’s Central Processing Unit (CPU). However, Searle points out that the man could memorize the contents of the boxes and the instruction manual so that the entire system is within him and he would still not have any idea what his words are about. Another approach is the ‘Robot Reply’, which locates the problem in the room’s isolation. If a computer were fitted inside a robot body and sent out into the world with microphones and video cameras to act as ‘senses’, the machine would come to truly understand the language it spoke by coming into contact with the objects to which it had been unwittingly referring. Searle points out that the data received from the cameras and microphones would be fed to the CPU in the form of numerals – in other words, we would be giving the already overworked machine another set of symbols to manipulate. According to Searle, the mere syntax of symbols can never pull itself up by its own bootstraps and climb into the semantics of thought – symbols cannot interpret themselves. And it would be no good including a definition of what each symbol is supposed to denote in the programming code – as these would be couched in yet more symbols. Computers are devices that manipulate symbols according to rules. What the symbols stand for does not matter, so long as the rules are followed and each input results in the appropriate output. The interpretation of those outputs is performed by the computer’s users rather than the machine itself. It is not that a machine per se cannot think – the brain is a machine, Searle attests, and brains can think – but that thinking is not a case of mindless symbol manipulation. Something needs to breathe life into the symbols to give them meaning and, to Searle, this component is the consciousness ‘generated’ by biological brains.

Paul and Patricia Churchland, a husband-and-wife team of philosophers, object that Searle has no right to claim that semantics cannot grow out of syntax, that meaning cannot be achieved from the bottom up, as this is an empirical matter to be settled by scientific study rather than armchair speculation. In their offices high up on the futuristic campus of the University of California, San Diego, they assured me that Searle’s position represents a failure of imagination similar to that of the poets William Blake and Johann Wolfgang von Goethe, who found it inconceivable that the small particles we now call ‘photons’ might be responsible for light. One could argue that the essential property of light is luminance, while electricity and magnetism are forces, and since forces by themselves are not constitutive of luminance, electricity and magnetism cannot be sufficient for light. One would, of course, be wrong, though the argument seemed eminently reasonable before we understood the parallels between the properties of light and those of electromagnetic waves. In 1864, the physicist James Clerk Maxwell suggested that light and electromagnetic waves were identical. Maxwell’s proposal prompted the Churchlands to imagine a nineteenth-century version of Searle’s experiment, which they call the ‘Luminous Room’:

We all know that such a room would be pitch-black inside, but this is because the frequency of the magnet’s oscillations would be too low – by a factor of 10 to the power 15. The wavelength of the electromagnetic waves produced would be far too long and their energy too weak for human eyes to detect them. This, however, is a matter of quantities rather than qualities. If the frequency of the oscillations were increased sufficiently, there would come a point at which the room would be illuminated. Similarly, although Searle’s Chinese Room might not understand Chinese, this need not mean that no room operating on the same principles could ever understand Chinese.
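The ‘factor of 10 to the power 15’ can be checked on the back of an envelope. In the sketch below the waving rate is my own assumption – roughly one shake of the magnet per second – while the speed of light and the frequency of visible light are standard figures; the shortfall comes out at a few times 10 to the power 14, the order of magnitude the Churchlands have in mind.

```python
# Rough numbers behind the Luminous Room. The hand-waving frequency is an
# assumption; the speed of light and the visible band are standard physics.

SPEED_OF_LIGHT = 3.0e8          # metres per second
HAND_WAVED_MAGNET_HZ = 1.0      # assumed: about one oscillation per second
VISIBLE_LIGHT_HZ = 5.0e14       # mid-visible (green) light

wavelength_m = SPEED_OF_LIGHT / HAND_WAVED_MAGNET_HZ
shortfall = VISIBLE_LIGHT_HZ / HAND_WAVED_MAGNET_HZ

print(f"wavelength of the waves the magnet makes: {wavelength_m:.0e} m")
print(f"frequency shortfall: a factor of about {shortfall:.0e}")
```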

According to Searle, a functioning Chinese Room simulates the understanding of Chinese without understanding the essence of the language. However, it is strange to divide the essence of a skill from the ability to employ it successfully. It certainly wouldn’t make sense the other way around. You could not be said to understand the essence of Chinese if you could not speak it. Perhaps if the mechanism in the room were to work much faster and were given enough contact with the world, it would inevitably gain a foothold in the meaning of its symbols. Words are able to mean what they do because they are placed in constant relationships with actions and objects. Computer syntax acquires meaning because we tie certain symbols into constant causal relationships with the world via programming and the engineering of peripherals. If an outbreak of food poisoning suddenly wiped out the staff of a road-monitoring centre, the centre’s traffic cameras and computers would continue to represent the road network, at least until the power ran out, because the same causal relationships between the monitors and the monitored would continue to hold.

However, it is no use expecting the causal relationships of computer programs to furnish their symbols with meaning if, as Searle also alleges, they do not possess the right kind of causal powers to be conscious. Searle believes that computer simulations of the brain processes that produce consciousness stand in relation to real consciousness as a computer simulation of digestion stands in relation to real digestion. ‘It’s just bullshit,’ he scoffed, ‘because a simulated stomach can’t actually digest anything. A simulated mind can’t understand anything. No one expects that you could stuff a pizza into a perfect computer simulation of the digestive process.’ However, other kinds of simulation seem to give us what we want. When we call up the calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don’t complain that ‘it isn’t really a calculator’, because the physical attributes of the device do not matter. Here, a simulation is as good as the real thing. What is essential are the results that each incarnation produces when you punch in the numbers, which are the same in both cases. The question, then, is whether the important ‘results’ of consciousness are a matter of abstraction or embodiment. You could even say it is the entire question, and one that cannot be settled by stipulating that the results must be conceived in biological brains. For artists, acrylic paint simulates the effect of oil colours. A retinal scanner simulates the effect of a key. What matters is that the door opens, not that the tool is made of carved nickel alloy. If a pocket calculator does real calculation because the results of its operations – numbers on a screen – have an equivalence with or are interchangeable with the numbers written on a mathematician’s blackboard, the question is whether the products of consciousness – words and expressions – are more like these numbers than slices of pizza.

Searle believes that there is something special about brain tissue that enables it to generate consciousness. It is the biological function of the brain to produce thoughts just as it is the function of the heart to pump blood or the lungs to breathe air. However, this seems to talk of consciousness as a kind of ‘brain glow’, a discrete physical effect, and it is far from clear that it can be such a thing. Searle is a materialist, and agrees that the brain is a machine – that consciousness is the result of a machine process. It’s just that a computer, he believes, is not the appropriate kind of machine with which to draw an analogy. Minds, he argues, are part of nature, but computers exist only because people regard certain objects as computers. ‘Nothing is a computer in itself,’ he said. ‘Something only becomes a computer when it is used to compute something by a conscious agent.’ He insisted that a computational interpretation could be put on anything: ‘Any mundane object such as a window can be a computer – representing a one if open and a zero if closed.’ Searle was returning to the point that semantics – meaning, or ‘aboutness’ – cannot be derived from the bottom up. And, because non-trivial computation is a higher-order function of an object, there are no natural computers – nothing is, in itself, a computer before it is used as one, nor can it be its own user. Daniel Dennett’s response is that brains will in fact have to be their own users, because no one else is going to do it for them. Brains succeed because they are the products of natural selection. Evolution has shaped the human brain so that its neuronal patterns react to inputs from our environment and cause a rational response to them. Likewise, the symbols manipulated inside a computer acquire their significance, their ability to do a job, by the constant relations they have with other objects outside the machine. In the case of today’s computers, these objects include printers, scanners and missile tracking systems. In the future the objects may include other language users. Dennett parodies Searle’s position as claiming, in effect, that ‘Airplane wings are really for flying, but eagles’ wings are not.’9 In a sense this is true, since no one specifically designed eagles’ wings to enable them to fly. However, this does not allow us to deny that eagles fly, or that we can make sense of what they do as flying.

Philosophers who believe full AI to be possible concede that pocket calculators lack consciousness. Such ‘one-track’ minds are not minds at all. The mind exists, they think, when thousands of similar activities are all taking place in the computer. However, the notion of bolting on more competencies soon runs into a problem. An ordinary computer works with a serial processor – a bottleneck through which all its operations are squeezed. So at any one time it is performing a single operation, a single activity. If minds are possible in these circumstances then they are even stranger than we thought. It implies that a mind can be considered to have an existence when viewed over an expanse of time, even though it does not exist in any single moment taken by itself. One can also ask about the upper limit to this timescale. On computationalist principles alone, there is no reason why a computer should not take, say, ten billion years to complete the operations equivalent to a human adult remarking on the weather. To insist on more prompt answers would seem arbitrary. The strangeness of this thought leads one to consider an alternative form of computation.

This is the answer favoured by Searle’s colleague at Berkeley – the chess-playing critic of AI, Hubert Dreyfus. Professor Dreyfus is famous in the philosophical world for disseminating the thought of the German existentialist philosopher Martin Heidegger. He is also known, and envied, for making a fortune working as a philosophical consultant to Fernando Flores, the economic whizz-kid whom President Salvador Allende of Chile made finance minister at the age of twenty-nine, in his country’s short-lived socialist government. Following three years in prison under General Augusto Pinochet, Flores studied under Dreyfus before starting a successful software company. With his ascetic demeanour, pale clothes and lean build, Dreyfus does not look as though he were performing this role for the money, but he does allow himself the luxury of zipping around campus in a vintage Volkswagen Karmann Ghia. Of all the philosophers I talked to during the research for this book, Dreyfus was the only one who seemed to be thinking about each question anew even when it concerned issues he had long ago left behind. One of these was artificial intelligence. ‘I don’t think about computers any more,’ he told me. ‘I figure I won and it’s over – they’ve given up.’

Dreyfus’s ‘winning’ argument is that human minds and computers work in very different ways. For example, a human chess player does not decide on the best move by running through thousands of positions and examining the consequences of every possible action. Instead, it might take a grand master only a few seconds to decide what to do using intuition. The grand master is using a highly developed form of common sense to rule out certain moves at the outset. He knows intuitively that certain approaches will be suicidal and does not bother to process them at all. The computer by contrast – even Deep Blue, the conqueror of Garry Kasparov – knows nothing of the sort and laboriously grinds out the consequences of moves that are ‘obviously’ stupid to a human player. There needs to be a way of determining what is relevant to apply the rules to, but a computer is incapable of this kind of discrimination. Common sense requires the ability to make generalizations, and such judgements are made by looking at a situation holistically rather than by applying a set of rules to one piece of information at a time. It is the difference between knowledge and ‘know-how’. ‘Computers, you see,’ explained Dreyfus, ‘possess only representations and rules for manipulating those representations, and these things simply are not involved in know-how.’ It should be noted that computers’ lack of common sense was one of the reasons why critics originally doubted that they would ever be able to play chess at grand-master level. But at least chess is a limited, self-contained domain with a finite number of possible moves to consider. If a computer had to face life in the everyday world, there would be too many options to process every single one.
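The scale of the problem Dreyfus is pointing at can be made vivid with a little counting. The sketch below treats chess abstractly: every position is assumed to offer about thirty-five legal moves, and the ‘relevance filter’ that keeps only three of them is a crude stand-in for the grand master’s intuition. The numbers are illustrative, but the gulf between the two counts is the point.

```python
from typing import Optional

def positions_examined(branching: int, depth: int,
                       keep: Optional[int] = None) -> int:
    """Count positions visited to the given depth.

    With keep=None every legal move is expanded (brute force); otherwise
    only `keep` moves per position are pursued (a crude stand-in for
    the grand master's sense of what is worth considering)."""
    width = branching if keep is None else min(keep, branching)
    total, frontier = 0, 1
    for _ in range(depth):
        frontier *= width
        total += frontier
    return total

BRANCHING = 35   # a commonly quoted average number of legal moves in chess
DEPTH = 6        # three moves by each side

print("brute force:          ", positions_examined(BRANCHING, DEPTH))
print("with relevance filter:", positions_examined(BRANCHING, DEPTH, keep=3))
```

Six half-moves of brute force already means well over a billion positions; the filtered search looks at barely a thousand.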

Dreyfus agrees that, in one sense, the brain is a computer, ‘in that it’s a piece of meat that comes up with answers, but more interesting is whether it follows rules, whether the brain’s processing is holistic or atomistic’. By this, Dreyfus means whether the brain works atomistically like a digital computer processing strings of code according to rules, or holistically like an analogue device that learns to recognize patterns. The latter model is known variously as ‘Parallel Distributed Processing’ (PDP) and ‘connectionism’. Unlike a PC, the brain does not have a CPU where every calculation is made. The brain processes information on the basis of millions of smaller units working in parallel. Each processing unit, or neuron, is arranged in a network of weighted connections that maps various pieces of information. This mapping can be fine-tuned by adjusting the weighting of the connections between neurons until the appropriate outputs are achieved. The chips inside digital computers work much faster than neurons, but because many interconnected neurons tackle the same problem simultaneously, the brain can achieve a greater speed in certain tasks. Such a ‘parallel’ processing architecture is more efficient at dealing with extremely multifarious inputs, such as recognizing the outline of a lion on the horizon. It is also faster at recalling how to avoid the beast, since this information is distributed across the system in the relative strengths of the connections between millions of neurons. Given the quantity of information that a brain has to store, laboriously forcing the entire catalogue through the bottleneck of a CPU to retrieve the relevant data would mean that the lion would be upon us before we remembered that we were supposed to climb a tree. For other tasks that require many transformations of a limited set of inputs – for example, a long multiplication sum – serial processors such as a desktop PC perform better than brains. The brain can, of course, perform these functions too, but they are only one of its many talents rather than its general mode of operation.
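A miniature example of that ‘fine-tuning’ is easy to write down. The toy unit below has two weighted input connections and a threshold; the task (learning the logical ‘or’ of its inputs), the learning rate and every starting number are illustrative choices of mine, but the procedure – nudge each connection’s weight whenever the output comes out wrong – is the connectionist idea in its simplest form.

```python
import random

# A toy connectionist unit: weighted connections are nudged until the
# outputs come out right. The task (logical OR) and all numbers are
# illustrative; a real network has millions of such units in parallel.

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
LEARNING_RATE = 0.1

# Input patterns and the outputs we want the unit to settle on.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def output(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

for _ in range(20):                      # repeated passes over the examples
    for inputs, target in examples:
        error = target - output(inputs)  # how far off the unit was
        for i, x in enumerate(inputs):   # adjust each connection's weight
            weights[i] += LEARNING_RATE * error * x
        bias += LEARNING_RATE * error

print([output(inputs) for inputs, _ in examples])  # expect [0, 1, 1, 1]
```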

The superior virtues of parallel processing architectures do not matter to the philosophical question in hand. Parallel processing may be better if you want to avoid hungry lions, but here we are only interested in the ability to think at all, not the ability to think about certain sorts of things quickly and efficiently. It may be possible to separate thinking per se from thinking in the way that humans do, and digital computers might be able to think without the human mind serving as the example. Dennett, however, counsels us not to be too impressed by the differences, for ‘at the heart of the most volatile pattern-recognition system (whether connectionist or not) lies a [serial] engine, chugging along, computing a computable function’.10 Each individual neuron amounts to a tiny, number-crunching robot. Dennett points out further that a parallel processor can be simulated on a serial machine by calculating one connection at a time and agglomerating the results. The same tasks can be performed in this way, albeit much more slowly and one at a time. This leads to a quandary when we consider our own minds, for although we think with the speed that parallel processing allows, our thoughts are not a cacophony of competing voices. Dennett dubs the human mind a ‘Joycean machine’, as he believes that its parallel circuits produce a ‘virtual’, serial processor better known as the stream of consciousness.
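Dennett’s point about serial simulation can be put in a few lines. The little layer of units below is imaginary – the weights and inputs are made-up numbers – but the loop shows what ‘calculating one connection at a time and agglomerating the results’ comes to: a strictly serial procedure ends up computing exactly what the parallel layer would have computed all at once, only one step after another.

```python
# A serial simulation of a 'parallel' layer: the weights and inputs are
# made-up numbers, and each connection is handled one at a time.

inputs = [0.2, 0.9, 0.4]                  # activations feeding the layer
weights = [[0.5, -0.3, 0.8],              # one row of weights per unit
           [0.1, 0.7, -0.2]]

def serial_update(inputs, weights):
    """Compute each unit's activation strictly one connection at a time."""
    activations = []
    for unit_weights in weights:              # visit the units in turn
        total = 0.0
        for w, x in zip(unit_weights, inputs):  # one connection per step
            total += w * x
        activations.append(max(0.0, total))  # a simple threshold-style output
    return activations

print(serial_update(inputs, weights))
```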

The philosopher Jerry Fodor refers to the computational, or ‘number-crunching’, theory of mind as ‘the only game in town’. When I spoke to him in New York, he told me:

Right now, we are sure that computation is the model for the mind. But 500 years from now, when real progress has been made, it’s wildly unlikely that our descendants’ favoured theory is going to look anything like anything that we can imagine today. The best approximation we’ve got now is the computational system, but what I take for granted is that if God told you the way the mind worked, you wouldn’t understand it. You wouldn’t be able to read the last chapter. This is why I don’t think it’s particularly important to have a popular view.

According to the futurist Nick Bostrom, Fodor is wrong that future minds will shy away from the computer model, because they are likely to be computers themselves. But this is also why Fodor is right that we would be unable to fully understand the final chapter on the mind – because it will be written by artificial minds with intellectual faculties far greater than our own. This would have severe consequences for human self-esteem, or at least the self-esteem of philosophers of mind. From Fritz Lang’s Metropolis to The Matrix of the Wachowski brothers, the movie industry has shown humans enslaved by their robotic creations. The real fear should be that such machines would render some of us superfluous. Certain philosophers are less worried by this prospect than others. John Searle writes:

Searle does not seem the type to worry about anything unduly, but his vision is hopeful to say the least. If the steel robots were intelligent individuals that passed the Turing Test and formed their own professional league, then human American football might stand in relation to it as the amateur game stands to the NFL today. And if artificial intelligences rather than human beings were making the scientific advances, writing the best novels and designing ever more advanced versions of themselves, then we would become as children or pets, left to amuse ourselves while superior beings did the important work.

It might be best for philosophers to stop arguing about the issue of artificial intelligence and simply wait for science to succeed or fail in producing machines that can hold a conversation. In the current literature, philosophy has two chief roles: first, to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible. The answer to the second is to wait and see, and then the answer to the first will be irrelevant. If talking machines are ever invented, it is likely that there will be no more argument about whether they are merely simulating conversation than there is today about whether pocket calculators merely simulate calculation. Calculators ‘do the job’, and a talking machine would ‘do the job’ of talking. There may not be a person attached to the mouthpiece, but all this means is that talking is the only thing the machine does, in which case, to achieve parity between man and machine we need only make sure the latter can do all our other jobs. One could object that the computer simulates lots of different small tasks, like chess-playing, arithmetic-solving and – hopefully, one day – novel-writing and joke-telling, but nowhere does it actually simulate a mind as opposed to one of the mind’s functions. However, we could ask whether there is anything to a mind once its functions are taken away, any more than there is something to a chair once its seat, legs and backrest are taken away. This question is the subject of the next chapter.