6. ROBOT NEW MAN

The benchmark for Artificial Intelligence (AI) is the famous Turing Test. Alan Turing’s 1950 thought experiment states that if a robot can convince you that you’re talking to another human being, then that robot can be said to have passed the Turing Test, thereby proving that there is nothing special about the human brain that a sufficiently powerful computer couldn’t do just as well.

Except the Turing Test proves no such thing. All it proves is that humans can be tricked, but everyone knew that already … except Alan Turing, alas, who in the last week of his life – and this is a true story – went to a funfair fortune-teller on Blackpool promenade. Nobody knows what the Gypsy Queen told him, but he emerged from her tent white as a sheet and killed himself two days later. But funfairs have had centuries of practice in the art of tricking punters.

Weirdly, a funfair nearly did for Isaac Newton. In a posthumous biographical sketch, his friend John Wickens says that when they went to Sturbridge County Fair, Newton had a complete meltdown, and was close to jettisoning his whole theory of how gravity acts on every object in the universe, after what Wickens describes as: ‘a frustrating hour at the coconut shy’.

In an interview with The Times about Artificial Intelligence, Brian Cox said:

There is nothing special about human brains. They operate according to the laws of physics. With a sufficiently complex computer, I don’t see any reason why you couldn’t build AI. We’ll soon have robot co-workers, the difference is we’ll even be taking them to the office party.

I wrote a letter to The Times. They didn’t print it. I don’t know why. It was quite short. It just said: ‘No we fucking won’t’.

Emotional robots are a vision of the future to be found in the Gypsy Queen’s crystal ball but not in science. Not least because of these two uncontroversial scientific facts:

1. We are not machines, we are animals.

2. No experiment performed by anyone anywhere in the whole world at any time has found a shred of evidence to suggest the remotest possibility that a ‘sufficiently complex computer’ will ever be able to do literally the first thing that a mammalian brain does: experience emotion.

We came crying hither.
Thou know’st the first time that we smell the air
We wawl and cry …

But to listen to AI cultists you’d think we were knee-deep in this sort of evidence. According to Radio 4’s Inside Science programme, for example, we’ll soon have robot lawyers.

A senior IBM executive explained to Inside Science listeners that while robots can’t do the fiddly manual jobs of gardeners or janitors, they can easily do all that lawyers do, and will soon make human lawyers redundant.

Interestingly, however, when IBM Vice President Bob Moffat was himself on trial in the Manhattan Federal Court, accused in 2010 of the largest hedge-fund insider trading in history, he hired one of those old-time humanoid defence attorneys. A robot lawyer may have saved him from being found guilty of two counts of conspiracy and fraud, but when push came to shove, the IBM VP knew there’s no justice in automated law.

Not all the gigabytes in the world will ever make a set of algorithms a fair trial. There can be no justice in the broad sense without procedural justice in the narrow sense. Even if the outcome of a jury trial is identical to the outcome of an automated trial, due process leaves one verdict just and the other unjust. Justice entails being judged by flesh and blood citizens in a fair process. Not least because victims increasingly demand that the court consider their psychological and emotional suffering – which computers cannot do.

There’s a curious contradiction here that nobody ever talks about: at the same time as science proclaims its moral neutrality, proponents of AI want machines to become moral agents. Never more so than with what Nature has taken to calling ‘ethical robots’.

Ethical robots it seems will come as standard fittings on the driverless cars being developed by Apple, Google and Daimler. They will answer the big questions, automatically …

Should driverless cars be programmed to mount the pavement to avoid a head-on collision? Should they swerve to hit one person in order to avoid hitting two? Two instead of four? Four instead of a lorry full of hazardous chemicals? This is what the ‘ethical robot’ fitted into each driverless car will decide. How will it decide? In July 2015, Nature published an article, ‘The Robot’s Dilemma’, which explained how computer scientists:

have written a logic program that can successfully make a decision … which takes into account whether the harm caused is the intended result of the action or simply necessary to it.

Is the phrase ‘simply necessary’ chilling enough for you?
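For readers who want to see just how bloodless the arithmetic is, the kind of rule the Nature article describes can be caricatured in a few lines of Python. This is a toy sketch with invented names and numbers, not the researchers’ actual logic program:

```python
# A toy caricature of the rule described in 'The Robot's Dilemma':
# count the harm each option causes, and check whether that harm is the
# intended result of the action or 'simply necessary' to it.
# All names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_caused: int        # number of people harmed
    harm_is_intended: bool  # is the harm the goal, or merely a side effect?

def permissible(action, alternatives):
    """Permit an action only if its harm is unintended and no
    alternative on offer causes less harm."""
    if action.harm_is_intended:
        return False
    least_harm = min(a.harm_caused for a in alternatives + [action])
    return action.harm_caused <= least_harm

swerve = Action('swerve onto the pavement', harm_caused=1, harm_is_intended=False)
plough_on = Action('continue ahead', harm_caused=2, harm_is_intended=False)

print(permissible(swerve, [plough_on]))    # True: fewer harmed, harm not intended
print(permissible(plough_on, [swerve]))    # False: an alternative harms fewer
```

However sophisticated the real program may be, the shape of the ‘decision’ is the same: a comparison of integers.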

One of the computer scientists behind this logic program argues that human ethical choices are made in a similar way: ‘Logic’, he says, ‘is how we … come up with our ethical choices.’

But this can scarcely be true. For good or ill, ethical choices often fly in the face of logic. They may come from gut instinct, natural cussedness, a desire to show off, a vague inkling, a shudder, a sense of unease, or a sudden imaginative insight.

I am marching through North Carolina with the Union Army, utterly convinced that only military victory over the Confederacy will abolish the hateful institution of slavery. But I no sooner see the face of the enemy – a scrawny, shoeless seventeen-year old farm boy – than I throw away my gun and run sobbing from the battlefield. This is an ethical decision resulting in decisive action, only it isn’t made in cold blood, and it goes against the logic of my position.

Computer scientists writing the logic program for an ethical robot may appear as modern as modern can be, but their arguments come from the 1700s. The idea that ethics are logical appeals to what – in another context – Hilary Putnam describes as:

the comfortable eighteenth century assumption that all intelligent and well-informed people who mastered the art of thinking about human actions and problems impartially would feel the appropriate ‘sentiments’ of approval and disapproval in the same circumstances unless there was something wrong with their personal constitution.*

* Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays, 2002.

The thinking may be strictly 1700s, but the technology isn’t. The US Department of Defense is at work on tiny rotorcraft known as FLACs (Fast Lightweight Autonomous Crafts) that will be able to go inside flats and houses, office blocks and restaurants and deliver a one-gram explosive charge to puncture the cranium. These FLACs are a type of Lethal Autonomous Weapons System (LAWS). If drones weren’t bad enough, LAWS are on a whole new level. With drones, a human always makes the decision whether to kill, from however far away. But LAWS are a break with tradition. They are fully autonomous.

According to Stuart Russell, professor of computer science at University of California at Berkeley, this means allowing machines to choose whom to kill – for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’… ‘one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.’*

* Stuart Russell, ‘Take a stand on AI weapons’, Nature, 2015.

Presumably, LAWS just go through the kill list on the drop-down menu until their batteries run out.

If the US Department of Defense wants LAWS to be a ‘successful’ weapon system, its ethical data entry had better exclude the International Covenant on Civil and Political Rights (1966), with its provisions on the security of person, procedural fairness and the rights of the accused. Or else the drone might turn tail and direct its fire at those who gave wings to its eternal mission of unlimited extra-judicial killing.

Delegating ethics to robots is unethical not just because what robots do isn’t ethics but binary code, but because no logic program could ever predict the incalculable contingencies and shifting subtleties and complexities entailed in even the simplest case to be put before a judge and jury. By its very nature justice cannot be impersonal and still be just. ‘Use every man after his desert’, Hamlet snaps at Polonius, ‘and who shall ’scape whipping?’

Uploadable You

The fact that ethical choices may be prompted as much by a twisting of the guts or a trembling in the arms as by logic may be what leads some to wish to escape our bodies altogether by means of a parascientific vision called ‘uploading consciousness’.

‘With powerful enough computers simulating the interactions in our brains,’ argues David Eagleman in The Brain: The Story of You, ‘we could upload. We could exist digitally by running ourselves as a simulation, escaping the biological wetware from which we’ve arisen, becoming non-biological beings.’

Uploading consciousness sounds futuristic but it speaks to those parts of Christianity heavily influenced by Plato and Pythagoras. Beyond bodily sin and earthly decay lies a shimmering non-physical realm to which the enlightened may be uplifted. But to the girdle do the Gods inherit, but from the neck does the software upload. The clergy have been trying to make us non-biological beings for thousands of years. It hasn’t worked because their Platonic dream of a disembodied intellect is biologically impossible. We should know this by now. Unlike the ancient Greeks who were forbidden to do autopsies, we have been dissecting human cadavers for millennia, long enough to know that there is no brain without a body.

Even if I am paralysed from the neck down, I still have complex biochemical feedbacks looping between head and heart, hormones and hypothalamus, guts and glands. Even in my tetraplegic state, I am not just a head on a pillow. I am a body on a bed, and the cure for my bedsores is not beheading.

Hormones from the endocrine system reshape our neural pathways. A good half of the endocrine system is found below the neck in the thymus, adrenal glands, pancreas, testes or ovaries. This makes it incoherent to talk about the brain escaping the body. You can’t have one without the other.

In the nineteenth century the great biologist Thomas Henry Huxley reflected on how many a beautiful hypothesis is slain by an ugly fact. Those were the days. What has been happening lately is the strange phenomenon of the beautiful hypothesis enjoying more popularity after being slain by the ugly fact than before.

The discovery that bodily hormones shape neural pathways in teenage brains should have left fantasies about simulating the human brain as obsolete as other monuments to old-time sci-fi such as derelict monorails, and yet these fantasies persist.

Claims that humans can transcend biology proceed as if the last few years had seen not the confirmation but the falsification of neuroscientific theories about, for example, the influence of the endocrine system on the brain, of complex environments on synaptogenesis, and of maternal enrichment on foetal brains. Any or all of these ugly facts (in the Huxleyan sense) should have put paid to ascetic fantasies of uploading consciousness and ‘escaping the biological wetware from which we have arisen’. That they did not tells us something. To steal a phrase from the great geochemist Vladimir Vernadsky (1863–1945), it tells us that uploading consciousness is a ‘political idea not a scientific one’.

Or possibly a religious one, as in Ray Kurzweil’s The Singularity Is Near: When Humans Transcend Biology. The Singularity, says Kurzweil, is when non-biological intelligence takes over. People will transcend base flesh and discover that they can assume an utterly new ‘physical manifestation at will’.

The Singularity strikingly mirrors what fundamentalist Christians call The Rapture. Seven years before the Second Coming of Jesus Christ, true believers will be transfigured from their base biological selves into pure spiritual beings, ascending into the air. While unbelievers suffer the ‘time of tribulation’, the Rapture lifts true followers of Christ away from the carnality of the flesh and mortality.

The Singularity also shares with the Rapture the troublesome necessity of always pushing back the date of when it is supposed to happen. The Rapture’s currently predicted date is 2019. (An earlier, much-trumpeted ETA of 2011 didn’t come off as hoped.) Kurzweil originally slated 2029 as the year of the Singularity, but has since revised the date to 2045. (‘No matter how much they dig up the pavement, Ray, the broadband don’t get no faster, innit?’)

Another reason that AI fantasies are able to withstand so many of T. H. Huxley’s ‘ugly facts’ is that those ugly facts come from outside physics and are therefore beneath attention. Ernest Rutherford’s boorish remark that ‘there is physics and there is stamp-collecting’ has an unstated corollary, which is that the evidential standards of other disciplines are not worth bothering about. This being so, physicists need not trouble to find evidence to support any claims they feel like making about anything whatsoever outside the sanctum sanctorum of physics, such as biology, ethology or philosophy.

When Stephen Hawking says ‘philosophy has not kept up with science’ he doesn’t provide any evidence, but then he probably feels he doesn’t need to. It’s not as if he is talking about anything important like the different grades of space gravel to be found on different planets.

To support his claim, Hawking might cite a drop in publications dealing with the philosophy of science. Or he might produce a graph showing a decline in philosophers and scientists submitting jointly authored papers to peer-reviewed journals. Evidence that philosophy departments are dropping science modules would be nice. Any or all of these would give some intellectual rigour to his allegation, but why bother? No-one expects any kind of rigour when it’s only stamp-collecting we’re talking about.

The great irony of scientism, however, is that even as its proponents declare that philosophy has nothing to teach science, their every word is saturated with philosophical assumptions, often hoary ones. That’s why they stay true to metaphors based on ideas of nature now known to be false, such as the idea that organisms are just wet machines, which we will look at in a moment; but first I want to examine its equally fallacious corollary: that algorithms can do everything that lifeguards do.

What do lifeguards do all day?

In Homo Deus, Yuval Noah Harari imagines a future in which ‘algorithms push humans out of the job market’. It’s not just lawyers that are going to be replaced by robots; chefs, waiters, security guards and lifeguards are, it seems, on the way out, too.

Harari cites a couple of academic papers that give statistical predictions about the future of employment. There will be an 84 per cent drop in the number of security guards, apparently. Automation will also result in a 74 per cent decline in lifeguards.

But what does Harari imagine lifeguards do all day? Dive into the water and fetch people out?

The lifeguards at my local swimming pond are diplomats, care assistants, swimming instructors, cleaners, caretakers, venue managers, goodwill ambassadors and bouncers. Each and every one of these functions is vital to the smooth running of the pond. Not a single one of them could be done by a robot.

Their diplomacy, for example, takes many forms. Sometimes it involves finding a polite but firm way to tell a man that he is too drunk to swim. Tact is also required when telling parents that their child is too inexperienced a swimmer to swim in deep water on her own. As likely as not the parents will blow their top. Their whole plan for the day has been ruined. On top of which they are being told that they do not know their own child’s swimming ability.

As care-assistants the lifeguards gently guide the eighty-seven-year-old regular to the water, holding her hand as they walk her along the slippery promontory to the pond’s edge, providing a much-needed human touch to her day. They also help a one-legged man retain his dignity by helping him back up the steps after his swim, but so unobtrusively that it looks like he got out of the water without any assistance at all.

My reason for going into such detail about a lifeguard’s duties is to highlight the way in which arguments that robots will replace people in the workplace are based on incredibly simplistic fantasies about what people do all day. Rather than hands-on experience of what the job entails, these fantasies have the downsizer’s high-handedness. The lifeguard pulls people from the water. A robot could do that. A lawyer recites precedent to a judge. A robot could do that. An AI enthusiast looks at a complex human picture but is only able to assimilate a meagre 3 per cent of what is happening. Now there’s something a robot really could do!

Of course, there may well be a 74 per cent cut in lifeguards, but this will have nothing whatsoever to do with drones called Kingfishers that dive into the water and snatch out swimmers who have either got into difficulty or who, like me, have a flailing swimming style that makes it look as if they have. The 74 per cent drop in lifeguards will not be because of automation, and only indirectly because of algorithms. Councils are cutting back on public services to offset the catastrophic losses entailed by the mighty algorithms of a collapsed financial system.

Slaves to the algorithm

There is a touching naivety to so many of the fantasy scenarios of futurologists. In Homo Deus, for example, Harari sketches a futuristic dystopia that almost doesn’t bear thinking about:

wealth might become concentrated in the hands of [a] tiny elite… creating unprecedented social inequality.

Well, I shan’t live to see it at least!

When you read Elon Musk, Ray Kurzweil or Yuval Noah Harari’s dire predictions that algorithms might one day decide to do things according to their own logic rather than human needs and wishes, you want to say, ‘Where have you been all your life? Aren’t you futurologists supposed to be unusually plugged into the zeitgeist? How come you haven’t seen what everyone else cannot avoid? You seem not to have been allowed out of the house very much. Let me tell you about something called the stock exchange and the bond market.’

‘What is good for a shareholder in a firm, you see, tends to be bad for the people who actually work there. Staff may find themselves doing unpaid overtime to fulfil a shareholder’s ambition to own one of those automatic up-and-over garage doors, where you can just remotely open it while still driving towards it … like you’re in the future already. What is good for people – full employment, say – is bad for bond holders, because full employment leads to inflation, and inflation lowers the value of bonds.’

One more reason for the rich to fantasise that algorithms may push humans out of the job market, I suppose.

Talking of which, the other thing that futurologists haven’t noticed but which everyone else has is the exponential growth in the numbers of security guards. They were supposed to be on the way out, remember, but inequality has been a boon for the private security industry. The only way there could be the 84 per cent decline in security guards – as Harari’s book predicts – would be an 84 per cent increase in equality. Any takers?

The Last Cyborg

The machine model of the organism was wound up and set walking in the 1600s by Descartes. In the following century, Julien Offray de La Mettrie was very impressed by The Digesting Duck, an automaton with over four hundred moving parts. Invented by an engineer called Vaucanson, the mechanical duck’s party piece was that a few minutes after ‘swallowing’ a piece of bread in its metal bill a green turd plopped out of its metal arse. That green turd was it for La Mettrie. He’d seen enough.

‘Let us then conclude boldly that man is a machine,’ he wrote in L’homme machine. The Digesting Duck clearly showed that ‘to make a talking man, a mechanism no longer to be regarded as impossible,’ was simply a matter of more moving parts.

This sort of talk remained more or less scientifically respectable until 1859, when Charles Darwin took a big sledgehammer and smashed the machine model into springs, cogs and sprockets, by way of showing that we are – deep breath – not machines but animals.

Everyone always talks about the hammer blow Origin delivered to the biblical story of Creation, which of course it did, but we should remember that it also snatched away science’s machine metaphor. This was no small matter of just depriving science of a figure of speech. The machine metaphor was the motherboard of an entire conception of the universe. Snatch it away and a whole world goes with it.

If animals are not machines, if brains do not operate according to the principles of physics, if neither tropical forests nor historical events follow mechanical principles, then how shall we explain their world? It proved too big an ask. After a while Darwin’s idea that we are not machines but animals was quietly dropped. Guiltily, like a dog eating its own faeces, the Church Scientific went back to its seventeenth-century cyborgs, and has stayed faithful to them ever since, illustrating a much-overlooked principle in the history of ideas: when the facts don’t fit the story, the facts have to go.

In The Structure of Scientific Revolutions, Thomas Kuhn advances his famous thesis that the accumulated weight of discarded evidence, discarded because it doesn’t fit the mould, ends up cracking the mould. This cracking of the mould, he says, takes the form of a scientific revolution, which ends up producing a new mould: the famous ‘paradigm shift’ that made Kuhn’s reputation.

Even though Thomas Kuhn’s shifting paradigms enjoy widespread acceptance, I’m afraid I don’t believe this Hegelian fairytale. I don’t believe that history follows organic patterns of growth and development in the way that watercress and slender lorises do. And I do not believe that history follows any kind of predictable rules like a cosmic mechanism. Just because the structure of science has cracked open once or twice before and found a bigger and better shell doesn’t mean that science must always act like a growing hermit crab. On the contrary, there is evidence to suggest that, far from cracking the mould, findings that don’t fit the old picture simply fall away.

I think the process by which the scientific community selects for and against ideas that fit or don’t fit the imaginative vision can be described a little differently from the Kuhnian paradigm shift.

The Glue and Glitter Painting

Rather than paradigm shifts, I think the process is more like primary school glue and glitter paintings. First the child paints a picture of a house in glue on a thick sheet of A3 art card. Four windows and a chimney. When colourful glitter is shaken over the card, glitter that sticks to the shape of the glue becomes part of the picture. Glitter that slides off the paper never becomes part of the house.

If, by the time we are in Year 6, we find ourselves waist-deep in glitter and still producing the same picture we came up with in reception, the famous paradigm shift is supposed to happen. The four windows and a chimney picture is thrown away and all the accumulated glitter on the floor is now used to make a new, more sophisticated, truer, more realistic, finer-grained picture of a house. But sometimes we just scoop up a handful of glitter, and show how it too can be used to make the shape of the four windows and a chimney house. Then we say: ‘Look! Doesn’t the fitting of the new evidence to the old shape show that the old shape was right all along?’

The only problem is that it becomes harder and harder to learn anything new in a classroom that is half-buried in multicoloured glitter.

The Glue and Glitter Painting process can be seen in the rejection of Darwin’s idea that we are animals, and the subsequent return to the seventeenth-century machine model. A return which opened the way for the Iron Laws of History that animate Kuhn’s Structure of Scientific Revolutions.

In neuroscience, the AI-influenced notion that your brain is a computer, an idea sometimes called ‘the computational theory of the mind’, is so gorgeously science-y that it cannot be allowed to go, no matter the evidence stacked up against it, and no matter that search parties now have to be sent into the deep drifts of multicoloured glitter to try to locate the sound of buried children tapping on the pipes awaiting rescue.