While I was in Germany I attended a symposium on assistive and wearable robotics. I’d emailed ahead to see if I could join. The enthusiastic back-and-forth I had with the organiser didn’t mention the cost of a ticket, and when I turn up at the table of laid-out name tags I’m asked which university I’m with. I’m just an interested member of the public, I say. I’m handed the card reader and told it will be €300. Too embarrassed to retreat, I pay up and find a seat where I’ll be invisible – nearly, but not quite, at the back of the room. The thirty-odd people attending are chatting away and for a dreadful moment I think the whole conference will be in German, but as soon as the facilitator stands and rubs his hands together, this pan-European group switches into English.
There are a few presentations during which I have literally no idea what the speakers are talking about – they might as well be in German (a very maths-heavy talk on control approaches, one on predictive models for human balance control, and another on online planning and control of ball-throwing by a VR humanoid robot …). I notice a tendency for the presenters to say the word ‘basically’ before explaining something that is anything but basic. Every time I hear ‘basically’, I feel a little further out of my depth and a little more stupid. But mostly the talks are fascinating.
I wasn’t sure exactly what an assistive and wearable robotics conference would cover. The most obvious example is the exoskeleton, and many of the lectures focus on the subject, but the field is far wider and seems to include any hardware that repairs or augments the body. And throughout the day we’re reminded that this isn’t just about healthcare. Slides are flashed up of exoskeletons designed for industry or the emergency services: one of a firefighter carrying a huge coiled hose up a stairwell, assisted by a compressed air-powered exoskeleton; two elderly Japanese farm labourers lifting apple crates while wearing pressurised air-powered suits (the point being made by the speaker that exoskeletons could prevent injury in manual labour and might keep an ageing workforce useful); and a Lockheed Martin lower-limb suit designed for soldiers and first responders – letting them carry more, further, with less fatigue.
My experience as an infantry soldier told me this would be helpful, in theory – I’d had to carry vast amounts of armour, munitions and comms equipment across dusty places and would have loved anything that assisted. But in practice, being dependent on yet another technology and its logistical burden (think batteries, charging and maintenance) – not to mention the risk it might fail at the critical moment you wanted to scramble into cover, leaving you exposed under enemy fire like an upside-down insect – made me reckon there was a fair way to go before combat troops would be stomping around in anything like this.
These limitations seem to be why exoskeletons remain tools for rehabilitation rather than devices that significantly help in everyday living. Yes, there are stories of people completing marathons while in exoskeletons, and a handful of pioneers use them day to day, but these are still the exception. Many of the presentations focus on those very specific and technical problems that need to be overcome so the next generation of exoskeletons might let a paralysed person routinely walk out of the house to do a bit of shopping. The hurdles are ones of battery weight and power – we’re told how efficient the human body is, performing all day long on a few hundred watts and with only one charge (breakfast); it really is a miracle, the scientist says, we are so far from being able to replicate anything like this – and, as ever, the challenge of interfacing the tech with the person.
How do you get an exoskeleton to operate in complete synergy with the body? There’s the mechanical problem: fitting the suit safely and comfortably to each human user, each of whom will be a different shape and size. Adjustable straps and extensions and flexible materials help, but the joints of an exoskeleton also need to mimic the body’s vast freedom of movement and range of speeds. Take the human knee: it is actually one of the most complex joints in the body. It doesn’t simply bend around a single pivot; it’s polycentric – as the femur rolls and slides over the tibia, the joint’s centre of rotation migrates. An exoskeleton needs to accommodate these sorts of intricacies, supporting us while not putting the biological joint under stress that might risk injury.
But the more difficult problem to solve is the control interface. Short of wiring up the brain (as Thibault had), how do the motors driving each robotic leg forward know what the user wants them to do, especially if the person is paralysed and has no way of initiating control or receiving sensory feedback? And how do you make the exoskeleton balance, stop it tripping over, and adjust to the infinite complexities of the real world?* The small instinctive reactions a human makes to stay upright and navigate our world are fiendishly difficult to replicate.
Moravec’s paradox was the surprising discovery by robotics and AI researchers in the 1980s that it isn’t high-level reasoning that is hard to replicate; what takes huge amounts of computation is actually the low-level sensorimotor skills we take for granted. We constantly use sensory information and turn it into useful movements; those movements then influence how we sense future stimuli, in constant feedback loops, which enables us to correct continually for errors and changes in the environment. Preventing a stumble is the sort of instinctive response we’ve become unbelievably good at – it has been encoded in us over billions of years. However, as Hans Moravec put it, ‘Abstract thought … is a new trick, perhaps less than a hundred thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.’
Take AlphaGo, for instance (the AI program that beat one of the top human players at the abstract-strategy board game Go in 2016, made by the now Google-owned company DeepMind). It’s an algorithm that decides which moves to make, based on knowledge it accumulated through reinforcement learning. It does this using artificial neural networks trained to estimate the winning percentage of each move, which then teach the program to make ever better choices until it is human-beating. The original AlphaGo was first shown a library of human games before improving by playing against itself; its successor, AlphaGo Zero, did away with the human data altogether and learned purely through self-play. But AlphaGo wasn’t a robot sitting opposite a human at the 19×19 Go board, it was an algorithm hosted on vast servers. A human had to move the pieces for it. Because we find moving the pieces so easy (to the point of it being automatic), it seems odd that it is more difficult for us to make a robot that can learn to reach out and pick up the Go pieces than it is for us to make one that can master the abstract problem-solving of a game we have to think so deliberately about.
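For the curious, the flavour of that learning-by-self-play can be sketched in a few lines of Python on a much simpler game: take one or two stones from a pile, and whoever takes the last stone wins. This is a toy illustration only – nothing like DeepMind’s real system, which pairs deep neural networks with Monte Carlo tree search – and every name and number in it is invented.

```python
# Toy self-play reinforcement learning (an illustration of the idea, not
# DeepMind's algorithm). Game: a pile of stones; players alternate removing
# 1 or 2; whoever takes the last stone wins. The program learns a value for
# each (pile, move) pair purely by playing against itself.
import random

PILE = 15                    # starting pile size (arbitrary toy choice)
ACTIONS = (1, 2)             # a player may remove one or two stones
Q = {}                       # Q[(pile, action)] -> estimated value for the player to move
ALPHA, EPSILON, GAMES = 0.1, 0.2, 50_000

def q(pile, action):
    return Q.get((pile, action), 0.0)

def choose(pile, greedy=False):
    legal = [a for a in ACTIONS if a <= pile]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                  # explore
    return max(legal, key=lambda a: q(pile, a))      # exploit what it has learned

for _ in range(GAMES):
    pile, history = PILE, []
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    # Whoever moved last took the last stone and wins (+1); the loser's moves
    # get -1. Walk back through the game, alternating the sign.
    reward = 1.0
    for pile_then, action in reversed(history):
        old = q(pile_then, action)
        Q[(pile_then, action)] = old + ALPHA * (reward - old)
        reward = -reward

# The greedy policy it ends up with for small piles:
print({p: choose(p, greedy=True) for p in range(1, 10)})
```

Left to run, the learned values usually settle on the game’s known winning strategy – always leave your opponent a multiple of three – without the rule ever being written down.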
These sorts of sensorimotor challenges are the subject of the most complicated lectures: endless slides of control algorithms, and gait-pattern analysis, and biological modelling. I’m left with a sense of how the engineering, computing and programming are iterative. Everyone seems to be working on something very specific and complex that, in isolation, is hard to grasp but, when understood in relation to all the other research, starts to make some sense – and this is just one small room of one conference.
After coffee there’s a talk from the company that makes my microprocessor knee. I hadn’t considered my prosthetic as a wearable robotic, but I suppose it is. Then two new upper-limb myoelectric hand prosthetics are presented.
Next, we’re on to robots proper. The word ‘perturbation’ keeps being used. It’s another maths-heavy talk, until we’re shown videos of a humanoid robot running on a treadmill: the noise of whooshing actuators and the square feet pounding the rubber. I’m surprised when a researcher’s hand comes into frame and gives the robot a push. So that’s what he means by a perturbation. The robot adjusts by stepping a leg out sideways to counter the thrust. It’s shoved again, harder, and the leg flicks further out; it nearly overbalances, but keeps on running. It’s an amazingly human-like response.
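There is a classic idea from the balance literature that gives a feel for what such a recovery step involves: the ‘capture point’ of a linear inverted pendulum. If a push adds sideways velocity to the centre of mass, stepping onto the capture point lets the robot come to rest – push harder and the point moves further out, which is why the leg flicks wider. The sketch below is only an illustration of that principle, with made-up numbers; it is not the controller the group presented.

```python
# A minimal sketch of the "capture point" from the linear inverted pendulum
# model of balance: where to place the foot so the centre of mass comes to
# rest after a push. Illustrative numbers only.
import math

G = 9.81           # gravity, m/s^2
COM_HEIGHT = 0.9   # assumed centre-of-mass height of the robot, in metres

def capture_point(com_position, com_velocity, com_height=COM_HEIGHT):
    """Sideways position (m) to step to, measured from under the hips."""
    omega = math.sqrt(G / com_height)        # natural frequency of the pendulum
    return com_position + com_velocity / omega

# A gentle shove adds 0.3 m/s of sideways velocity...
print(f"step out to {capture_point(0.0, 0.3):.2f} m")   # about 0.09 m
# ...a harder shove adds 0.8 m/s, and the leg has to flick further out.
print(f"step out to {capture_point(0.0, 0.8):.2f} m")   # about 0.24 m
```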
After the last lecture of the day I find myself unexpectedly in a minibus with a few others, being driven north. I’d been asked – I think by a PhD student – if I wanted to see their robots. I’d said sure, presuming they would be in the next room. It’s a half-hour drive to the Karlsruhe Institute of Technology (KIT), one of Germany’s top engineering and scientific-research universities. We’re shown two robots. The first is ARMAR-IIIb. He or she (we’re not told) has a humanoid upper body of blue panels covering a grey armature of pistons and wired workings. It’s twelve years old, but I feel like I’m standing in front of an antique – it reminds me so much of the robot in the 1986 film Short Circuit, one of my childhood favourites, that I feel they must be paying homage. ARMAR-IIIb wheels around a mocked-up kitchen with a chef’s hat on, opens the fridge, picks out the right ingredients and makes an omelette. It has slightly absurd white googly eyes, but we’re told they house cameras for peripheral and foveated vision, and that the system is now used for a number of different industrial applications across Europe.
I’d never really understood why anyone would bother creating humanoid robots. I thought it was a deluded pursuit of made-in-our-image entertainment and novelty fembots – surely there were easier ways to reach useful solutions than trying to mimic the strange, spindly instability of human bipedal anatomy. Take a robochef food-processor-type machine: you chuck ingredients in and it chops and cooks for you. It sits on the countertop like any other cooking aid and seems a more efficient and cheaper way of achieving the task. But watching ARMAR-IIIb wheel over, open the fridge, see the milk, name it, pick it out of the fridge, add it to the eggs it has just cracked and start whisking (even though there is a little spillage, and something slightly laboured about its movements) – added to the lectures earlier that day (the robot on the treadmill recovering its balance, for instance) – makes me realise why all these academics are so interested in humanoid robots. It is another way to learn about the body, and how we can best assist it when it is disabled.
Gait-pattern analysis helps us understand the way we balance, and an anti-stumble algorithm perfected in a humanoid robot might make a lower-limb prosthetic or exoskeleton more sure-footed. And if a robot can interact fully with our environment, it will be more useful – a robochef can cook only a few stew-like dishes; robots like ARMAR-IIIb can make an omelette, wipe clean the surfaces afterwards and fill the dishwasher. Watching this robot reach into the fridge makes me realise that the technologies it uses to pick up the milk are the same ones used in an upper-limb prosthetic that KIT presented during the morning.
The problem with the most advanced bionic-hand prosthetics (often called myoelectric prosthetics) is the way they are controlled by the human user. As the amputee tenses very specific muscles in their stump, carefully positioned electrodes in the prosthetic socket detect the electrical signals our muscles emit, and this activates the motors that open the fingers or twist the wrist into the right shape to perform a daily task. The drawback with this type of control interface is that it often requires a lot of training and is frustratingly laboured in practice. (It was what was so striking about the super-users racing down the Cybathlon course at the REHAB fair.) The electrodes also need to be in precisely the right spot, the skin needs shaving, and any sweating or movement can reduce effectiveness. Although they look amazing – the stuff of science fiction – the cognitive burden of controlling these expensive prosthetic upper limbs means people often end up not bothering, and around 50 per cent abandon them altogether.
To overcome this, the new KIT hand has a processor, distance sensor and camera embedded in the palm, which can identify a number of objects. Instead of the user having to shape the hand into the right grip with a series of tiring and counter-intuitive muscle movements, the hand identifies the object when it ‘sees’ it (a banana, a Coke can, a phone, keys, and so on) and moves its fingers and wrist naturally into the right shape to pick the object up, just like ARMAR-IIIb reaching into the fridge. And because the movements are preprogrammed, the user doesn’t have to think about it, and the whole thing looks more natural than even the best myoelectric prosthetic-hand user could manage.
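In caricature, the control logic is strikingly simple: the user provides only a coarse ‘go’ signal from the stump muscles, and the hand chooses a preprogrammed grip for whatever its camera recognises. The sketch below is an illustration of that idea only – the object labels, grip names and threshold are invented, not KIT’s actual software.

```python
# A hedged sketch of camera-assisted grip selection: a simple muscle signal
# says "grasp", and the hand picks a preprogrammed grip for the object it sees.
# All labels, grips and thresholds are invented for illustration.

EMG_THRESHOLD = 0.6   # normalised muscle activation needed to trigger a grasp

# Preprogrammed grips: object class -> (grip pattern, wrist rotation in degrees)
GRIP_LIBRARY = {
    "banana":   ("cylindrical", 0),
    "coke_can": ("cylindrical", 0),
    "phone":    ("precision", 90),
    "keys":     ("pinch", 45),
}

def select_grip(detected_object, emg_level):
    """Return the grip to execute, or None if the user hasn't signalled intent."""
    if emg_level < EMG_THRESHOLD:
        return None                            # user isn't asking the hand to close
    # Fall back to a general power grip for anything the camera can't name.
    return GRIP_LIBRARY.get(detected_object, ("power", 0))

print(select_grip("coke_can", emg_level=0.8))   # ('cylindrical', 0)
print(select_grip("keys", emg_level=0.3))       # None - no muscle signal yet
```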
I nearly say thank you to ARMAR-IIIb when we are led out of its kitchen and down the corridor to meet ARMAR-6. Like its older sibling, ARMAR-6 is on wheels, with a humanoid upper body. A shiny green shell gives it a more up-to-date look. It’s designed for the industrial setting and shows us a bit of cleaning, squirting a disinfectant bottle and wiping a surface. Then it rolls over to a dummy section of warehouse shelving and helps one of the PhD students unscrew and lift out a section. The student has hold of one end, ARMAR-6 the other, and while the student walks around, explaining what’s going on, the robot mimics his every move. We are invited to take turns raising and lowering ARMAR-6’s arms, which are ‘slaved’ to our movements. It’s effortless, even though the robot is holding a heavy section of shelf. It’s another example of a technology that would be invaluable in an exoskeleton or device to assist the disabled – or any of us as we age, for that matter.
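The trick that makes a heavy shelf feel weightless has a textbook name – admittance control: the robot measures the small force the human applies at the handle and converts it into a velocity command, while its own motors carry the load. Below is a minimal sketch with invented gains; it is the standard idea, not ARMAR-6’s actual controller.

```python
# A minimal sketch of admittance control: the force a human exerts on the
# handle is turned into a velocity command, so the robot follows the person
# while carrying the load itself. Gains and numbers are invented.

def admittance_step(handle_force, velocity, dt=0.01,
                    virtual_mass=5.0, virtual_damping=20.0):
    """One control tick: update the commanded velocity from the human's force.

    handle_force -- net force (N) the human applies, with the payload's weight
                    already compensated by the robot
    velocity     -- current commanded velocity (m/s)
    """
    # Simulate a light, well-damped virtual object:  m * dv/dt = F - b * v
    accel = (handle_force - virtual_damping * velocity) / virtual_mass
    return velocity + accel * dt

# The student lifts gently (10 N): the commanded velocity ramps up smoothly,
# even though the real shelf section is far heavier than 10 N could ever move.
v = 0.0
for _ in range(100):          # one second of control at 100 Hz
    v = admittance_step(10.0, v)
print(f"commanded velocity after 1 s: {v:.2f} m/s")   # about 0.49 m/s
```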
At the end ARMAR-6 asks us if we’d like a photograph, and everyone is smiling and taking turns. I ask one of the researchers how she feels about the robot: would she run in to save it, if there was a fire? She laughs. ‘Depends how big the fire,’ she says. ‘I’d feel sad for it if it burnt – but just in the same way I feel sad for anything I like.’
It’s my turn and I stand beside ARMAR-6, its arm around my shoulder, and take a selfie.
The next morning the symposium continues. There’s a new robotic skin, designed to let robots work in closer cooperation with humans – an exoskeleton lined with these sensory cells could ‘feel’ and adjust the pressure it is putting on the body and reduce rubbing, or a robot caregiver could more tenderly lift a patient out of bed. And reverse-engineering some future generation of this tech could perhaps restore spinal-cord-injured patients’ sense of touch.
Then two lectures on soft robotics. These are assistive devices with components made from compliant materials, which can be integrated into textiles and clothes. There’s a soft wearable actuator grip: a glove with rows of bladder cells sewn into it. When the cells are inflated, they push against each other, curling up into a grip – it’s very organic-looking, like a fern frond furling and unfurling. A video shows the sort of assistance it can give: the empty glove holds a 9-kg weight. Then we’re told about a soft exoskeleton, made of textiles with a flexible tendon that runs up the leg through joints at the ankle, knee and hip. It assists by giving extra power in those first degrees of standing up, when gravity needs to be overcome, or in the stance phase of walking. I think of my elderly grandparents ‘oofing’ as they struggled out of chairs.
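The logic of such a suit, at least in caricature, is to add help only at the moments the wearer needs it – the push out of a chair, the loaded stance phase of walking – and to stay out of the way the rest of the time. The sketch below is a hedged illustration; the sensor names and thresholds are invented, not taken from the systems presented.

```python
# A hedged sketch of when a soft exosuit's tendon might be tensioned:
# detect the loaded, effortful phases of movement from simple sensors and
# assist only then. All names and thresholds are invented for illustration.

def tendon_assist(foot_pressure, knee_angle_deg, knee_extending):
    """Return an assist level between 0 and 1 for the leg's flexible tendon."""
    in_stance = foot_pressure > 0.3          # foot is loaded (normalised pressure)
    rising = 60 < knee_angle_deg < 110 and knee_extending   # straightening from a deep bend
    if in_stance and rising:
        return 1.0     # full assist while fighting gravity out of a chair
    if in_stance:
        return 0.4     # modest support during the stance phase of walking
    return 0.0         # swing phase: stay out of the way

print(tendon_assist(foot_pressure=0.9, knee_angle_deg=95, knee_extending=True))   # 1.0
print(tendon_assist(foot_pressure=0.8, knee_angle_deg=15, knee_extending=False))  # 0.4
print(tendon_assist(foot_pressure=0.0, knee_angle_deg=40, knee_extending=True))   # 0.0
```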
While a rigid exoskeleton can provide more power and hold its shape when assisting the human body – essential in the case of a spinal-injury patient – it is heavy, bulky and needs lots of power. Soft robotic devices can’t give as much support, but they are lighter and designed to fit into our daily lives, to be used with a wheelchair, or in a car or armchair. I can see how they might have wider benefit in society, integrated into our clothes to make us stronger or faster, to keep us from ‘oofing’ as we age.
But perhaps the technology that I’m most impressed by, from the entire two-day symposium, is the simplest. It has the graveyard slot, when PowerPoint fatigue has set in and there’s a certain restlessness in the room – and I suspect it’s not as interesting for many here who are devoted to the high-tech. It’s a prosthetic hand that grips by simply being pushed against an object; each finger adapts to the shape of the object, so it can pinch or grasp and can pick almost anything up, but it’s purely mechanical – no heavy motors and batteries, or clever control algorithms. And because the user is pushing against something physical, they feel more feedback. It’s one of the lightest prosthetic hands in the world, is cheap to produce and could be used in low- and middle-income countries, where (let’s not forget) the majority of amputees live. The young professor from Delft University of Technology who is presenting describes it as a smart technology. ‘You wouldn’t use a Formula One car to do the grocery shop,’ he says. It’s a good-natured jibe at some of the ideas that have been presented and gets a laugh.
I’ve just spent two days listening to very complex, expensive solutions, and the one that makes most sense to me is a smart-tech solution – it is still a cutting-edge mechanism, inspired by the human hand, but there are no batteries, it is 3D-printable and intuitive to use.
And then the symposium is over and I am heading for the airport.
A few years back I met Aldo Faisal, a professor of AI and neuroscience, in his office at Imperial College London – whiteboards covered in equations and algorithms and feedback loops, quite sparse, a little messy, with a few incongruous artefacts dotted about: a book about a minimalist artist, and another about a fashion designer, which I liked to think might give him tangential inspiration. I’d asked to meet him after I attended a lecture he’d given on a new Artificial Intelligence Clinician he and his team were developing to help treat sepsis (a leading cause of death worldwide, and the most common cause of death in hospitals). This AI would sit alongside a patient in intensive care (no robots here, just software running on a pretty standard monitor, of the type you might already see) and measure vital-sign outputs and therapeutic inputs and recommend the best treatments. Professor Faisal had walked about the front of the lecture theatre holding the laser pointer, his enthusiasm bringing us along with him, and it was probably as close as you’ll get to the young rock-star professor of the movies. I was a little in awe.
I’d gone along because some of my experiences in hospital had been grim. For the most part the nurses and doctors had been kind and brilliant – but human error, fatigue and maybe, once or twice, inexperience had meant that I’d often (when I was conscious anyway) asked about a drug I was being given, and whether it was the right dose or right time. The medics had frowned – ‘I’ll just go and check.’ And once I was given something when I shouldn’t have been, and my kidneys hurt and I felt dreadful. It gave new meaning to the saying ‘You have to own your own recovery’ (which is the best tip I can give anyone going through complex medical care). And then I’d got a fungal infection – no one’s fault, just bad luck – and they’d had to remove my other leg.
Could an AI have performed better and spotted, in all the information produced by the beeping monitors I was hooked up to, the signs of infection before the only option left was amputation? In his lecture Professor Faisal outlined how challenging and time-consuming it was to take huge datasets from intensive care units, which had tracked every intervention and observation in thousands of patients, and clean them up ready to be used; how difficult it was to cancel out the noise of human behaviour and biology; and how bad data-gathering often is, if it happens at all – getting the dataset ready from which to create your AI seemed to be half the battle.
But once Professor Faisal’s team had it, they created an AI that extracted knowledge from patient data far exceeding the lifetime experience of any human doctor and, using reinforcement learning, could predict mortality and suggest the best treatment. While a good doctor can make judgements from about five or six different parameters, Professor Faisal told the lecture room, the AI Clinician could track around twenty different variables in order to make recommendations. It had also uncovered a tendency for doctors to increase the dose of drugs as patients became sicker, which often had little effect and sometimes even caused harm.
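In outline – and this is a toy sketch, not the team’s actual AI Clinician – the approach looks something like the code below: summarise the patient as a vector of many variables, collapse the measurements into one of a finite set of states, and look up the treatment a policy learned offline from historical intensive-care records. Every variable name, threshold and recommendation here is invented for illustration.

```python
# A toy sketch of the shape of such a system (not the real AI Clinician):
# many patient variables are collapsed into a coarse state, and a policy
# learned offline from historical ICU data recommends a treatment for it.
# All names, bins and doses below are invented for illustration.

PATIENT_VARIABLES = [
    "heart_rate", "mean_arterial_pressure", "respiratory_rate", "temperature",
    "SpO2", "lactate", "creatinine", "bilirubin", "platelets", "white_cell_count",
    "sodium", "potassium", "glucose", "haemoglobin", "urine_output",
    "GCS", "FiO2", "pH", "age", "hours_since_admission",
]  # around twenty variables, versus the five or six a clinician can juggle

# A learned policy is, in effect, a table: coarse state -> recommended doses.
LEARNED_POLICY = {
    ("low_MAP", "high_lactate"):      ("fluids: moderate", "vasopressor: low"),
    ("low_MAP", "normal_lactate"):    ("fluids: low", "vasopressor: low"),
    ("normal_MAP", "high_lactate"):   ("fluids: moderate", "vasopressor: none"),
    ("normal_MAP", "normal_lactate"): ("fluids: maintenance", "vasopressor: none"),
}

def discretise(obs):
    """Collapse raw measurements into the coarse state the policy was trained on."""
    map_state = "low_MAP" if obs["mean_arterial_pressure"] < 65 else "normal_MAP"
    lactate_state = "high_lactate" if obs["lactate"] > 2.0 else "normal_lactate"
    return (map_state, lactate_state)

observation = {"mean_arterial_pressure": 58, "lactate": 3.1}  # two of the twenty inputs
print(LEARNED_POLICY[discretise(observation)])  # ('fluids: moderate', 'vasopressor: low')
```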
In his office Professor Faisal had eaten his sandwich and, between mouthfuls, explained more and even stood up to wipe away some old diagrams from the whiteboard and scribble a quick perception–action loop. His interest was creating AI by reverse-engineering from first principles the algorithms that drive human behaviour. He said his work was almost science fiction; he let the rest of the team worry about how it could help the end user. Slightly out of my depth, I’d said how brilliant I thought all this AI stuff was. But he’d cautioned me: remember those passenger jets that crashed shortly after take-off; the on-board AI was being given faulty data from sensors that said the aircraft was stalling, when it wasn’t. The AI bypassed the pilots and, much as they fought to bring the nose up, the AI kept forcing it down. AI is only as good as the data we give it.
I’d left his office and walked through students queuing for their next seminar and been reminded of a conversation I’d had with one of the developers of my microprocessor knee. He’d told me how responsible he felt when they let the first knees out to the patients. ‘In the lifetime of your leg it has billions of decisions to make,’ he’d said. ‘It takes information from all its sensors, adds some historic data and feeds it into our magic control algorithm, which is a cascade of very elaborate control approaches, and decides what to do next – take a small step, a big step, stop a stumble, yield down some stairs, et cetera.’ Then he’d asked me: ‘If one out of a million decisions is wrong, do you think it will be a problem?’
I said I wouldn’t have thought so.
‘Let’s say it’s two hundred billion step decisions, for all our new knees out there in the world being used by amputees over a year. If one out of a million decisions is wrong, that is two hundred thousand missed steps; if one out of ten missed steps leads to a fall, that would be twenty thousand falls; and if one out of ten falls leads to an injury, that’s two thousand injuries. If one in two hundred injuries is fatal, that’s … well … not good. The health of the end user is a great responsibility for the control engineers. We have to get the whole system right. There’s no margin for error.’
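The general shape of such a cascade can be caricatured: classify each instant of sensor data into a gait situation, then set how strongly the knee resists bending. The sketch below is exactly that – a caricature, with invented states, thresholds and damping values, nothing like the real, far more elaborate controller.

```python
# A hedged caricature of a microprocessor knee's decision cascade: sensor
# readings are classified into a gait situation, which selects how strongly
# the knee resists or yields. All states, thresholds and values are invented.

def classify_step(knee_angle_deg, knee_velocity_deg_s, axial_load_N, ramp_angle_deg):
    """Map one instant of sensor data to a gait situation."""
    if axial_load_N > 300 and knee_velocity_deg_s > 150:
        return "stumble"            # sudden load while the shank is still swinging
    if ramp_angle_deg < -5 and axial_load_N > 200:
        return "descend_stairs_or_slope"
    if axial_load_N > 200:
        return "stance"
    return "swing"

DAMPING = {                          # how strongly the knee resists bending
    "stumble": 1.0,                  # stiffen to catch the fall
    "descend_stairs_or_slope": 0.7,  # yield slowly under body weight
    "stance": 0.5,
    "swing": 0.1,                    # let the leg swing freely
}

situation = classify_step(knee_angle_deg=20, knee_velocity_deg_s=200,
                          axial_load_N=450, ramp_angle_deg=0)
print(situation, DAMPING[situation])   # stumble 1.0
```

And his arithmetic is simple enough to write out; the final figure, the one he left unsaid, follows directly from his own ratios.

```python
# The engineer's own ratios, written out step by step:
decisions = 200e9                      # step decisions per year, across all knees in use
missed_steps = decisions / 1_000_000   # one in a million wrong   -> 200,000
falls = missed_steps / 10              # one in ten missed steps  -> 20,000
injuries = falls / 10                  # one in ten falls         -> 2,000
fatalities = injuries / 200            # one in two hundred       -> the figure left unsaid
print(int(missed_steps), int(falls), int(injuries), int(fatalities))
```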
We’ve long wanted to play God. It’s thought the ancient Greeks created automata that mimicked animals and people; da Vinci drew up plans for an artificial man – an armoured knight powered by cranks and pulleys and cables; the French inventor Jacques de Vaucanson created a mechanical duck that wowed crowds in the 1730s, flapping its wings, eating and defecating; and Thomas Edison brought a talking doll to market that was so uncanny it probably started the scary-doll horror genre. All these sit somewhere between magical illusion and mechanical wonder.
Now we can create truly useful robots, which are able to interact with us and our environment. In medical settings they can help treat patients with contagious diseases, clean wards and assist us with surgery. In nursing and retirement homes, experiments have shown patients developing emotional attachments to humanoid robots, which can help those with dementia and Alzheimer’s. The toy company Hasbro sells a lifelike robotic companion cat that can purr, roll over and blink when stroked. In the future, robots will take on more of these roles – assisting hospital staff and becoming care-givers to the world’s ageing population, lifting, cleaning, interacting.
Robots inspired by human biology can function in a world designed for us, but they also become the prosthetics and assistive tech that can repair and replace our damaged bodies. And the human brain (so often touted as the most complex object in the known universe) becomes the inspiration for scientists like Professor Faisal who are trying to mimic its capabilities and create general intelligence. AI already helps us make decisions from data that we couldn’t hope to crunch, and makes assistive tech more intuitive. The convergence of robotics and AI promises new technologies that might integrate with us more seamlessly, lessening both the physical impact on our bodies and the cognitive burden. This will be particularly powerful for the hybrid humans of the future.
Extended cognition is the idea that our mental processes are extended into our environment. Andy Clark and David Chalmers first described the idea in the 1998 paper ‘The Extended Mind’, which opens with: ‘Where does the mind stop and the rest of the world begin?’ When we undertake a mental task we use the things around us: for arithmetic, for instance, we might count on our fingers; for longer sums, use pen and paper; and get out the calculator for greater speed or complexity. We are constantly using ‘the general paraphernalia of language, books, diagrams, and culture’. In using one of these external entities (the pen and paper, say) we create a coupled system in which we delegate part of the task to the technology. This coupling counts as a cognitive process, even though it’s not wholly in the head. If you remove the technology (take the pen and paper away), our ability drops, just as it would if you removed a part of our brain.
The paper uses a thought experiment: Inga wants to go to an exhibition at the Museum of Modern Art, New York. She remembers that the museum is on 53rd Street, walks there and goes in. The belief that the museum was on 53rd Street was in Inga’s memory, ready to be accessed. Otto also hears about the exhibition and wants to go. But Otto suffers from Alzheimer’s. To overcome his disease, he carries around a notebook in which he writes any new information he learns – it stands in for his biological memory. Otto refers to his notebook and it tells him the museum is on 53rd Street, so he walks there and goes in. Was Inga’s and Otto’s belief that the museum is on 53rd Street any different just because Otto’s memory was held external to his mind, delegated to his notebook? Perhaps not – it had the same result. If you replace the late 1990s references in the paper – the calculator and Filofax – and make my smartphone and prosthetics some of the cognitive resources I bring to bear on the everyday world, then I am very much a coupled system.
I was once asked by a friend what it was like when my leg broke and I couldn’t get a replacement. In reply, I asked how she’d feel if she lost her phone. (She had all the photos of her children on it, all her work contacts and emails, all her notes and passwords, and wasn’t sure it was backed up.) She said, ‘I’d be distraught … devastated, and I don’t know how I’d get anything done.’ She also talked of the anxiety of not being in instant contact with people – that was scary. ‘It would be like a part of me was missing,’ she said. ‘That’s because your phone is like a prosthetic for you,’ I said. Being suddenly without my legs creates the same feelings for me. The only difference is that sometimes, when we lose our phones, after a day or two we realise that we’re going to be okay and it’s rather nice not being coupled to it any more. I don’t get that epiphany when my legs aren’t working.
This coupling is intensifying as the technologies we use get more sophisticated. The advances in AI and robotics will make for the most intense couplings. As with all technology, there will be benefits and costs: some we can see coming, others are over the horizon. But the disabled will be at the forefront of testing what the future might be like – and the question of where the human ends and the assistive technology begins opens up all sorts of legal and ethical problems: what if you snatched Otto’s notebook away from him and tore it up – would that be the same as damaging Inga’s brain?
This isn’t a problem for science fiction; it is already here. In 2009 a 6-foot 6-inch, sixty-three-year-old tetraplegic Vietnam veteran, who had almost no functional movement of his legs or arms, had his mobility assistance device (an electric wheelchair he was completely reliant on) broken by an airline on a flight from Miami to Puerto Rico.* He didn’t receive a replacement for a year. Bedridden and without his assistive technology, he had to hire in people to help him. He claimed against the airline for the extra costs. But the airline wouldn’t pay, saying that because it was a baggage-claim incident, he wasn’t entitled to any compensation. They hadn’t damaged the man, they said, only his device – likening it to a car accident where the owner was not in the car. But the man managed to prove legally that the assistive wheelchair was his prosthetic, functioning as an extension of his body, and that by harming the device the airline had harmed him, and he won the case. As we become more enmeshed with technology, it’s increasingly difficult to separate the person from the devices they rely on – who would Stephen Hawking have been without his mobility and communication devices?
* In the rehab centre I always felt how well I was doing; the floors were glossy flat, there were lifts and ramps and handles to make life easy. It was only when I went out into the real world, where every surface was uneven and sloped and ever-changing, that I realised how much more difficult mobility would be, and how much less effective my prosthetics were.
* The lawyer who represented him wrote up the case in a paper titled ‘Case Study: Ethical and Legal Issues in Human Machine Mergers (Or the Cyborgs Cometh)’.