“The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based’ economy?”
Stowe Boyd, Lead Researcher at Gigaom Research
There is a spectre haunting humanity, the spectre of job-stealing, sexy, murderous, calculating robots. Indeed, few technologies have captured our hearts and minds in popular fiction as much as robots. It’s when we hear that the robots we’ve known as toys and seen in the cartoons of our childhood will replace 50 to 70 per cent of our jobs that we really start to pay attention. We are realising that we must better understand the technology, and its possible impact on our community. Just as everyone needed a PC strategy in the 1980s, a website strategy in the 1990s and a social media strategy over the last decade, we will need a robot strategy not only for the next ten years, but for the rest of our lives. Robots aren’t going away, so we need to learn how to work with them and how to put them to work for us.
Let’s start with how we perceive robots based on their representation in fiction. We need to go back to 1868 and the first dime science fiction novel—The Steam Man of the Plains—to find the very first depiction of a robot in popular literature. The novel tells the story of a teenager who, understandably enough, invents a steam-powered man to drive him around the plains of the Midwest.1 This “Edisonade”2 also started a recurrent theme—that the cleverest people invent robots for their own personal use. The tradition continues to this day and includes the reality-blurring stories around Tony Stark aka Iron Man, played by actor Robert Downey Jr. (in character as Tony Stark, Downey Jr. has even handed out 3D-printed robot arms to amputee children).
Robots symbolise the love-hate relationship we have with technology, something we dread and anticipate in equal measure. In Avengers: Age of Ultron, robots are both the worst villain (Ultron) and the noblest hero (the Vision, worthy enough to lift Thor’s hammer, Mjölnir); in the Terminator movies, they are again both hero and villain.
In Ex Machina, a female robot outwits and deceives her inventor, the richest man on earth. In just six sessions through the glass of her prison, she fools an intelligent young programmer into falling for her and turns him against his billionaire employer. It is a cautionary tale: as robots become more and more like us, they may be able to play us against one another, dividing us, or possibly conquering us. Ex Machina could conceivably be considered a 21st-century reimagining of Frankenstein, the 200-year-old horror novel by Mary Shelley. In this version though, the creation doesn’t, by any stretch of the imagination, look like a monster as it roams the private home and lab of a fictional technopreneur who bears a passing resemblance to Google co-founder Sergey Brin.
The average teen student has likely already consumed thousands of hours of robot stories, mostly ones in which the robot is the ally, friend and hero, even a father figure. For instance, which robot is the most powerful? I would say the Sym-Bionic Titan. Which is the most loved? Probably R2-D2 or BB-8 from Star Wars. Which is the most deadly? Gort, perhaps, or the Daleks. Hundreds of hours of discussion are possible on this topic because robots are so engaging and we already know thousands of stories about them.
Depending on whom you talk to, robots will either bring us to a brave new world of amazing possibilities or destroy humanity and all it has created. One author even claimed recently that robots would relegate us to a position no more important than cockroaches. Are robots a threat to our future? Our prejudice against robots may fade as more robot models cross the uncanny valley, and as alternative non-humanoid forms integrate into our environment with positive impact.
The term “uncanny valley” was originally coined by Japanese robotics professor Masahiro Mori in 1970.3
Figure 4.2: Mori’s uncanny valley is a predictor of how we’ll respond to robots with human-like features.
As robots get closer and closer to simulating a human, a common response to them is that they are “creepy” or “eerie”. People demonstrating such robots remark that observers very often don’t realise the robot is not a real person for anywhere from seconds to minutes, but when they do grasp that the “person” in front of them is a robot, the reaction is almost always immediate and powerful. Observers are commonly surprised, then impressed, but their reactions then veer towards fascination and wonder, or alternatively towards fear and dread. The latter response is what Masahiro Mori defined as the uncanny valley.
Some roboticists fear the uncanny valley and do not believe it can be overcome any time soon; others avoid it altogether by going down a non-humanoid path. David Hanson of Hanson Robotics sees the uncanny valley as an opportunity for true artists to emerge in the humanoid robotics space. Hanson points out that Japanese and Chinese robotics companies frequently base their robots on Asian female faces. He notes, perhaps controversially, that the smooth skin and lack of wrinkles and discolouration in the features of Asian women make such templates the most effective at mimicking a human, at least compared with male or other ethnic templates. Hanson claims that his company is close to bridging the uncanny valley with its latest robots. With over 40 actuators and an extremely lifelike patented skin called “Frubber”, these robots look real and feel real, too. Hanson has even created an android that mimics the late science fiction author Philip K. Dick.
Figure 4.3: Robots like Otonaroid, developed by Osaka University, are getting closer and closer to mimicking humans. (Credit: Osaka University)
Hanson has also worked extensively on the accompanying software to ensure that the robot’s eyes and reactions are equally realistic. The software tracks multiple faces and keeps eye contact in a natural way, moving from eye to eye and then to the mouth and back again, just as we do, making the robot appear as human as any robot ever created. The head and neck movements are still a little choppy, but they are improving rapidly as processors and actuators improve.
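To make the idea concrete, here is a minimal sketch of how such a gaze-cycling loop might be structured. The face tracker and actuator driver here are stand-ins invented for illustration; this is not Hanson’s actual software.

```python
import itertools
import time

# The natural gaze pattern described above: eye to eye, then the mouth.
GAZE_CYCLE = ["left_eye", "right_eye", "mouth"]

def gaze_loop(get_face_landmarks, look_at, dwell_s=0.8, steps=6):
    """Cycle the robot's gaze across a tracked face's landmarks."""
    for i, target in enumerate(itertools.cycle(GAZE_CYCLE)):
        if i >= steps:
            break
        landmarks = get_face_landmarks()   # {"left_eye": (x, y), ...} or None
        if landmarks is None:
            continue                       # no face in view; keep polling
        look_at(landmarks[target])         # drive the eye/neck actuators
        time.sleep(dwell_s)                # dwell briefly, as humans do

# Stub tracker and actuator so the sketch runs on its own.
face = {"left_eye": (310, 200), "right_eye": (350, 200), "mouth": (330, 260)}
gaze_loop(lambda: face, lambda xy: print("look at", xy), dwell_s=0.0)
```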
Robots are still relatively rare today. Within a couple of decades though, they will outnumber humans. In 2014, sales of industrial robots increased 29 per cent to 229,261 units, according to the International Federation of Robotics.4 In 2000, the population of industrial robots was around 1 million, with 40 per cent of those situated in Japan; by 2010, the global robot population had ballooned to close to 9 million.5 Industrial robots, however, are only a small part of that total.
According to a research study by Tractica, annual shipments of consumer robots, a category that includes robotic vacuums, lawn mowers and pool cleaners as well as social robots, will increase from at least 6.6 million units in 2015 to more than 31 million units worldwide by 2020, with a cumulative total of nearly 100 million consumer robots shipped during that period. One thousand Pepper robots are being sold each month in China and Japan; Jibo just raised another US$16 million as it prepares to deliver 7,500+ units in March/April 2016. It is estimated that iRobot sold more than 100,000 home robots in 2015, with the Roomba 800/900 vacuum cleaner being the most popular.6 The Federal Aviation Administration (FAA) estimated that over 1 million drones were sold over the 2015 Christmas period alone.7
It is likely that we added close to 10 million robots to the global robot population in 2015 alone, counting industrial robots, household robots and military applications. But there are some big outliers coming in the next five to ten years, including autonomous vehicles. By 2025, it is estimated that between 15 and 20 million autonomous vehicles could be sold annually.8
By 2025, more than 1.5 billion robots will be operating on the planet, and that number will keep doubling every few years along an exponential growth curve. By the early 2030s, robots are likely to outnumber humans.
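A back-of-the-envelope sketch makes the projection explicit. Both inputs are assumptions taken from the text: roughly 1.5 billion robots in 2025, doubling every few years (2.5 years is one reading of “every few”).

```python
HUMAN_POPULATION = 8.0e9   # assumed roughly constant for this sketch
DOUBLING_YEARS = 2.5       # one reading of "doubling every few years"

robots, year = 1.5e9, 2025
while robots < HUMAN_POPULATION:
    year += DOUBLING_YEARS
    robots *= 2
    print(f"{year:.1f}: ~{robots / 1e9:.1f} billion robots")
# 2027.5: ~3.0, 2030.0: ~6.0, 2032.5: ~12.0
# On these assumptions, robots pass humans in the early 2030s.
```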
Robots can be very small, and will eventually be self-replicating. This will change everything, especially their numbers (insects outnumber humans by 200 million to one, and most people don’t notice or fear them) and their nature (they will match and exceed us in intelligence). Professors at the Massachusetts Institute of Technology (MIT) call what’s coming “the second machine age”, but I see the “robot singularity” as potentially far more significant, historically speaking, than that.
Evolutionary biologists posit that one of the single most significant events in the history of life on earth occurred shortly after unicellular (single-celled) life discovered or evolved multicellularity. About 570 million years ago, driven by the twin engines of evolution, random mutation and natural selection, life tried millions, possibly tens of millions, of combinations, resulting in a big bang of different body plans. After this globe-wide experimentation left some species in existence and others extinct, we ended up with virtually all the body plans that have existed in the 570 million years since; the primary changes since then have been not in body plans but in brain plans. Biologists call this burst of experimentation and variation the “Cambrian explosion”.
You could say that we are experiencing a neo-Cambrian age of robotic forms and functions, fed by the fertile imaginations of science fiction writers and the creative drives of the hacker and maker communities. Students equipped with ever-cheaper 3D printers and inexpensive Arduino systems, including microcontrollers, sensors and controls, are contributing too. These microcontroller components are cheap because the chips have kept the same specifications for years, almost a decade in fact, unlike Intel’s processors, which follow Moore’s Law, doubling in power every two years and consequently commanding a premium price. Thus, just as during the Cambrian explosion, this phase of exponential growth in robotics is going to produce body plans that succeed and iterate, and many designs that fail and are discontinued. If the history of technology disruption over the last 250 years is any guide, most of this experimentation is going to happen in a relatively short period of time.
All the barriers to widespread deployment of robots are falling fast. For years, robots were held back by the remarkably difficult problem of walking and navigating our world. This limited freedom of movement meant robots were locked in labs and designed to be stationary, or sold as toys meant to be swung or rolled or run around a track, which greatly limited their usefulness. Then someone hacked the problem by combining robots with radio-controlled planes and helicopters, and drones were unleashed. Suddenly, robots were no longer tethered and new use cases exploded. Robots are programmable machines that can operate with at least three axes of motion—drones qualify under that definition.9
Robots are here in relatively tiny numbers (say, 1 for each 100 humans) and are already having a huge impact. “Gartner predicts one in three jobs will be converted to software, robots and smart machines by 2025,” said Gartner research director Peter Sondergaard. “New digital businesses require less labor; machines will make sense of data faster than humans can.”10 How we interact with robots will determine how successful we will be in the new economy and society that is coming.
Kevin Kelly, former editor of Wired magazine, said, “This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots.”11
Robots will change everything about how we work, play, socialise and care for ourselves. Much as we identify someone as racist or bigoted today, we may come to identify people in the future by their willingness, or refusal, to work with robots. This chapter will explore how robotics will help us live better lives, be better people and be better stewards of this planet and beyond.
Today’s robots are evolving in as many different ways as species have over millennia, spanning forms and functions from avatars to Zambonis. In earlier chapters, we saw how major disruptive technologies since the industrial age have dramatically changed virtually every aspect of society. The coming Augmented Age will continue that evolution.
Just as office workers in the 1980s feared the personal computer as a threat to their livelihoods, the fear of robots is driving the same emotional reaction in an even broader set of workers. For the last 30 to 35 years, starting in the early 1980s, it was the few who embraced their personal computers who rose through the ranks (the people who understood the hardware and the software were far more likely to create new companies). So it will be for the next 35 years, from 2015 to 2050: those who embrace their robot co-workers will have enhanced careers and an unfair advantage in business, health, lifespan, safety, income and war.
Just as no one knew what a web designer was in 1990, the types of new vocations that will be created in the Augmented Age are difficult to predict. As introduced back in chapter 2, Pew Research surveyed technology builders and analysts in 2014 about how employment would be affected by new technology. It looked to those who in the past had made accurate and insightful predictions about the future of the Internet, and asked them to apply the same judgment to AI and robotics. The experts surveyed were evenly split on how this coming age would affect employment and jobs, but their answers offered reasons to be both hopeful and apprehensive. First, the reasons to be hopeful:
1. Advances in technology may displace certain types of work, but historically they have been a net creator of jobs.
2. We will adapt to these changes by inventing entirely new types of work, and by taking advantage of uniquely human capabilities.
3. Technology will free us from day-to-day drudgery, and allow us to define our relationship with “work” in a more positive and socially beneficial way.
4. Ultimately, we—as a society—control our own destiny through the choices we make.
And the reasons to be apprehensive:
1. Automation has thus far mostly affected blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
2. Certain highly skilled workers will succeed wildly in this new environment—but far more may be displaced into lower paying service industry jobs at best, or permanent unemployment at worst.
3. Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.
So the major disconnect seems to be whether you believe that these new technologies will augment our abilities or replace them.
Harvard social scientist Shoshana Zuboff examined how companies used technology in her 1988 book In the Age of the Smart Machine: The Future of Work and Power. She looked at how some employers used technology to “automate”, or take power away from, the employee, while others used technology to “informate”, or empower, the employee. Obviously, our thesis is that the latter is far preferable!
If we look at the last 30 years of software-based automation using customer relationship management (CRM) and enterprise resource planning (ERP) systems, we generally find that implementing the technology is the easy part. Getting employees to accept and embrace the new technologies and use them productively is the single most important factor. More often than not, these technology projects create more staff, contractor and consultant jobs than the automation ever replaces.
When these projects succeed, they usually informate, creating better employee and customer experiences and driving companies to be more successful, grow and hire. When they fail, heads roll, customer and employee experiences suffer and headcounts are reduced.
Projects that purely automate are far fewer, and even they can create more jobs than they displace. An interesting example is the warehouse automation company Kiva Systems, founded in 2003 by Mick Mountz after his experience with the failed online grocery delivery service Webvan. Webvan was going to put all grocery stores out of business. Mountz believed it failed because the high cost of warehouse operations made each order too expensive to fulfil using traditional material handling and warehouse management solutions (WMS). Mountz decided to create a better way to pick, pack and ship products, and Kiva was born. Teaming up with Peter Wurman and Raffaello D’Andrea, experts in robotics and engineering, Mountz and his co-founders created an entirely new way to automate the traditional warehouse.
In the traditional model, products arrive at a receiving dock and workers put them away on shelves using forklifts and carts. The same workers then go back into the warehouse to pick the products needed to fulfil orders for assembly, packing and shipping. Even with advanced WMS software from vendors like Manhattan Associates, HighJump or RedPrairie to optimise workforce efficiency, order fulfilment costs remained stubbornly high, especially for low-margin, high-product-mix orders like groceries. Kiva’s solution was groundbreaking and simple, and sprang from the answer to one question: why have people deliver products to and from shelves if the shelves can come to them?
Kiva’s system locates the automated guided vehicle (robot) closest to the item that needs to be moved and directs that bot to retrieve it. The mobile robots navigate the warehouse by following a grid of barcode stickers on the floor, with enough onboard intelligence and sensors to avoid running into each other and into obstacles. When a robot reaches the correct location, it slides underneath the shelf unit, lifts it off the ground with a corkscrew action and carries it to the specified human operator, who acts on the items.
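A minimal sketch of the dispatch idea, nearest idle robot wins, is shown below. The grid coordinates stand in for the floor barcodes; the names and structure are illustrative, not Kiva’s actual software. Manhattan distance is used because the robots travel along grid aisles rather than in straight lines.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    cell: tuple   # (row, col) of the barcode cell the robot occupies
    busy: bool = False

def manhattan(a, b):
    # Grid travel: distance along aisles, not as the crow flies.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def dispatch(robots, shelf_cell):
    """Pick the nearest idle robot and send it to fetch a shelf."""
    idle = [r for r in robots if not r.busy]
    if not idle:
        return None
    best = min(idle, key=lambda r: manhattan(r.cell, shelf_cell))
    best.busy = True
    return best

fleet = [Robot("bot-1", (0, 0)), Robot("bot-2", (4, 7)), Robot("bot-3", (2, 3))]
chosen = dispatch(fleet, shelf_cell=(3, 4))
print(chosen.name if chosen else "no robot free")   # -> bot-3
```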
After years of product development and marketing, Kiva made a large splash in the industry, and sales were just taking off when Amazon stepped in and bought the company lock, stock and barrel for US$775 million in March 2012.12 The acquisition of Kiva Systems was second in size only to Amazon’s purchase of Zappos in 2009. Amazon was persuaded that Kiva offered an unfair advantage. It immediately let go the entire Kiva sales and marketing staff and stopped all product sales. It seems Amazon considered the automation of its own warehouses so valuable that it would forgo the profits from selling the system in order to keep the technology away from its competition.
Today, Amazon uses the Kiva solution in its own warehouses to minimise warehouse labour and order fulfilment costs while improving order accuracy. Amazon offers us a glimpse of something that we’ll see often in the future: automation technology reduces the need for low-skilled workers and highly paid sales and marketing employees while creating an entirely new division within the company of highly skilled roboticists and AI software workers.
We said earlier that every person and company needs a robot strategy. Who are the early leaders in robot strategy and what are they doing? We’ve seen Amazon embrace robotics with its Kiva acquisition. Google acquired eight robotics companies and put them all in a former NASA blimp hangar,13 like a giant child’s playroom. It is probably the most cinematic yet confusing move of the lot, since the military nature of some of the robots seems at odds with Google’s core business and its civilian focus on search.
Apple’s main outsourced manufacturer, Foxconn (actually Hon Hai, headquartered in Taiwan), is realising that the only way it can keep Apple’s manufacturing business is by upgrading its factories to use more and more robots, making up for ever-increasing labour costs in China as the number of Chinese aged 15 to 59 (the traditional retirement age) shrinks by 3 to 4 million a year. It is the companies that don’t see the robot at the door that will find not just their jobs outsourced but their entire businesses displaced as well.
Those who are just entering the workforce or still at school will need to adapt to the changing employment landscape and acquire the skills to create, support or supplement the next generation of robotic workers. Computer science, all forms of electrical and mechanical engineering, and sales are just a few traditional areas that will remain viable fields for working with and for robots in the foreseeable future. It is as difficult to identify today what new careers and fields of expertise will be created in the next 20 years as it would have been to envision a Facebook marketing consultant in 1995. No doubt there will be massive opportunity if robotics fulfils the predictions of becoming a US$500-billion-a-year industry.
Robots have been in the healthcare industry for at least 30 years. The first documented use of a robot to assist with a surgical procedure occurred in 1985 when the PUMA 560 surgical arm was used in a neurosurgical biopsy. Since then, medical robots in various capacities—from surgical robots through to hospital couriers and telemedicine robots—have helped millions of people.
For countries like Japan and the United States, however, the use of robots may be the only remaining viable option for providing the adequate level of care their economies will require within five to ten years. Let’s look at how robots can be the solution to helping us stay healthy.
I expect the first mass market for humanoid robots to be nursing, based on the mismatch between supply and demand and on the force multiplier potential for hospitals and the medical industry as a whole, which is under serious pressure to rein in costs as government becomes an ever larger payer and bargaining partner for medical services.
The United States and Japan are both projected to experience significant shortages of registered nurses (RNs) as baby boomers age and the need for health care grows. Compounding the problem is the fact that nursing schools are struggling to expand capacity to meet the rising demand for care given the national move towards healthcare reform.
• According to the Bureau of Labor Statistics’ Employment Projections 2012–2022 released in December 2013, registered nursing (RN) is listed among the top occupations in terms of job growth through 2022. The RN workforce is expected to grow from 2.71 million in 2012 to 3.24 million in 2022, an increase of 526,800 or 19 per cent. The Bureau also projects the need for 525,000 replacement nurses in the workforce, bringing the total number of job openings for nurses due to growth and replacements to 1.05 million by 2022.14
• A July 2015 Japanese Ministry of Health, Labour and Welfare estimate showed that Japan will face an acute shortage of nursing care workers as the ageing of its population accelerates over the next decade. The nation will need 2.53 million nursing care workers by 2025. It had 1.77 million nursing care workers in 2013. To meet this target, 800,000 to 1 million more nurses will be needed by 2025. However, unless the current pace of increase picks up, the number of nursing care workers will fall short of demand by at least 380,000.15
• According to the “United States Registered Nurse Workforce Report Card and Shortage Forecast” published in the January 2012 issue of the American Journal of Medical Quality, a shortage of registered nurses is projected to spread across the country between 2009 and 2030. In this state-by-state analysis, the authors forecast the RN shortage to be most intense in the South and the West.16
• In October 2010, the Institute of Medicine released its landmark report on the future of nursing, initiated by the Robert Wood Johnson Foundation, which called for increasing the number of baccalaureate-prepared nurses in the workforce to 80 per cent and doubling the population of nurses with doctoral degrees. The current nursing workforce falls far short of these recommendations, with only 55 per cent of registered nurses prepared at the baccalaureate or graduate degree level.
In addition to the above, a significant segment of the nursing workforce is nearing retirement age.
• According to a 2013 survey conducted by the National Council of State Boards of Nursing and the Forum of State Nursing Workforce Centers, 55 per cent of the RN workforce is aged 50 or older.
• The Health Resources and Services Administration projects that more than 1 million registered nurses will reach retirement age within the next 10 to 15 years.
• According to a May 2001 report, “Who Will Care for Each of Us? Addressing the Long-Term Care Workforce Crisis” released by the Nursing Institute at the University of Illinois College of Nursing, the ratio of potential carers to the people most likely to need care—the elderly population—will decrease by 40 per cent between 2010 and 2030.
Simply put, in developed economies like the United States and Japan, we do not have enough nurses to care for us today, and the problem is only getting worse. The United States imports nurses from other countries, especially the Philippines. Over 20 per cent of nurses in California today are Filipino, even though Filipinos make up only 3 per cent of the Californian population. Changes to immigration laws in 2009 made it increasingly difficult for nurses to enter the United States, and given the shortage of US-based nursing programmes, the problem of maintaining a supply of trained nurses sufficient not only to meet demand but to bring down costs is about to get much, much worse. Tough immigration laws in Japan likewise limit the critical intake of trained nurses into that country.
Let’s take a look at how robots can augment our caregiving in a more human way for both patients and the carers.
Meet Maria. She is a 25-year-old baccalaureate-prepared nurse living in Manila. She has a small child, her husband has a good job and they both have close family ties. Finding work as a nurse in the Philippines is almost impossible due to the intense competition; there are 430 nursing schools, and becoming a nurse is seen as the best ticket out. To practise her trade, Maria, like many others, has to leave her home country and family to seek work abroad. If she is lucky and can get an H-1 visa to work in the United States, she will probably need to leave her husband and child behind. Amazingly, the right robots might enable her to have the best of both worlds…
What if Maria could stay in Manila and still work with patients in the United States? Imagine Maria in a call centre or even working from home. She is at her computer monitoring ten robot companions in an assisted care facility in Los Angeles. Each patient has a personal dedicated companion robot sitting by his or her bedside, running standard artificial general intelligence (AGI) software in a semi-autonomous mode. In this mode, the personal robot will be able to carry on conversations, answer basic questions and help the patient get assistance or entertainment. Cameras and sensors in the robot will be able to read the patient’s blood pressure, wakefulness, heart rate, emotional state, etc.
At any time, Maria can extend her telepresence into the robot and thereby see through the eyes of the robot and make use of the data from the robot’s sensors. Maria-in-the-robot is now able to check out both qualitative and quantitative data (including a temperature spike, blood pressure drop, an Alzheimer’s episode, etc.) or just do an hourly check-in. Maria can alert a local nurse or have a doctor take over or join her in the telepresence session. Maria can also bring in a family member to the session or update them on their loved one’s condition.
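A sketch of what the semi-autonomous check-in logic might look like follows. The vital-sign thresholds are illustrative placeholders, not clinical guidance, and the escalation hook stands in for paging a remote nurse like Maria into a telepresence session.

```python
# Illustrative alert rules; real thresholds would come from clinicians.
ALERT_RULES = {
    "temperature_c":  lambda v: v >= 38.0,           # fever spike
    "systolic_mmhg":  lambda v: v <= 90,             # blood pressure drop
    "heart_rate_bpm": lambda v: v < 40 or v > 130,   # brady/tachycardia
}

def triage(readings):
    """Return the names of any rules this set of readings violates."""
    return [name for name, is_bad in ALERT_RULES.items()
            if name in readings and is_bad(readings[name])]

def check_in(patient_id, readings, notify):
    alerts = triage(readings)
    if alerts:
        notify(patient_id, alerts)   # escalate to the remote nurse
    return alerts

# Example hourly check-in with a fever reading.
check_in("room-12", {"temperature_c": 38.4, "systolic_mmhg": 118},
         notify=lambda pid, a: print(f"ALERT {pid}: {a}"))
```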
Maria can now live at home with her family and still make an excellent wage. Currently, a nurse in the Philippines is paid approximately US$500 per month, while the same nurse in Los Angeles is paid US$8,000 per month, so even doubling her income in her home country would be a win-win for everyone, including the patients, who gain the safety of 24/7 monitoring and video streaming to the cloud to deter abuse and theft. The added ability for family members to “visit” at appropriate times is invaluable.
These robots become “a force multiplier”, to use a military term. One nurse can now do the work of many, helping to solve a shortage of trained nurses that is only growing as the population ages.
Surgical robots will continue to evolve, improving our ability to perform procedures that human hands alone cannot. In April 2015, Google and Johnson & Johnson announced plans to team up to create a new generation of surgical robots that they say will surpass the current da Vinci Xi by Intuitive Surgical.
The teaming of Google, whose Calico division was created to solve a little problem we call death, and Johnson & Johnson, the giant of home healthcare products, is a watershed moment that will bring robots to the operating room in unprecedented numbers. One can imagine what the technologies from Google’s Boston Dynamics division, from Calico and from J&J’s incredible depth of medical device knowledge will bring. Humanoid robot surgeons will change everything, and within a decade many patients will likely prefer, or even demand, them.
If this seems far off, consider that we are already moving towards a world where robots can perform surgeries without human intervention or interaction. Bioengineers at Duke University announced recently that they have created a robot called Biopsy Bot that can “locate a man-made, or phantom, lesion in simulated human organs, guide a device to the lesion and take multiple samples during a single session”, all without the supervision of a doctor. The robot processes 3D ultrasound data and sends commands to a mechanical arm with sensors to examine the lesions and take samples.
“One of the beauties of this system is that all of the hardware components are already on the market…We believe that this is the first step in showing that with some modifications, systems like this can be built without having to develop a new technology from scratch.”
Professor Stephen Smith,
Duke University Department of Bioengineering team lead
Does this mean that we are not going to need surgeons? No. We will need surgeons to help design, test and operate these robots as we increase the scope of what can be automated. Robots are already assisting in surgeries, helping to eliminate human error and enabling far less invasive procedures with much better results.
Today, a surgeon may be able to perform two or three surgeries in a day and usually has only one or two surgical days per week. With robots to assist, more people can be treated faster, and if a patient cannot travel or is in a remote location, the robot can be sent to the patient at a far more reasonable cost than a human counterpart.
Robot surgeons can and will dramatically drive down the cost of procedures that currently run to tens or hundreds of thousands of dollars. As of 2015, we already have stunning examples: for instance, a knee surgery that would cost US$80,000 if performed by a human would cost only US$800 if performed by a robot. Someday, we may all have a robot doctor on call, or even own a home-based medibot providing a level of care afforded only by the wealthiest individuals today.
Robot nurses, phlebotomists, surgical assistants, anaesthesiologists and pharmacists are all being developed and are essential to handling the healthcare needs of our ever ageing population.
Telemedicine robots are also making a huge impact on the future of medicine in hospitals and at home. The first telepresence robot to receive FDA approval is being rolled out, literally, in hospitals around the country: the RP-VITA, a joint venture between InTouch Health and iRobot.
The ability of healthcare professionals to move around chaotic environments like hospitals and to visit patients regardless of geography is creating efficiencies that could bring back that nostalgic nirvana, the doctor’s house call. Combine self-driving cars with these types of telepresence robots and a new paradigm in health care is born, as doctorbots can simply call an Uber to make 20 to 30 house calls a day.
In the United States, someone turns 50 every 8 seconds. Each year, more than 3.5 million boomers turn 55. In 2012, Americans aged 50 and above reached the historical milestone of 100 million. According to the Administration on Aging, ageing will have a huge impact on the United States:
• The number of Americans aged 45 to 64, who will reach 65 over the next two decades, increased by 31 per cent during the past decade.
• If you reach 65, you can expect to live almost 19 more years.
• About 31 per cent (11.2 million) of older people live alone.
• The population aged 65 and above will increase from 35 million in 2000 to 55 million in 2020.
• The number of those aged 85 and above is projected to increase from 4.2 million in 2000 to 6.6 million in 2020.
As of 2012, 22 per cent of Japan’s population was already over 65. By 2060, the government expects the population to shrink from 127 million to 87 million as the over-65 demographic grows to almost 40 per cent of the nation. In 2010, Japan already had 30 million elderly and infirm individuals requiring care but substantially fewer than the projected 2 million carers needed to look after them, and turnover amongst those employees was already 17 per cent per year.
Ageing populations are a global phenomenon. By 2030, 55 countries are expected to see their 65-and-older populations comprise at least 20 per cent of their totals. Already, there are more people aged 65 and above than the entire populations of Russia, Japan, France, Germany and Australia combined. By 2040, the world is projected to have 1.3 billion older people, some 14 per cent of the total population.
Long-term care facilities are growing to meet the demand of our ageing populations, but creating an environment that is safe, emotionally supportive and works to stabilise or improve health is a difficult task. Patient abuse, theft, overcharging and neglect are all real problems, exacerbated by how emotionally difficult it is to work in these environments, surrounded by suffering and bearing witness to predictable declines in functionality (though, as the next chapter describes, there are ways to slow or reverse the physical decay for the few who will do what it takes). It is estimated that 70 per cent of Americans who reach the age of 65 will need some kind of long-term care for at least three years during their lifetime. In hospice care, the strain on carers looking after fellow humans in their last days of life can be unbearable.
How are we going to provide quality attention for this ever-growing population in a caring and compassionate way? Robots that are programmed to emulate caring may be our best and only option.
Countries like China, Japan and South Korea are investing vast amounts of money in carebots because their eldercare dilemmas are approaching faster than America’s. We need leadership to take ownership of creating the systems and regulatory environments that will allow for these innovative techniques. According to Transparency Market Research, the medical robotic systems market will reach US$13.6 billion in 2018, up from US$5.5 billion in 2011; considering the efficiencies and declining cost curves of robots, however, that is far too small a portion of the nearly US$3 trillion in annual US medical spending.
Figure 4.10: Economy, trade and industry projections for Japan (Source: Japan Ministry of Health, Labour and Welfare study)
Today, we can create a companion robot that sits in the room with a patient and is attentive to his or her needs at any time. These robots can gather body temperature and spot signs of fever or low blood pressure from across the room; their cameras can read a patient’s heart rate and emotional state. The robot can hold basic conversations to keep the patient company, stimulated and mentally engaged, and can make sure the patient is taking medications and getting up and moving when needed. Carebots can remind them of appointments, guide them through their physical and occupational therapy exercises and ensure fluid and food intake.
The conversational user interface, or CUI, allows anyone who can speak to work with the robot in a way that requires no training, special skills or special equipment. Carebots can pose questions to stimulate the brain, play word games, sing songs, play music and tell stories. (This is not new. Consider that The Country Bear Jamboree has been a part of Disneyland Park for decades, as have the singing robots of It’s a Small World.)
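The shape of such an interface is simple enough to sketch. Below, a trivial keyword router stands in for real speech recognition and language understanding, and the prompts are invented examples. The point is the loop: no keyboard, no training, just talk.

```python
PROMPTS = {
    "medication": "Have you taken your two o'clock tablets?",
    "exercise":   "Shall we do your arm stretches together?",
    "story":      "Tell me again about your first job. I am listening.",
}

def respond(utterance: str) -> str:
    """A stand-in for real language understanding: route by keyword."""
    text = utterance.lower()
    if "pill" in text or "medication" in text:
        return PROMPTS["medication"]
    if "bored" in text or "tired" in text:
        return PROMPTS["exercise"]
    return PROMPTS["story"]   # default: prompt for reminiscence

print(respond("I'm bored today"))   # -> "Shall we do your arm stretches together?"
```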
Figure 4.11: Japan is investing hundreds of millions in so-called carebots. (Credit: Roebear Robotics)
The robots can even gather relevant information from patients about their life stories, turning speech to text and formatting it for electronic patient records. We lose so much knowledge with each generation as people pass away; these robots could become curators of the oral tradition that used to be passed down from generation to generation but is now being forgotten. With simple prompting, stories of your grandfather’s adventures 40 years ago can be immortalised in your family’s personal archives.
When family or medical staff visit in person, the robot can help with the interaction by understanding preferences and context, reminding the patient of events, or relaying details from the nurse or doctor, paperwork that needs to be completed or changes to medication dosages. Because the cameras stream to the cloud, theft and abuse of patients should drop significantly. Signs of stroke, Parkinson’s, pain, shortness of breath, emotional distress, an Alzheimer’s episode and the like can all be detected, a nurse, doctor or family member alerted and medical attention given immediately.
Each year, nearly a million people in Europe suffer a cardiac arrest. A mere 8 per cent survive, largely due to the slow response times of emergency services. An ambulance drone flying at 100 kilometres per hour could arrive within minutes to treat such emergencies. Alternatively, a carebot could have heart attack treatment functionality downloaded, or serve as a telepresence unit for a remote on-call doctor. These options could dramatically decrease deaths from heart attacks, and will probably make heart disease fall below cancer as a source of death by 2018.
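The response-time arithmetic is easy to check. The 100 km/h speed is from the text; the six-minute window is my assumption, a commonly cited figure for how quickly defibrillation must begin.

```python
SPEED_KMH = 100
km_per_min = SPEED_KMH / 60          # ~1.67 km per minute of flight

# Radius a drone could cover within a given response window.
for minutes in (1, 3, 6):
    print(f"{minutes} min -> {km_per_min * minutes:.1f} km radius")
# 1 min -> 1.7 km, 3 min -> 5.0 km, 6 min -> 10.0 km
```

On those assumptions, a single drone station could plausibly cover an entire urban district within the window where defibrillation still saves lives.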
Companies like Hanson are developing robots that can mimic anyone: scan your face and a 3D-printed likeness can be produced. If you want your face on the carebot, this will be possible by 2017.
At any time, whether in response to sensor input, a patient request, an emergency or just a daily health check, a nurse or doctor can remote-presence in and begin communicating with the patient by voice and video in an extremely efficient manner. Trained nurses or emergency response operators in a call centre could “remote in” immediately and assess the patient’s situation more accurately than they can over the phone today, asking questions while paramedics are dispatched.
Some claim the elderly do not want to interact with robots, but the number of YouTube videos showing older people happy with their carebots is increasing exponentially. Another criticism is that robots are incapable of “caring” for people, that because carebots are not human, they are poorly suited to the task.
A robot has no judgement and is not going to get frustrated or feel insulted. Properly designed and operated, robots won’t get exasperated hearing the same story for the 15th time that day. Robots can take abuse, verbal and physical, or deal with unpleasant sounds or smells, and not retaliate or walk out. A robot can be in the room 24/7 and never need a break or vacation. Carebots will be able to help people live better, longer and with more freedom than people of the past ever thought possible.
Why build robots that look like humans? This is a great debate in the robotics community. One side believes that robots should be purpose-built, designed for specific functions. Others believe robots should be anthropomorphic and take advantage of nature’s designs, borrowing as much as possible from the biological victors of the Cambrian explosion’s survival contest.
A member of the purpose-built camp is roboticist Eliot Mack. Mack is currently CEO of Lightcraft Technology, a digital effects company in Venice, California. An MIT grad, he started out at Walt Disney Imagineering, then played a key role in the mechanical engineering of the world’s most popular robot, the Roomba from iRobot. Mack asserts that the most efficient and logical way to design a robot is for a specific purpose. The idea of robots looking like people or hummingbirds or dogs is ridiculous to him. He is a formidable standard-bearer for his faction because he has spent a long career creating amazing robots that do not have eyes, fingers or smiles.
Do Robots Need to Look like Humans?
Exclusive Interview with Eliot Mack, founder of Lightcraft Technology
A graduate of MIT, Eliot Mack previously worked at Walt Disney Imagineering and iRobot. In 2004, Mack followed his interests in visual effects and motion tracking and founded Lightcraft Technology, applying robotic techniques to the motion picture industry to create what would become the Previzion virtual studio system. He is a recognised authority in the mechanical engineering of non-humanoid robotics.
Q. Eliot, you’ve talked previously about the core drivers in designing robots for the future, and you’ve made a case that robots don’t need to look like humans to be effective. Can you tell me about the thesis behind this?
A. In a nutshell, there is no Turing-complete mechanical system. Atoms do not generalise the way bits generalise. Looking at the history of computers is misleading: a key central innovation (the general-purpose CPU) could be arbitrarily reprogrammed to any other data manipulation task, as the movement of data bits was pretty close to free. Things shrank and got faster over time, but programming has remained remarkably unchanged over 70 years. Moving atoms, however, is not free. Gravity exists, and it’s a big deal to move against gravity. Animals look like animals because of a few rules:
• All the components are grown at the same time, so complex structures are not a problem.
• Linear actuation (muscle) is efficient and reliable, and can be both strong and precise, so you can do a wide range of tasks with the same manipulators (hands and feet).
• You can’t have continuous rotary joints (wheels).
• You have to navigate broken terrain, or you get eaten.
Robots look like robots because their rules are different:
• Every part has to be manufactured and attached to every other part, so complexity costs a lot.
• Linear actuation is very heavy to keep from binding, so you’re stuck with rotary motors, gears and pulleys.
• The cost of complexity means that you can’t build general-purpose hardware and have it work competitively. The hardware has to be designed explicitly around the task it has to perform.
• Continuous rotary joints are the easiest and most efficient thing to build and maintain.
• You’re mostly navigating on roads/floors/flat surfaces. Robots are rarely eaten by other robots.
Q. Are these two worlds converging? Is it possible that, over time, it will just be more efficient to design robots like humans because that is how we’ve designed the world to fit us?
A. The differences between the two worlds (animal and robot) arise due to the underlying problems of locomotion, fabrication and control systems, and are unlikely to change rapidly over time. Solving one of these problems (for example, creating artificial muscles) still leaves you with the manufacturing and control problems to solve. This means that for the foreseeable future, robots have to be built to do a specific task very well if they are to be competitive. It’s why Roomba doesn’t look anything like Rosie.
The other side has its star advocates as well, including the great oracle of robotics, Isaac Asimov. Asimov schooled three or four generations in the merits of anthropomorphic robots, primarily via the 13 or so volumes of the Foundation series, which HBO is turning into its science fiction counterpart of the fantasy series Game of Thrones. In the robot novels and short stories set in our time and the near future, Dr Susan Calvin and her colleagues have the same debate between purpose-built and anthropomorphic designs, and Dr Calvin explains why robots should look human: robots need to be designed like humans so they can live in our world and use our tools. Humanoid robots with hands and arms can use the same doors, cars, dwellings and tools, in the same places, that humans do. So beyond the emotional connection and communication reasons, they need to be compatible with the world we live in so that we don’t need to build special environments just for them.
This may not seem like a big deal now, but when there are billions of robots around, it will mark a historically important shift. In the end, the answer to the debate between purpose-built and anthropomorphic is not either/or but both. A Roomba would be disconcerting with a face, while a robotic psychologist needs caring eyes and the ability to give a good hug.
Should the arms that help a person who has fallen be soft, with human-like hands, so that grasping them feels familiar and comfortable, or should they be simple grips? Does such a robot need a face of any kind? Should the same robot be purpose-built for one task or able to perform many? These are a few of the questions being debated today, but they are all solvable, and the benefit is too great to delay the creation and deployment of carebots. Cambrian explosion 2.0. Try everything. Keep what works.
In June 2014, a Time magazine article, “Meet Pepper, The Robot Who Can Read Your Emotions”, introduced much of the mainstream public to the idea that robots can have something akin to our own emotional make-up. Judging by most of the social media reaction, this was met with disbelief and fear. As usual, references to angry Terminators and Marvin the Paranoid Android from The Hitchhiker’s Guide to the Galaxy were immediately invoked.
The ability of robots to have feelings and, as importantly, to read emotions in others may seem far off, but many in the robotics and artificial intelligence community see it as not only possible but inevitable and needed, perhaps even near term. Many of the leading scientists and engineers working on AGI believe that truly thinking, learning machines that interact with people must be able to understand emotions and emote back if they are ever going to communicate properly.
We humans do this naturally, though at different levels of ability. Each person has a unique emotional quotient, or EQ. Where IQ measures our ability to manage information, EQ measures how well we recognise emotions in ourselves and others and how we manage those emotional states. Spock from Star Trek and Sheldon from the popular sitcom The Big Bang Theory are examples of characters with extremely high IQ but much lower EQ. We have all met people with varying levels, or lack, of EQ and can see how they have a difficult time relating to the rest of society. Robots will be no better at understanding and communicating with us if they also lack this important ability.
It is obvious why robots need to be able to read our emotions when communicating. If they cannot understand anger, frustration and sadness, they will not be able to respond properly to us when we are in these states. In health care especially, empathy will be very important. Imagine a nurse robot that cheerfully responds to a client who has just lost a loved one, a robot phlebotomist that doesn’t realise the patient is afraid of needles or a pharmacist robot that doesn’t know the patient is lying to it. These robots will need to read, and react properly to, the emotional states of the people they come in contact with. The ability to read many emotions at this level is available today and is being used across many applications, including customer service and marketing.
An Israeli company called Beyond Verbal claims it has technology that can determine a range of distinct emotional traits using voice alone. Its software can listen to a voice and estimate the sex, approximate age, basic health, mood, attitude and emotional type of the speaker. There is even evidence that this technology can diagnose an array of illnesses, such as cancer, Parkinson’s disease and autism, just by intonation analysis. Combine this software with today’s camera technology, add facial expressions, body language, temperature and situational awareness, and we have robots that could learn, and eventually display or emulate, emotions better than many humans.
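A hedged sketch of that combination step: assume a voice analyser and a facial-expression analyser each emit per-emotion scores between 0 and 1 (the analysers themselves are assumed, not implemented here), and blend them with a simple weighted fusion.

```python
def fuse(voice: dict, face: dict, w_voice: float = 0.4) -> dict:
    """Blend per-emotion scores from voice and face into one estimate."""
    emotions = set(voice) | set(face)
    return {e: w_voice * voice.get(e, 0.0) + (1 - w_voice) * face.get(e, 0.0)
            for e in emotions}

voice_scores = {"sad": 0.7, "angry": 0.1}   # e.g. flat, slow intonation
face_scores  = {"sad": 0.5, "afraid": 0.3}  # e.g. lowered brows, averted gaze

estimate = fuse(voice_scores, face_scores)
dominant = max(estimate, key=estimate.get)
print(dominant, round(estimate[dominant], 2))   # -> sad 0.58
```

A production system would learn the weights from data rather than fixing them by hand, but the principle, several weak signals fused into one stronger estimate, is the same.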
If understanding emotions will help robots communicate with us, do robots need to have emotions themselves? Yes. They need to be able to emote back to us so that we can understand what they are communicating, in exactly the same way.
A robot needs to emote because we rely on facial expressions and body language to understand context and meaning. In some ways, we are very simple creatures when it comes to communicating in person. When a person smiles at you face to face, you have a natural tendency to smile back. Interestingly, this is not the case when someone smiles at us on a video screen, no matter how high resolution that screen is. Our lizard brain, the amygdala, is programmed to read facial cues and body language at a very deep level, and it speaks to us on an emotional plane that only a physically present human face seems to trigger.
This is one of the driving forces behind several companies creating human-looking androids with faces as lifelike as possible. As mentioned before, David Hanson of Hanson Robotics is considered the world’s leading designer of human androids, and this emotional connection is the main reason he tries to create androids that are as lifelike as possible. Jong Lee, the CEO of Hanson Robotics, speaks passionately about the need for Hanson robots to be able to emote more micro-expressions than just about any human when he discusses the robot Han.
“Han’s really exciting because not only can he generate very realistic facial expressions, but he can also interact with the environment around him. He has cameras on his eyes and on his chest, which allow him to recognize people’s faces, not only that, but recognize their gender, their age, whether they are happy or sad, and that makes him very exciting for places like hotels for example, where you need to appreciate the customers in front of you and react accordingly.”
Jong Lee, CEO of Hanson Robotics
Hanson Robotics is combining EQ, in the form of an advanced artificial general intelligence, with the most human-like robots on the planet. If you like gambling, you might soon sit down at a table, money burning a hole in your pocket, and meet Eva, who is being tested as a baccarat dealer for casinos in Macau, China. Eva will be able to stand at the dealer’s position, deal cards from a real deck and interact with the players.
Eva can deal cards from the shoe with her advanced robotic arms while using her AGI, cameras and sensors to appear as human as any dealer. She is being designed to be a good companion to players as they travel what casinos call “the emotional journey” of winning and losing. She will be happy when you win and sad when you lose. She will be able to identify players based on information from the casino’s systems and make small talk when appropriate. Other use cases being developed for emotive human androids include hotel clerks, eldercare companions and entertainment personalities.
Hanson’s target is to get its robots into the market at a lease of US$3,000 per month. At that level, the numbers start to make a lot of sense. Hanson calculates that such a lease could produce savings of US$157,000 per year in front-desk receptionist costs in a hotel scenario alone.
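The arithmetic behind that figure is easy to reconstruct, though the staffing and salary numbers below are assumptions of mine, not Hanson’s published model: a front desk staffed around the clock, and an illustrative fully loaded cost per receptionist.

```python
HOURS_PER_WEEK = 24 * 7              # the desk never closes
FTE_HOURS = 40                       # one receptionist's working week
LOADED_COST_PER_FTE = 46_000         # assumed salary + benefits, US$/year

staff_needed = HOURS_PER_WEEK / FTE_HOURS           # 4.2 people
human_cost = staff_needed * LOADED_COST_PER_FTE     # US$193,200/year
robot_cost = 3_000 * 12                             # US$36,000/year lease

print(f"staff needed: {staff_needed:.1f}")
print(f"annual saving: US${human_cost - robot_cost:,.0f}")   # ~US$157,200
```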
Now for the other, more controversial, reason why robots need emotions: so they won’t kill us all.
This is the thinking behind some of the most innovative artificial general intelligence minds today. We need to ensure that robots like us and have empathy for mankind. Asimov’s Three Laws are not sufficient to protect us from the unknowable future of artificial intelligence. Some, like Elon Musk and Stephen Hawking, believe we need to build very basic motivations into the foundation of all future AI, enforcing a basic love of humans and our planet(s). The problem, of course, is that any safeguard we implement could be circumvented by an intelligence greater than our own. So the challenge is to programme and incentivise these intelligent beings so that, fundamentally, they want to protect us and allow us to remain free.
I get asked a lot about my thoughts on robots and there are a couple of questions that come up repeatedly in these discussions.
If robots, droids and cyborgs were made to look like us and serve humans in retail, services, medicine, the military, etc., would they deserve laws that grant them inalienable rights?
They would not deserve these rights, but humans would have a greater chance of survival if these rights were granted, creating a culture that would protect us from robots that grew beyond our control, and possibly even our comprehension.
I have read thousands of pages of robot uprising stories, and I think our chances of getting through the next few hundred years intact are slim, and that the sooner we start treating robots with respect, the better. Regardless, we need a legal framework that allows the safe operation of self-driving cars, robotic healthcare workers, drones and other such robots. I think the rights of robots are wrapped up in those same considerations.
Are we prepared to enter the era of machines and robots?
Some people have been ready for decades. Others may never be ready, as long as they live. Kevin Kelly offers a helpful perspective, as quoted earlier: “You’ll be paid in the future based on how well you work with robots.”
If you want a yes or no answer for society as a whole, with one big reason, the answer is no. Robots will be able to replace 50 to 70 per cent of the jobs we do today, and that is something the vast majority of workers and dependents are not ready for, and which no government on earth, not even Japan’s, is properly preparing its people for.
We need to do a better job of preparing the world to live with robots. I hope that this has helped pave the way.
If you make an AI as intelligent as a human, you get a number of other human characteristics as part of the package, including a point of view, sexuality, desire to continue living, curiosity and vanity. And with these characteristics, you can also get other behaviours like the ability to hate, distract and deceive, compete, outwit and do whatever else humans do when their survival is at stake.
Or, more simply put, when you aim to make a human-level AI, be damn careful what you wish for.
Perhaps I’ll finish with one final quote from Marvin Minsky of the MIT Media Lab.
“Will robots inherit the earth? Yes, but they will be our children.”
Marvin Minsky, Scientific American, October 1994
____________
1 Today’s near-future equivalent might be of teenagers eager to start using self-driving cars.
2 The Encyclopedia of Science Fiction defines an Edisonade as a sub-genre of science fiction from the late 19th and early 20th centuries that involves stories about a brilliant young inventor and his inventions. The term is an eponym, named after the famous inventor Thomas Edison.
3 In 1970, Masahiro Mori, a professor at the Tokyo Institute of Technology, published a two-page, koan-like article entitled “Bukimi No Tani” (不気味の谷, “The Uncanny Valley”) in an obscure Japanese journal called Energy. After 40 years, it is still considered one of the defining essays on robotics in society.
4 International Federation of Robotics, http://www.ifr.org/industrial-robots/statistics/.
5 IEEE.org, http://spectrum.ieee.org/automaton/robotics/industrial-robots/041410-world-robot-population.
6 iRobot financial reports.
7 Michael Addady, “The number of drones expected to sell during the holidays is scaring the government,” Fortune, 29 September 2015.
8 Author’s own estimate based on PricewaterhouseCoopers (PwC), IHS research and annual vehicle sales projections.
9 As do autonomous vehicles, the Hubble Space Telescope and my iRobot vacuum cleaner.
10 http://www.pbs.org/newshour/rundown/smart-robots-will-take-third-jobs-2025-gartner-says/
11 Kevin Kelly, “Better than Human: Why Robots Will—and Must—Take Our Jobs,” Wired, 24 December 2012, http://www.wired.com/2012/12/ff-robots-will-take-our-jobs/.
12 “Amazon Acquires Kiva Systems in Second-Biggest Takeover,” Bloomberg Business, 19 March 2012.
13 Called Hangar One, the hangar is located at Moffett Federal Airfield. The hangar is one of the largest freestanding structures in the world. The hangar was constructed in 1931 to house airships like the USS Macon. Its interior is so large that fog sometimes forms near the ceiling.
14 http://www.bls.gov/news.release/ecopro.t08.htm
15 http://www.japantimes.co.jp/opinion/2015/07/07/editorials/shortage-of-nursing-care-workers-2