[TWENTY-ONE]
A ROBOT REVOLT? TALKING ABOUT ROBOT ETHICS
DYSON: You’re judging me on things I haven’t even done yet. Jesus.
How were we supposed to know?
—TERMINATOR 2: JUDGMENT DAY
“Any machine could rebel, from a toaster to a Terminator, and so it’s crucial to learn the common strengths and weaknesses of every robot enemy. Pity the fate of the ignorant when the robot masses decide to stop working and to start invading.”
Daniel Wilson’s fascination with robots began when he was young. “As a kid, I fell in love with Transformers, but all my parents could afford were crappy Go-Bots. Did I care? No. A robot is a robot.” By the time Wilson hit middle school, “I fell in love again, this time with Vickie, a child star who played an android girl on the TV sitcom Small Wonder.” Wilson went on to get a PhD in robotics from Carnegie Mellon University, and has worked on projects for Microsoft and Intel.
While working on his doctorate, Wilson decided to try his hand at book writing. The result was How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion. Wilson’s book was essentially a faux guide to robot revolt, based on real technology. It goes through all sorts of Hollywood scenarios of how robots might try to take over the Earth and then shows how a real roboticist would respond. Wilson is chock full of helpful advice for “when the robots inevitably come.” He details the warning signs that one should look for to know whether your robot is planning a rebellion (“sudden lack of interest in menial labor” and “repetitive stabbing movements”), how to detect robot imposters (“Does your friend smell like a brand-new soccer ball?”), how to escape a robot chasing you (distract it by throwing decoys and obstacles in its path; “Just check twice before you toss the baby seat out of the window”), and a series of real-world technologies and tactics useful for fighting back against our future robot foes (from EMPs to radio-frequency pulse guns).
Wilson doesn’t really think that your Roomba is poised to suck your breath away while you sleep. As he explains, “I believe the chance of a Hollywood-style robot uprising happening is about as likely as a Hollywood-style King Kong attack on New York City.” On the other hand, “Humans are designing plenty of all-too-real robots to do things like ‘neutralize enemy combatants,’ or ‘increase troop survivability.’ Is it just me, or does that sound suspiciously like ‘KILL, KILL, KILL?’ ”
ROBO-FEAR
Wilson originally wrote his book to “strike back at Hollywood,” mocking its many inaccurate portrayals of both robots and the people who make them. Hollywood instead ate it up, and the young roboticist ended up selling the rights to Paramount, where it is presently being turned into a Mike Myers movie. Wilson’s story, though, goes from amusing to odd when he mentions in an aside that he has lectured at the U.S. Military Academy and done work for the Northrop Grumman defense firm.
Wilson’s lighthearted take actually taps into a longer history of genuine fear over what our man-made creations might do to us one day. As far back as 1863, the English scholar Samuel Butler weighed in on the heated debate that Charles Darwin had opened about human evolution. In “Darwin Among the Machines,” Butler argued that the scholars arguing over evolution should look forward rather than back. “Who will be man’s successor? To which the answer is: We are ourselves creating our own successors. Man will become to the machine what the horse and dog are to man.”
Today, the concept of machines replacing humans at the top of the food chain is not limited to stories like The Terminator or Maximum Overdrive (the Stephen King movie in which eighteen-wheeler trucks conspire to take over the world, one truck stop at a time). As military robotics expert Robert Finkelstein projects, “within 20 years” the pairing of AI and robotics will reach a point of development where a machine “matches human capabilities. You [will] have endowed it with capabilities that will allow it to outperform humans. It can’t stay static. It will be more than human, different than human. It will change at a pace that humans can’t match.” When technology reaches this point, “the rules change,” says Finkelstein. “On Monday you control it, on Tuesday it is doing things you didn’t anticipate, on Wednesday, God only knows. Is it a good thing or a bad thing, who knows? It could end up causing the end of humanity, or it could end war forever.”
Finkelstein is hardly the only scientist who talks so directly about robots taking over one day. Hans Moravec, director of the Robotics Institute at Carnegie Mellon University, believes that “the robots will eventually succeed us: humans clearly face extinction.” Eric Drexler, the engineer behind many of the basic concepts of nanotechnology, says that “our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short.” Freeman Dyson, the distinguished physicist and mathematician who helped jump-start the field of quantum electrodynamics (and inspired the character of Dyson in the Terminator movies), states that “humanity looks to me like a magnificent beginning, but not the final word.” His equally distinguished son, the science historian George Dyson, came to the same conclusion, but for different reasons. As he puts it, “In the game of life and evolution, there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” Even inventor Ray Kurzweil of Singularity fame gives humanity “a 50 percent chance of survival.” He adds, “But then, I’ve always been accused of being an optimist.”
Scientists’ fears are not merely that machines will “surpass” humans and then peacefully, logically take over, as in Asimov’s I, Robot. Instead, many believe that future AI might have some evil intent, or even worse. Marvin Minsky, who cofounded MIT’s artificial intelligence lab, believes that we humans are so bad at writing computer software that it is all but inevitable that the first true AI we create will be “leapingly, screamingly insane.”
The refuseniks had concerns that the military might misuse their research. These scientists’ concerns reach a whole new level. Some just accept it as an unavoidable consequence of their research that their creations will one day surpass humans and even order them about. Professor Hans Moravec observes, “Well, yeah, but I’ve decided that’s inevitable and that it’s no different from your children deciding that they don’t need you. So I think that we should gracefully bow out—ha, ha, ha. . . . But I think we can have a pretty stable, self-policing system that supports us, though there would be some machines which were outside the system, which means became wild. I think we can co-exist comfortably and live in some style for a while at least.”
Others believe that we must take action now to stave off this kind of future. Bill Joy, the cofounder of Sun Microsystems, describes himself as having had an epiphany a few years ago about his role in humanity’s future. “In designing software and microprocessors, I have never had the feeling I was designing an intelligent machine. The software and hardware is so fragile, and the capabilities of a machine to ‘think’ so clearly absent that, even as a possibility, this has always seemed very far in the future.... But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of technology that may replace our species. How do I feel about this? Very uncomfortable.”
WHEN SHOULD WE SALUTE OUR ROBOT MASTERS?
These fears of robot rebellion go back to Karel Capek’s very first use of the word “robot” in his play R.U.R. (Rossum’s Universal Robots). His choice of the word was deliberate, as he knew that the original “robotniks,” the Czech serfs, had rebelled against their masters in 1848. This theme continued in science fiction, such as HAL in 2001, which kills its human crew and decides to take over, or A.M. of the Harlan Ellison story “I Have No Mouth, and I Must Scream.” A.M. stood for “Allied Mastercomputer,” as it was an AI designed for coordinating defenses, just like the real-world battle management systems of today. But the name Ellison gave to the computer was also a reference to Descartes’ “I think, therefore I am.” Once it evolves in thinking power, A.M. decides to launch a war and torture the human survivors for sport.
A machine takeover is generally imagined as following a path of evolution to revolution. Computers eventually develop to the equivalent of human intelligence (“strong AI”) and then rapidly push past any attempts at human control. Ray Kurzweil explains how this would work. “As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI, but takes less time than the cycle before it as is the nature of technological evolution. The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating super-intelligence.” Or as the AI Agent Smith says to his human adversary in The Matrix, “Evolution, Morpheus, evolution, like the dinosaur. Look at that window. You had your time. The future is our world, Morpheus. The future is our time.”
This evolution turns into a revolution at the point at which machine intelligence starts to act on its own, beyond its human programmers’ original intent. Many see this runaway as inevitable. As military robotics pioneer Robert Finkelstein describes, “The first thing it [true AI] likely does within nanoseconds is jump into the Internet, because of the access to unlimited computing resources. We won’t be able to stop it. The military will only reach a point of concern when it fails to work like we want it to. But that is too late.”
Of course, many feel that the fears of a machine rebellion should stay put in the realms of humor and science fiction. Rod Brooks of iRobot, for example, says that a robot takeover “will never happen. Because there won’t be any us (people) for them (pure robots) to take over from.” His explanation is not merely that the idea is hogwash, but that there is also an ongoing convergence of humans and machines through technological implants and enhancements. By the time machines advance enough to reach the level of intelligence that those who fear a revolt think is necessary for machines to want to take over, people will be carrying computers around in their brains and bodies. That is, the future isn’t one of machines separate from humans, plotting our demise. Rather, Brooks thinks it may instead yield a symbiosis of AI and humans. Others think that this still could yield conflict, pointing to the parallels in the Dune series of novels, where such “enhanced” cyborgs and strong AI fight it out, with regular old humans caught in the middle.
This debate, in both science and science fiction, will likely go on as long as robots are around, or until Skynet orders us meat puppets to shut up and get back to work. From my perspective as a security analyst, however, the only way to evaluate the actual viability of a robot revolt is to look at what exactly would be needed for machines to take over the world. Essentially, four conditions would have to be met. First, the machines would have to be independent, able to fuel, repair, and reproduce themselves without human help. Second, the machines would have to be more intelligent than humans, but have no positive human qualities (such as empathy or ethics). Third, they would, however, have to have a survival instinct, as well as some interest in and will to control their environment. And, fourth, humans would have to have no useful control interface into the machines’ decision-making. They would have to have lost any ability to override, intervene, or even shape the machines’ decisions and actions.
Each of these seems a pretty high bar to clear, at least over the short term. For example, while many factories are becoming highly automated, they all still require humans to run, support, and power them. Second, machines may well reach human-level intelligence someday, even soon, but it is not certain. In turn, there is a whole field, social robotics, at work on giving thinking machines the positive human qualities, such as empathy or ethics, that would undermine this scenario even if strong AI were achieved. Third, most of the focus in military robotics is on using robots as replacements for human losses, the very opposite of giving them any sort of survival instinct or will to control. Fourth, with so many people spun up about the fears of a robot takeover, the idea that no one would remember to build in any fail-safes is a bit of a stretch. Finally, the whole idea of a robot takeover rests on a massive assumption: that just when the robots are ready to take over humanity, their Microsoft Windows programs won’t freeze up and crash.
Of course, eventually a super-intelligent machine would figure out a way around each of these barriers. In the Terminator storyline, for example, the Skynet computer is able to trick or manipulate humans into doing the sorts of things it needs (for example, e-mailing false commands to military units), as well as rewrite its own software. However, Rod Brooks makes perhaps the most important point for seriously evaluating these fears: if it ever does happen, humanity will likely not be caught off guard, as in the movies. You don’t get machines that are beyond control without first going through a stage of machines that are only partly under control. So we should have some pretty good warning signs to look out for; that is, beyond Daniel Wilson’s helpful suggestion to monitor your robot for any “repetitive stabbing movements.”
The whole issue of humankind losing control to machines may instead need to be looked at in another way. For all the fears of a world where robots rule with an iron fist, we already live in a world where machines rule humanity in another way. That is, the Matrix that surrounds us is not some future realm where evil robots look at humans as a “virus” or “cattle.” Rather, we’re embedded in a matrix of technology that increasingly shapes how we live, work, communicate, and now fight. We are dependent on technology that most of us don’t even understand. Why would machines ever need to plot a takeover when we already can’t do anything without them?
ROBOT INSURANCE
If any place should be concerned with a robot takeover, it is the red-light district.
Few robotics firms issue press releases about their latest multimillion-dollar pornography contract. But just as pornography helped launch such common consumer products as digital cameras, instant messaging, Internet chat rooms, online purchasing, streaming video, and webcams, many experts in robotics believe that sex will drive many of the commercial advances in robotics, because, well, sex sells. On a number of occasions, scientists I interviewed about military robotic systems would quietly ask at the end whether I was also looking into the “robotic sex” sector. One dirty old scientist even described it as “something we all await with excitement.”
Henrik Christensen, a member of the Robotics Research Network ethics group, explains the simple rationale for why he thinks the robot sex industry will take off in the next decade. “People are [already] willing to have sex with inflatable dolls, so initially anything that moves will be an improvement.” Christensen raises this not because he is excited about such a prospect, but because he is concerned about whether society is prepared for the ethical dilemmas that this trend will bring. For example, should limits be placed on the appearances of such robotic systems? Christensen believes “it is only a matter of time” before sexbots are made to look like children. “Pedophiles may argue that those robots have a therapeutic purpose, while others would argue that they only feed into a dangerous fantasy.” Likewise, what happens as robots become more sophisticated, and have self-learning mechanisms built into them? Are these the sorts of “experiences” we want intelligent machines learning from, and what will be the impact on how they then behave?
Professor Ronald Arkin, a roboticist at the Georgia Institute of Technology, has been one of the few scientists to go into depth on the various ethical issues looming from robotics advancement. To him, the issues—not just in sex, but in war—revolve around one key question: What are the boundaries, if any, between human-robot relationships? This question, he explains, lays open a series of ethical concerns that must be dealt with soon, perhaps even in a moral code developed by humans but embedded in our robots. “How intimate should a relationship be with an intelligent artifact?” “What authority are we going to delegate to these machines?” “Should a robot be able to mislead or manipulate human intelligence?” “What, if any, level of force is acceptable in physically managing humans by robotic systems?” “What is the role of lethality in the deployment of autonomous systems by the military?”
But this is only to look at the issue of what robots should be programmed to do. Another ethical concern is the reverse: what humans should be allowed to do with robots. For example, what should be done with the massive amounts of data that robots will collect, data that will invariably be uploaded online and which might be used against people? Explains iRobot’s Rod Brooks, “I am sure there will be new dilemmas, just as happens with every new technology. No one expected computers to bring so many concerns of privacy. Robotics will bring even more of the privacy concerns.” The Los Angeles Police Department, for instance, is already planning to use drones that would circle over certain high-crime neighborhoods, recording all that happens. Other government agencies and even private companies are purchasing smaller drones able to land on windowsills and “perch and stare” at the humans inside. With all this observation, Andy Warhol’s description of fame may have to be reworked for the twenty-first century, says IT security expert Phil Zimmermann. “In the future, we’ll all have fifteen minutes of privacy.” Indeed, when he talks about this aspect of the future of robotics, author Daniel Wilson goes from humorous to ominous. “That is what scares the shit out of me.”
YOU’LL HAVE TO PRY THIS ROBOT OUT OF MY COLD, DEAD HANDS
Given the depth and extent of problems that further advancement in robotics and AI might raise, from the machine-led destruction of humanity to the world learning that you are a thirty-two-year-old closet Gilmore Girls fanatic, many think that the best ethical answer is to stop the research altogether. As Bill Joy argues, “The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.”
Proponents of relinquishment argue that it is not without precedent for people and countries to forgo researching or making certain technologies, even when they could yield great weapons. Most nations on the planet, for example, have chosen not to build nuclear weapons, even though it would offer them immense power, and all have agreed not to engage in biological weapons research anymore. Indeed, the situation with robotics and other unmanned technologies appears easier to resolve. Unlike the powerful states we faced during the Manhattan Project or the bioweapons research that took place during the cold war, argues Bill Joy, “We aren’t at war, facing an implacable enemy that is threatening our civilization; we are driven, instead, by our habits, our desires, our economic system, and our competitive need to know.”
Yet this ignores that many do feel we are at war with an implacable foe, and that this sense of military need is driving much of the research. Moreover, good old-fashioned human nature also would get in the way of attempts at self-restraint. “We are curious as a species,” observes Dr. Miguel Nicolelis, a Brazilian scientist whose research has linked a monkey’s brain to a two-hundred-pound walking robot. “That is what drives science.” It is not merely that humans can’t stop themselves from experimenting with technology, but that constantly pushing the envelope is the very essence of science. As Albert Einstein famously said, “If we knew what it was we were doing, it would not be called research, would it?”
Even if the world were able to come to an unlikely consensus to set up some sort of system to stop researchers from working on new technologies (like the “Turing Police” in William Gibson’s novels, who hunt down anyone who works on strong AI), there would still likely be work going on, just hidden away. There is simply too much money to be made, and too many motivated actors, not just for military applications but in everything from transportation and medicine to games and toys, for robotics and AI research to stop anytime soon. As one analyst put it, “We would have to repeal capitalism and every vestige of economic competition to stop this progression.”
The challenge grows even bigger as advanced technologies migrate from the research labs to the military to the open market. As one blogger describes, “We’ll be chasing our fucking tails about Lego robotics sets and the kids ‘CSI’ DNA testing kits they’re selling at Target.” AI expert Robert Epstein draws a parallel to the problem of illegal computer file downloading. While the music and movie industries have tried everything from creating new laws and launching heavy-handed lawsuits against college students to a public relations “shaming” campaign, people still keep on downloading pirated music, movie clips, and TV shows. “No matter what we do, there will always be something happening outside of that. And it will be huge.”
PLAN FOR SUCCESS (AND FAILURE)
“You can’t say it’s not part of your plan that these things happened, because it’s part of your de facto plan. It’s the thing that’s happening because you have no plan. . . . We own these tragedies. We might as well have intended for them to occur.”
William McDonough was writing about environmental issues, but his statement is frequently cited by those concerned about the future of the robotics field. It perfectly captures that while relinquishment may not be an option, there is no excuse for failing to plan ahead. As nanotech expert Eric Drexler puts it, “We’ve got to be pro-active, not just reactive.”
In facing this, “There are two levels of priority,” says Gianmarco Verruggio, of the Institute of Intelligent Systems for Automation. “We have to manage the ethics of the scientists making the robots and the artificial ethics inside the robots.”
On the human side, “managing” the ethics is hindered by the absence of professional codes or traditions that robotics scientists might look to when trying to figure out the ethical solution to a difficult science problem. Almost no technical schools require any sort of ethics classes and the robotics field certainly has nothing equivalent to the medical profession’s Hippocratic oath. Even worse, the sort of twenty-first-century questions that people working with robots and AI care about aren’t really dealt with in the broader fields of philosophy or ethics. There are few experts or resources for them to turn to, let alone any sort of consensus.
Even if you were an inventor, funder, or developer who wanted to do the moral thing, as Nick Bostrom, director of the Future of Humanity Institute at Oxford University, explains, you wouldn’t have any ready guides. “Ethicists have written at length about war, the environment, our duties towards the developing world; about doctor-patient relationships, euthanasia, and abortion; about the fairness of social redistribution, race and gender relations, civil rights, and many other things. Arguably, nothing humans do has such profound and wide-ranging consequences as technological revolutions. Technological revolutions can change the human condition and affect the lives of billions. Their consequences can be felt for hundreds if not thousands of years. Yet, on this topic, moral philosophers have had precious little to say.”
For many, the obvious guide would be to follow science fiction and simply mandate that all systems obey Isaac Asimov’s “Three Laws of Robotics.” Asimov’s laws originally comprised three guidelines for machines. Law One is that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Law Two states that “a robot must obey orders given to it by human beings except where such orders would conflict with the First Law.” And Law Three mandates that “a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.” In later stories Asimov added the “Zeroth Law,” which stands above all the others: “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
There are only three problems with these laws. The first is that they are fiction, a plot device that Asimov made up to help drive his stories. Indeed, his tales almost always revolved around robots that follow the laws but then go astray, and the unintended consequences that result. An advertisement for the 2004 movie adaptation of Asimov’s famous book I, Robot put it best: “Rules were made to be broken.”
For example, in one of Asimov’s stories, robots are made to follow the laws, but they are given a certain meaning of “human.” Prefiguring what now goes on in real-world ethnic cleansing campaigns, the robots only recognize people of a certain group as “human.” They follow the laws, but still carry out genocide.
The second problem is that no technology can yet replicate Asimov’s laws inside a machine. As Rodney Brooks puts it, “People ask me about whether our robots follow Asimov’s laws. There is a simple reason [they don’t]. I can’t build Asimov’s laws in them.” Daniel Wilson is a bit more florid. “Asimov’s rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that? ”
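Wilson’s point can be made concrete with a purely hypothetical sketch. The priority ordering of the laws is trivial to write down as code; what no one knows how to write are the judgments the laws depend on, so each one below is left as a stub that simply gives up.

```python
# A purely hypothetical sketch: Asimov's laws written as an ordered veto list.
# The ordering is easy to encode; the judgments the laws depend on are not.
# (Even this ordering glosses over the laws' "except where it conflicts" clauses.)

class CannotEvaluate(NotImplementedError):
    """Raised because nobody knows how to compute these judgments in software."""

def harms_a_human(action) -> bool:
    raise CannotEvaluate("What counts as 'harm'? For that matter, what counts as 'human'?")

def disobeys_a_human_order(action) -> bool:
    raise CannotEvaluate("Which orders, from whom, interpreted how?")

def endangers_itself(action) -> bool:
    raise CannotEvaluate("Requires the robot to reason about its own survival.")

def permitted(action) -> bool:
    # Law One outranks Law Two, which outranks Law Three.
    if harms_a_human(action):
        return False
    if disobeys_a_human_order(action):
        return False
    if endangers_itself(action):
        return False
    return True
```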
Finally, much of the funding for robotics research comes from the military. It explicitly wants robots that can kill, won’t take orders from just any human, and don’t care about their own lives. So much for Laws One, Two, and Three.
While there is no Asimov-like code embedded in robots yet, that doesn’t mean that those in the field think robots should be designed without some sort of guidelines. Indeed, just as it would for any other consumer product, Japan’s Ministry of Trade and Industry has set up a series of rules for the design of office and home robots. Every robot must have sensors that prevent it from colliding with humans by accident, be made of softer materials at contact points, and have an emergency shutoff button. These rules came about only after Japanese authorities “realized during a robot exhibition that there are safety implications when people don’t just look at robots but actually mingle with them.”
The rules in Japan parallel a growing concern among robot makers about the financial costs that would come from a robot screwing up. As one executive put it, “You don’t want to tell your management ‘We had a bad day yesterday; our system killed four civilians by accident.’ ” Thus, the most powerful incentives for building precautions into robot designs are now mainly coming from the marketplace. “There is a lot of push to make these things damn safe,” says Rod Brooks. He goes on to detail the three different sensors put in his company’s Roomba vacuum cleaner to make sure it doesn’t fall down the stairs. “If you have a multipound robot crashing down the stairs, it can get pretty bad . . . and not just for the robot.”
While it is good that businesses are starting to think this way, it is certainly not enough. Any ethical codes and safeguards that come mainly from the fear of lawyers and lawsuits are not going to be sufficient for civilian robots, let alone for robots in war. As science fiction writer Robert Sawyer puts it, “Businesses are notoriously uninterested in fundamental safeguards—especially philosophic ones. A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none have accepted an absolute edict against ever causing harm to humans.”
With this huge gap in ethics, there is a growing sense in the robotics field that scientists will soon have to start to weigh the implications of their work and take seriously their moral responsibilities, particularly as their inventions shape humanity’s future. As Bill Joy puts it, “We can’t simply do our science and not worry about these ethical issues.”
ROBOT RULES
In the TV comedy The Office, the witless character Dwight Schrute describes the perfect design for a robot. “I gave him a six-foot extension cord, so he can’t chase us.”
In building real-world safeguards for robots, something a bit more complex will be needed. Many roboticists describe the need for an ethic of “design ahead,” which tries to take into account all the various problems that might arise, and set up systems and controls to avoid them.
There are a number of useful starting points for this design ethic. One is that the design of robots should be as predictable as possible (perhaps contrary to the growing interest in evolutionary designs). As Daniel Wilson puts it, “There is no sense in having any dangerous features . . . unless you want them.” That is, the system should work the way it was originally designed to, all the time, rather than being able to change itself over time into something new, unexpected, and thus potentially dangerous.
With machine autonomy growing, mechanisms that ensure a human can take control of and shut down a robot must also be built in. But contrary to Asimov’s original laws, which entail that any human must be able to order about any robot they meet, the controls of real-world robots should be designed to place limits on who can act as their masters. In a world of hackers and the like, we should aim not merely for control but also for security, so that robots can’t easily be hijacked or reprogrammed for wrongful or illegal use.
Wherever possible, multiple redundancies should be built into any system. “Redundancy can bring an exponential explosion of security,” says Eric Drexler. He explains with an illustration. Imagine a suspension bridge, like the Golden Gate Bridge, that needs five cables to stay up. On average, each cable runs the risk of being broken one day out of every 365. If the designers of the bridge use one extra cable as backup (so, six total), the bridge would be expected to last ten years. If they add just five cables as insurance (ten total), the bridge shouldn’t lose enough cables to fall in a million years. A little insurance goes a long way.
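Drexler’s arithmetic can be checked with a toy model. The sketch below assumes that each cable independently has a one-in-365 chance of being broken on any given day and that the bridge stands so long as at least five cables hold; the exact lifetimes it prints won’t match Drexler’s rounded figures, but it shows the same exponential payoff from each added cable.

```python
# A toy model of Drexler's bridge illustration, under simple independence
# assumptions. The precise numbers are illustrative; the point is how fast
# the expected time to failure grows with each redundant cable.
from math import comb

P_BROKEN = 1 / 365   # assumed chance a single cable is down on a given day
NEEDED = 5           # cables required to hold the bridge up

def daily_collapse_probability(total_cables: int) -> float:
    """Probability that more cables are broken today than the bridge can spare."""
    spare = total_cables - NEEDED
    return sum(
        comb(total_cables, k) * P_BROKEN**k * (1 - P_BROKEN)**(total_cables - k)
        for k in range(spare + 1, total_cables + 1)
    )

for total in range(5, 11):
    p = daily_collapse_probability(total)
    years = (1 / p) / 365   # rough expected years between collapse-level failures
    print(f"{total:2d} cables -> a collapse expected roughly every {years:,.1f} years")
```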
Scientists are also starting to recognize that information itself carries risks. This goes for the design of systems (anything that might be dangerous should not be open-source, where anyone could potentially copy, build, and misuse it), as well as whatever information is collected by the systems (data should not be publicly sharable unless there is a compelling need). In turn, there must be some required mechanism that allows information on the robot’s activity to be stored and collected by public authorities. That is, the only way to ensure accountability if something goes wrong with a system is for each and every robot to have a unique identifier, even something as simple as a bar code, as well as traceability to track the actions that the system took.
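As one hedged illustration of what such traceability might look like in practice, the sketch below gives each robot a unique identifier and appends every action it takes to a hash-chained log, so that an audit after an incident could detect tampering with the record. The record fields and chaining scheme are invented for illustration; they are not any existing standard.

```python
# A hypothetical sketch of the traceability idea above: each robot carries a
# unique identifier, and every action is appended to a hash-chained log so
# that tampering with the record after an incident would be detectable.
import hashlib
import json
import time
import uuid

class ActionLog:
    def __init__(self) -> None:
        self.robot_id = str(uuid.uuid4())   # the robot's "bar code"
        self.entries: list[dict] = []
        self._last_hash = "0" * 64          # starting value for the chain

    def record(self, action: str, details: dict) -> None:
        entry = {
            "robot_id": self.robot_id,
            "timestamp": time.time(),
            "action": action,
            "details": details,
            "prev_hash": self._last_hash,   # links this entry to the one before it
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest

# Example: a home robot logging actions that authorities could later audit.
log = ActionLog()
log.record("navigate", {"room": "kitchen"})
log.record("emergency_stop", {"reason": "human pressed the shutoff button"})
```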
In the long term, some scientists even hope that an amended form of Asimov’s behavior rules might be required in robots’ software. This would mean robot makers would have to look at design in a whole new way, not reactively trying to avoid lawsuits, but proactively trying to build in greater respect for the law and ethics. Georgia Tech’s Ronald Arkin, for example, writes that autonomous systems in future wars might be endowed “with a ‘conscience’ that would reflect the rules of engagement, battlefield protocols such as the Geneva Conventions, and other doctrinal aspects that would perhaps make them more ‘humane’ soldiers than humans.” Of course, while a machine may be guided by ethical rules, this does not make it an ethical being. Software codes are not a moral code; zeros and ones have no underlying moral meaning.
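As a toy illustration only, and not Arkin’s actual design, such a “conscience” can be pictured as a software layer that vets each proposed action against explicit constraints before the machine is allowed to act. The constraint names and fields below are invented placeholders standing in for rules of engagement, and, as noted above, wiring in checks like these would not make the machine a moral agent.

```python
# A toy illustration (not Arkin's actual architecture) of an embedded
# "conscience": proposed actions are checked against explicit constraints
# before the platform may act. All rule names and fields are invented.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_type: str               # e.g., "armed_vehicle", "person", "building"
    near_protected_site: bool      # e.g., within range of a hospital
    positive_identification: bool  # target confirmed as a lawful military objective

# Illustrative stand-ins for rules of engagement and battlefield protocols.
CONSTRAINTS = [
    ("no engagement without positive identification",
     lambda a: a.positive_identification),
    ("no engagement near protected sites",
     lambda a: not a.near_protected_site),
    ("do not target persons in this simplified model",
     lambda a: a.target_type != "person"),
]

def conscience_permits(action: ProposedAction) -> tuple[bool, list[str]]:
    """Return whether the action is allowed and list any constraints it violates."""
    violations = [name for name, check in CONSTRAINTS if not check(action)]
    return (not violations, violations)

# Example: the governor vetoes a strike that lacks positive identification.
allowed, why_not = conscience_permits(
    ProposedAction(target_type="armed_vehicle",
                   near_protected_site=False,
                   positive_identification=False)
)
```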
The key in all this is that ethics apply not just to the machines but also to the people behind them. Scientists must start to conduct themselves by something equivalent to the guidelines that the ancient Greek physician Hippocrates laid out for future generations of doctors: “Make a habit of two things—to help, or at least to do no harm.” Martin Rees, the United Kingdom’s Astronomer Royal (a position that is something like the top science adviser to the queen), calls for the implementation of the “precautionary principle.” It isn’t that scientists should stop their research altogether if anything bad might happen, but rather that they must start making a good-faith effort to prevent the bad effects that might come from their inventions.
These kinds of guidelines won’t arise overnight, but many scientists note that there already are models of how they might come about in high-tech fields. In the 1980s, for example, there was huge consternation over the Human Genome Project. Geneticists knew that their research could save literally millions of lives, but they also began to worry about all the various ethical and legal questions that the increased availability of genetic information would cause. Who “owned” the genes? What could be patented or not? How much of the information should be shared with the government, police, insurance companies, and other institutions?
The geneticists knew they didn’t have the answers to such thorny questions, and that they should not try to answer them on their own. So they took the interesting step of setting aside 5 percent of the project’s annual budget for a multidisciplinary program to “define and deal with the ethical, legal and social implications raised by this brave new world of genetics.”
The world of genetics began this program at the very start of its research, which means that it is now years ahead of the world of robotics in the depth of its ethical discussions. This gap becomes even more unsettling with robotics’ growing use in war. As General Omar Bradley once said, we have given ourselves the destructive power of “giants,” while remaining “ethical infants.” “We know more about war than we know about peace, more about killing than we know about living.”
Just as with the new laws of war, the research and discussion on the ethics of robotics and roboticists can’t be limited to the scientists. “We have reached a point where technology development can no longer flourish in a policy vacuum,” describes analyst Neal Pollard. Scientists often fail to consider the policy ramifications of their research. In turn, says one research center director, scientists “don’t have a seat at the table” when scientific issues are discussed in the halls of power in Washington.
If the dialogue between the policy world and science doesn’t occur, a double whammy results. The good prescriptions that might come out of the scientific world are unlikely to go anywhere without political support. In turn, an uninformed political world might take decisions that could make things worse.
One answer may be to require new unmanned systems to have a “human impact statement” before they enter production, analogous to the environmental impact statements now required of new consumer products and buildings. This will not only embed a formal reporting mechanism into the policy process, but also force tough questions to be asked early on. Indeed, if we are concerned enough about the spotted owl to require studies about potential environmental harms before a new consumer product is released, we should be concerned enough about humanity to require the same reporting on the legal, ethical, and social questions of our new cleaning and killing machines.
Ultimately, government is both of and for the people. The burden of weighing the ethical issues of our new technologies is shared not just by researchers and policymakers, but also by the wider public. Too often, when issues of robot ethics are raised, it comes across as science fiction, and is all too ripe for the kind of mocking that Daniel Wilson did so well. Indeed, this is perhaps why so many roboticists avoid talking about the issue altogether. That is an ethical shame.
Robots may not be poised to revolt, but robotic technologies and the ethical questions they raise are all too real. For scientists, policymakers, and the rest of us to ignore the issue only sets us up for a terrible fall down the line. As military robotics expert Robert Finkelstein explains, many may want to “think that the technology is so far in the future that we’ll all be dead. But to think that way is to be brain dead now.”