1942
ASIMOV’S THREE LAWS OF ROBOTICS
As AI and robotics advance in the coming decades, what constraints or codified laws should be developed to ensure that such entities do not take actions that harm humans? In 1942, author and educator Isaac Asimov (1920–1992) introduced his famous “Three Laws of Robotics” in a short story called “Runaround,” which features a smart robot’s interactions with people. The Three Laws are (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov went on to write many stories that illustrated how these simple laws could have unintended consequences.
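The strict precedence among the three laws can be pictured as a simple rule cascade. The sketch below (in Python, with hypothetical names such as Action and permitted) is only a toy illustration of that ordering, not anything from Asimov’s stories; even this tiny model hints at the trouble his fiction explores, since deciding what counts as “harm” or “inaction” is left entirely to the predicates.

```python
# Toy model of Asimov's Three Laws as a strict priority ordering.
# Every name here (Action, permitted, the boolean fields) is hypothetical;
# a real robot would have to *predict* harm, which is the hard part.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool         # would this action injure a person?
    allows_human_harm: bool   # would it let a person come to harm through inaction?
    ordered_by_human: bool    # was it commanded by a person?
    destroys_robot: bool      # would it destroy the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human or, through inaction, allow harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, except where they would conflict
    # with the First Law (already screened out above).
    if action.ordered_by_human:
        return True
    # Third Law: preserve itself, unless that conflicts with the first two.
    return not action.destroys_robot

if __name__ == "__main__":
    rescue = Action("pull a person from a fire", harms_human=False,
                    allows_human_harm=False, ordered_by_human=True,
                    destroys_robot=True)
    print(permitted(rescue))  # True: the Second Law outranks self-preservation
```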
Later, he added a fourth law, the Zeroth Law, which takes precedence over the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” These laws have been influential not only for science-fiction writers but also for AI experts. AI researcher Marvin Minsky (1927–2016) noted that after encountering Asimov’s laws, he “never stopped thinking about how minds might work. Surely we’d someday build robots that think. But how would they think and about what? Surely logic might work for some purposes, but not for others. And how to build robots with common sense, intuition, consciousness and emotion? How, for that matter, do brains do those things?”
The laws are notable and useful for the countless questions they raise. What other laws might we add to the Asimov set? Should robots never pretend to be human? Should robots “know” that they are robots? Should they always be able to explain why they acted as they did? What if a terrorist used multiple robots to harm people, with no single robot knowing the entire plan and thus none violating the First Law? We might also consider how these laws would affect robotic army medics who must perform triage when they cannot tend to every injured soldier, or autonomous vehicles that must decide whether to crash into playing children or drive off a cliff and kill a passenger. Finally, could a robot really determine what it means to “harm humanity,” given that its actions could have repercussions for years into the future?
SEE ALSO Lethal Military Robots (1942), Ethics of AI (1976), Blade Runner (1982), Autonomous Vehicles (1984)