A robot is a machine that has been designed to do things that people and animals do. Some robots work in car factories, assembling and painting automobiles. Other robots mow lawns and vacuum carpets. Still others play games with people.
Robots come in all shapes and sizes. The world’s tiniest robot can sit on a dime and has an onboard computer. It moves around on treads like an army tank. Its inventor says it can be equipped with a video camera, a microphone, or chemical sensors. One of the first things it will probably be used for is spying on people.
The largest robot in the world weighs 3,500 tons (as much as a million cats) and stands almost 250 feet tall (as high as a dozen two-story houses stacked on top of each other). It is designed to remove rocks from coal mines, and it can scoop up 150 tons of crushed rock at a time.
The word “robot” was introduced by a Czechoslovakian author named Karel Capek. In 1921, he produced his play R.U.R. (Rossum’s Universal Robots), which featured machines designed to look and act like people. Capek derived the word “robot” from robota, the Czech word for drudgery or forced labor. In Capek’s story, the robots revolt against the human beings and take over the planet. But there’s no reason for that to happen in the real world. Humans and robots can get along well with each other if we are patient with each other’s special needs.
In 1942, another science fiction author, Isaac Asimov, introduced three laws that he thought should be programmed into all robots in order to make them safe and obedient. Here they are, with a small sketch after the list of how a robot program might check them in order:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
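Notice that the laws form a priority list: the First Law outranks the Second, and the Second outranks the Third. Just as a thought experiment (this sketch is mine, not Asimov’s, and the yes/no questions it asks are made up for illustration), here is how a robot program might check a proposed action against that priority order:

```python
# A simplified thought experiment: check a proposed action against the
# Three Laws in priority order. The questions in the dictionary are made
# up for illustration; a real robot could not answer them so easily.

def action_is_allowed(action):
    # First Law comes first: never harm a human, by action or inaction.
    if action.get("would_harm_a_human") or action.get("would_let_a_human_come_to_harm"):
        return False
    # Second Law: don't disobey a human order. (Asimov adds an exception when
    # obeying would break the First Law; this sketch leaves that out.)
    if action.get("disobeys_a_human_order"):
        return False
    # Third Law: don't needlessly destroy yourself. (Again, the real law makes
    # an exception when Laws One or Two demand the sacrifice.)
    if action.get("would_destroy_the_robot"):
        return False
    return True

# Fetching a ball as asked, harming no one and nothing: allowed.
print(action_is_allowed({"disobeys_a_human_order": False}))          # True
# Standing by while someone comes to harm: forbidden by the First Law.
print(action_is_allowed({"would_let_a_human_come_to_harm": True}))   # False
```

The hard part, of course, is not the checklist but answering the questions, which is why the Three Laws are much easier to write down than to build.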
Robbie, a robot featured in Asimov’s famous collection of stories, I, Robot, followed the Three Laws. Robbie’s owner said of his robot, “He can’t help being faithful, loving, and kind. He’s a machine—made so.”
Many children who read Asimov’s exciting science fiction stories later grew up to become robot scientists, and they tried to incorporate the Three Laws of Robotics into the robots they created.
But in the sixty years since Asimov laid down the rules for robots, scientists have started to think in new ways about creating robots. To make smart robots that are able to adapt themselves to changing conditions, some roboticists say that we need to allow robots to evolve in the same way that life evolves on Earth. To do this, we need to stop trying so hard to control a robot’s every action, and instead allow it to try new things on its own and learn from its experiences.
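To give a feel for what “letting robots evolve” means in practice, here is a toy sketch (my own, not from this book) of the basic loop: try many slightly different candidate settings, keep the ones that do best, and make mutated copies of the survivors. The “robot” here is just a single number being nudged toward a target speed, standing in for a real robot behavior.

```python
# A toy evolutionary loop: random candidates, keep the best, mutate, repeat.
import random

TARGET_SPEED = 1.0

def fitness(setting):
    # Higher is better: the closer to the target speed, the better this "robot" did.
    return -abs(setting - TARGET_SPEED)

def evolve(generations=50, population_size=10):
    # Start with random candidate settings between 0 and 2.
    population = [random.uniform(0.0, 2.0) for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the better half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Mutation: refill the population with slightly tweaked copies of survivors.
        children = [s + random.gauss(0.0, 0.05) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(round(evolve(), 2))  # usually prints something very close to 1.0
```

No one tells the program which setting is right; it only ever gets a score, and better settings survive. That is the sense in which such a robot “learns from its experiences” rather than following rules we wrote in advance.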
Of course, that means that the robots would no longer be under our complete control. We would have to watch them carefully to make sure that they don’t hurt people, animals, or the environment.
As robots become more complex, the people who design them will have to think about more than motors and gears and computers. They’ll have to think about how their creations will affect the world and everything in it. For example, what if you could build a robot that could make copies of itself? Would it be OK to set it free in the world? Probably not, because if the copies could in turn make their own copies, the whole world would soon be covered in robots. Or how about making a tiny flying robot that could easily devour mosquitoes? Yes, it would be great to get rid of diseases, like malaria, that mosquitoes sometimes carry, but what would happen to the birds, bats, and insects that depend on mosquitoes for food?
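To see why self-copying robots are worrying, try some back-of-the-envelope arithmetic (these numbers are mine, chosen only to show the doubling): suppose every robot builds one copy of itself per day, so the population doubles daily.

```python
# If every robot builds one copy of itself each day, the population doubles daily.
robots = 1
for day in range(30):
    robots *= 2
print(robots)  # 1073741824 -- over a billion robots after just one month
```

That is the trouble with anything that copies itself: the numbers stay small for a while and then explode.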
The robotic experiments in this book are relatively simple and just for fun. But it wouldn’t hurt to start thinking about what kind of relationship we should have with robots as they become smarter and more lifelike in the years to come.