[SEVEN]
ROBOTIC GODS: OUR MACHINE CREATORS
You have to remember that these supposedly evil scientists are actually just guys with families and dreams.
—DANIEL WILSON
“Each year some forty-two thousand people are killed in preventable traffic accidents. That is unacceptable. That is some fifteen World Trade Centers each year.”
Sebastian Thrun is director of the Artificial Intelligence Laboratory at Stanford University. Speaking in a clipped accent that reveals his German roots, Thrun tells how his motivation for making artificial beings comes from wanting to save the lives of real ones. When he was younger, “a very good friend died of a car accident because of a split-second decision.” For Thrun, robotics is a way “to avoid that waste,” and a means for him as a scientist to “have a major impact on society.”
Before he came to Stanford, Thrun had worked on robotic tour guides for museums in Germany, an interactive humanoid robot that worked in a nursing home near Pittsburgh, and a system of robots that could search out mines. It was all interesting stuff, but none of it had that major impact he was looking to make. In 2004, however, opportunity knocked.
The Grand Challenge is a robotics road race sponsored by DARPA, the Pentagon’s main research lab. The agency offered a $1 million, winner-take-all prize for the first team that could drive a robot across a rugged 142-mile cross-country course in the California desert. The Challenge was conceived as a way for the government to accelerate military R&D, by bringing in new talent, new ideas, and new technologies. In making it an open competition, DARPA hoped to entice innovators who normally would not work with the military. In their race for the cash, the side effect would be to help the Pentagon solve the problems it was having in designing robots for warfighting, as well as meeting the congressional requirement to have one-third of all its ground vehicles unmanned by 2015.
The rules of this robotic Amazing Race were fairly simple. The vehicle had to autonomously complete the race course within ten hours to get the money. No human intervention was allowed, meaning no control commands could be sent to the vehicles while the race was on. Finally, none of the race cars could intentionally “touch” any other competing vehicle.
As Thrun describes, when you combine these rules with the rugged terrain of the 142-mile desert race course, which DARPA picked for its similarity to the rough trails in combat zones like Afghanistan and Iraq, “it’s an endurance race of unmatched proportion.... This is the first one ever where the human is not involved and the vehicle has to make all the decisions.”
Unfortunately, “the first Grand Challenge came off as something of a Three Stooges affair,” writes Popular Science magazine. Of the 106 applicants in the 2004 race, only a few even made it beyond the start line. For example, the Oshkosh firm showed up with a bright yellow autonomous version of its six-wheeled Marine Corps combat truck. But the “TerraMax” only made it one mile before a software glitch shut it down. Sandstorm, a converted Humvee designed by Carnegie Mellon’s Red Team, went the farthest. But after seven and a half miles, it caught fire and got stuck on an embankment.
Thrun watched the 2004 race but didn’t compete. “After the first one, it was obvious we could do better,” he says. “It was a no-brainer.” Having just joined Stanford’s faculty, he saw it as “an amazing opportunity to be part of a fundamental change for society... a win, win, win, win for everybody.” Thrun and his team of graduate students entered the 2005 race.
When DARPA doubled the prize money to $2 million, the field of competitors grew dramatically. Some 195 teams applied from thirty-six states and four countries; 160 of them were new to the event, and they included 35 university teams and 3 high schools. The entry names often sounded like something out of a fantasy football league, ranging from the mundane, like Team South Carolina, to the inspired, like Cajunbot from Louisiana and Viva Las Vegas, oddly enough from Oregon. The contestants also ranged from all-volunteer teams like CyberRider (which used wiki collaborative software to bring in the advice of computer whizzes around the world) to research labs paired with corporate sponsors. The early favorite was Carnegie Mellon’s Red Team (backed by Caterpillar), which had done best in the previous race and this time showed up with two cars. All told, DARPA estimated that the race spurred as much as $100 million in investment, as well as the equivalent of $155 million worth of free labor. One military robotics company executive joked that “the best part of the Grand Challenge is using the college kids like cheap slave labor.”
Thrun’s upstart Stanford team took a Volkswagen Touareg SUV and rigged it out with five LADAR sensors, GPS, a video camera, and onboard computers running some hundred thousand lines of code specially written at the Stanford School of Engineering. They called the vehicle “Stanley.”
Stanley from Stanford worked by using its sensors to build a multilayered map of the world around it, much like many of its competitors. The robot car’s unique feature was that it also fed its experiences, as well as a log of human driver reactions during its test runs, into a learning algorithm. As Thrun puts it, “We trained Stanley.... The relationship was teacher and apprentice, as opposed to computer and programmer.”
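To get a feel for that teacher-and-apprentice idea, here is a toy version of it: fit a model to a log of human driving decisions, then let the robot imitate the teacher on terrain it has never seen. This is only a sketch; the feature names, numbers, and simple least-squares model are illustrative assumptions, not Stanley’s actual software, which ran to some hundred thousand lines of code.

```python
import numpy as np

# Each logged row describes the terrain the sensors saw at one moment:
# [terrain roughness (0-1), distance to nearest obstacle (m), path curvature]
features = np.array([
    [0.1, 40.0, 0.00],   # smooth, open trail -> the human drove fast
    [0.3, 25.0, 0.01],
    [0.6, 15.0, 0.02],   # rough trail, obstacle ahead -> the human slowed
    [0.8,  8.0, 0.05],   # very rough, tight turn -> the human crawled
])
human_speeds = np.array([35.0, 22.0, 12.0, 5.0])  # mph the human chose

# "Training": fit speed ~ weights . features + bias by least squares.
A = np.hstack([features, np.ones((len(features), 1))])  # add bias column
weights, *_ = np.linalg.lstsq(A, human_speeds, rcond=None)

def recommend_speed(roughness, obstacle_dist_m, curvature):
    """The apprentice imitates the teacher on terrain it has never seen."""
    x = np.array([roughness, obstacle_dist_m, curvature, 1.0])
    return float(x @ weights)

print(f"{recommend_speed(0.4, 20.0, 0.015):.1f} mph")  # interpolates the human's habits
```

The point of the exercise is the one Thrun makes: nobody hand-coded a rule saying “slow down fifteen meters from an obstacle.” The behavior falls out of watching what the human teacher actually did.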
Thrun recalls it all came together three months before the race, when he was riding in Stanley during a test run. “We were driving in the Sonoran Desert, and at some point I realized that I was trusting my life to the car. At that moment it became crystal clear, this is the future. It was like a binary flip in my brain.”
On October 8, 2005, Stanley won the Grand Challenge, completing the course in six hours and fifty-four minutes, with top speeds of thirty-eight miles per hour. It might have gone even faster except for a flock of birds that landed in the middle of the raceway, confusing Stanley for a period. Four more teams crossed the finish line, but the Stanford team took the entire $2 million prize under the rules of the competition. Thrun was gracious in his victory. “We all won. The robotics community won.”
Soon after, Thrun would be named to Popular Science magazine’s “Brilliant 10,” as one of the ten best and brightest minds in all of science. When asked how the victory changed his life, he responds, “Oh, big time! It is even changing Stanford as a university.” He goes on to excitedly detail all the various collaboration projects that came out of the race, such as new links with the automotive industry and plans for an 8,000-square-foot research facility at the campus. The students who worked on the project are “cranking out papers. There are difficult technologic problems to be overcome and each one of these is a paper or a thesis.”
And what happened to Stanley? After its win, Stanley was declared the number one robot of all time by Wired magazine, beating out a list of fifty other real and fictional robots that ranged from Spirit, NASA’s Mars rover, to Optimus Prime from Transformers. Stanley is now at the Smithsonian’s National Museum of American History, sharing the stage with such other important historical artifacts as the “Star-Spangled Banner” that flew over Fort McHenry and the jacket that Fonzie wore on Happy Days.
THERE IS NO EUREKA
In the world of science fiction, research usually takes place in super-secret government labs or mysterious places like “Area 51,” the Nevada desert setting of more than sixty movies, TV shows, and video games. The Science Fiction Channel even has an entire TV series, Eureka, about a quirky town set up in secret by the Pentagon for scientists to live and work in. (“Every small town has its secrets, but in the town of Eureka the secrets are top secret.”)
While the military is the major funder of robotics research, much of it actually plays out in public view. We just aren’t watching. One estimate of applied research spending in the unmanned field was that 40 percent was flowing through private industry, 29 percent via military centers and labs, and 23 percent through university programs. At the center of it all is the National Center for Defense Robotics, a congressionally funded consortium of 160 companies, universities, and government labs. Work on military robotics isn’t so much top-secret labs fueled by UFO power sources as it is a simple synergy of military money, business organizations, and academic researchers.
The outcome is that major changes in warfare are being driven by the last people you might associate with combat. When you meet robot scientists, you quickly discover that it is hard to make blanket assessments. They range from prototypical geeks wearing actual pocket protectors to brawny he-men who look like they spend more time at Gold’s Gym than at the lab. While many are introverts, others are real jokers. At one research visit, for example, I watched a scientist ride a prototype military ground robot down a set of stairs like a surfboard. The only general rule is that they are all breathtakingly smart.
People begin working on robots for all sorts of reasons. Brian Miller, for example, is an engineer who started working at Ford Motor Company, designing and developing race cars. He had no visions of robots dancing in his head growing up. “But NASCAR was boring,” he says (as Dale Earnhardt likely rolls over in his grave), “too many rules and specifications.” Miller liked working on off-road vehicles and so joined Millenworks, a company in Orange County, California, that makes rugged vehicles. Today, instead of race cars, he makes unmanned ground combat vehicles. By contrast, Helen Greiner, the chairman and cofounder of iRobot, says that she first got into robotics in 1977, watching the original Star Wars movie as a mathematically inclined eleven-year-old. To this day, she calls R2-D2 her “personal hero.”
Daniel Wilson, a writer and Carnegie Mellon University researcher, probably has the best explanation for why people decide to work on robots. “Hands down, robots are just plain cool as hell. Ask any roboticist why they do it, and that’s the answer you get.” As he explained, “When you are deciding on what to do for your life, there’s nothing like the sense of making something so tangible, so active.”
The increasing use of robots in war, though, has changed the equation slightly. Today, roboticists also can take pride in saving lives. As Colin Angle, one of Greiner’s cofounders at iRobot, says, he spent his time as a student developing the “most sophisticated, cool, crazy-ass robot.” But “it left [me] with an empty feeling.” Today, Angle builds robots that he is quite happy to see get destroyed. “Getting a robot back, blown up, is one of the more powerful experiences I’ve lived through,” he says. “Nothing could make it so clear that we have just saved lives. Somebody’s son is still alive. Some parent didn’t just get a call.” Greiner similarly describes receiving postcards from soldiers using her PackBot in the field as the most gratifying experience, including one that just said, “You saved lives today.” As she tells it, “There are people coming home because of our work.” But beyond that, she goes back to why she and her schoolmates founded the robotics company. “We always knew we would change the world.”
GIT ROCKIN’: GOVERNMENT IT ROCKS, DO YOU?
The invitation letter reads, “GIT Rockin’ is government IT’s first annual battle of the bands.... This friendly competition allows executives—from government and industry alike—to network with peers, colleagues and spouses in a high-energy, out-of-the-industry-norm environment. Come out and showcase your alter egos and talents.”
The battle of government information technology bands takes place at the State Theatre in Falls Church, Virginia. And it does not disappoint, fulfilling all your expectations of the government IT music scene. The ultimate winner of the event, which raises money for charity, is Full Mesh, perhaps the only rock band in the world whose particular highlight is that it “featured talent from Juniper Networks.” Federal Computer Week magazine (akin to the Rolling Stone or Vibe of the IT music world) summed it up. “Folks from around the federal information technology community really let their hair down.”
The history of government support for scientists, engineers, and programmers like those at GIT Rockin’ goes back decades; for the computing world, it especially took off in World War II and then the cold war. By one estimate, up to a third of major university research faculty was supported by national security agencies after 1945. So the battle of the government information technology bands was, if unfortunate, likely inevitable.
The primary player in the world of funding new research in IT, computers, and robotics is DARPA. DARPA’s overall mission is to support fundamental research on technologies that might be common twenty to forty years from now, and to try to make them happen earlier to serve the needs of the U.S. military today. As Washington Post writer Joel Garreau describes, its strategic plan is to “accelerate the future into being.”
The agency was started in 1958, after the Soviets stunned and embarrassed the United States by launching the Sputnik satellite. President Eisenhower worried that America was losing the science arms race and set up an agency so that the United States would never again be surprised by the technology of foreign powers. Since then, DARPA has shaped the world we live in more than any other government agency, business, or organization. For all the claims that “big government” can never match the private sector, DARPA is the ultimate rebuttal. The Internet (DARPA’s first visionary name for it was the “intergalactic computer network”), e-mail, cell phones, computer graphics, weather satellites, fuel cells, lasers, night vision, and the Saturn V rockets that first took man to the moon all originated at DARPA. And now it’s focusing on robots and other related unmanned technologies.
DARPA works by investing money in research ideas years before any other agency, university, or Wall Street venture capitalist thinks they are fruitful enough to fund. DARPA doesn’t focus on running its own secret labs, but instead spends 90 percent of its (official) budget of $3.1 billion on university and industry researchers “who work at the forefront of the barely possible.” One business article notes, “By the time a technology is far enough along to attract venture capitalists, DARPA is usually long gone.” As a result, scientists are often very positive about the agency. Sebastian Thrun says, “DARPA has been good to me, helping me to develop my dreams.... It’s a very successful agency,” he explains, because “it takes risks and gets spectacular results.”
Today, DARPA’s headquarters is located just down the street from a shopping mall in Arlington, Virginia. It is supposed to be a secret location, but the security policy of having a police car parked permanently in front of a supposed suburban office building gives it all away. So does the immense popularity of a barbershop just two blocks away. Its specialty is bad 1950s buzz cuts, but its hairdressers do offer five-minute rubs of the patron’s skull and neck afterward. It is usually filled with men wearing DARPA badges, savoring some all too rare human contact.
DARPA has some 140 program managers on staff, mainly PhDs in the hard sciences, with a few others from the social sciences and medicine. Joel Garreau, who wrote a book on DARPA, notes that the organizational culture is to seek out problems that staffers call “DARPA-hard.” These are “challenges verging on the impossible.” The presentations at its annual conference (DARPATech) illustrate this culture, with panels such as “The Future of Aviation,” “Obtaining the Unobtainium: New Materials,” and, of course, the ever popular “Letting Schrödinger’s Cat out of Pandora’s Box: Quantum Mechanics for Defense.” The location is equally instructive; the agency that tries to make the future come true holds its conference in Anaheim, the home of Disneyland.
For all its success, not everyone is a huge fan of DARPA. Its critics in the blogosphere use such descriptors as “creepy” or call it the “Frankensteins in the Pentagon.” Part of this animosity lies with a fairly flawed public affairs operation. While DARPA should be better known for developing the Internet and funding projects like Sebastian Thrun’s Stanley, the last time it made major headlines was for a failed 2003 project to set up a terrorism prediction index. This was a plan for experts to participate in the equivalent of a football pool, betting on likely events such as terrorist attacks and the deaths of world leaders. As one defense industry expert put it, he had never come across such “a mind-numbing mix of brilliance and tone deafness” as at DARPA.
Public perception aside, there is also concern within the defense field that DARPA invests too much time and money on fanciful ideas. Even robotics scientists sometimes describe DARPA’s staff as “real madmen.” The criticism seems to be centered on the fact that the agency, in trying to think out of the box, can forget that the D in its name stands for its primary funder and customer: the Defense Department. As one official says, “I spend an inordinate amount of time trying to delineate between DARPA-hard and DARPA-stupid.”
More recently, others critique DARPA for just the opposite. They feel that funding pressures from the wars in Afghanistan and Iraq have started to make it too short-term in its thinking. “Today DARPA imposes six-month go/no-go decisions on all their researchers, which stifle innovation and creativity—very un-DARPA-like,” says a congressional staffer. “I have had everyone complain to me about this—from universities to small hi-tech businesses to the big defense contractors.” They contend that the true technology problems worth solving don’t get solved within six months or less.
PIMPING AIN’T EASY
A baseball throw down the street from DARPA is the Office of Naval Research (ONR). In keeping with the odd way that the most advanced defense agencies are woven into mundane Americana, across the street is a TCBY: The Country’s Best Yogurt store (judging from the small crowds, especially compared to the barbershop, it is not).
The origins of ONR date back to 1907, when a naval commander visited the construction of the battleship U.S.S. North Dakota and saw terrible flaws in design and construction, of which the navy had been unaware because it lacked its own scientists and engineers. Since then, ONR has focused on helping the navy maintain technological superiority on, under, and above the sea, as well as in space. It led the development of such varied programs as submarine-launched ballistic missiles, tilt-rotor aircraft, deep-sea exploration, fiber optics, and the battle against tooth decay (dental hygiene being the key to naval readiness). As one historian noted, the naval research program has been responsible for a bevy of “ideas that literally changed the world.”
Among those who work at ONR is Dr. Thomas McKenna. A balding, portly man, McKenna looks the part of a genial father; indeed, proud pictures of his children fill his office’s walls. McKenna is also a father figure to the wider military robots world. How he works very much illustrates the relationship between the military and the world of research.
McKenna has been at ONR since 1988; his earliest work was on legged robots. Today, among his main tasks is administering financial grants (his typical award is around a million dollars a year for five years) to universities and labs. His usual approach is to identify promising researchers for support when they are still graduate students. He then helps their careers to a point at which they become professors and have their own labs. “I was supporting some one hundred top graduate students at a time.” The graduate students are not just Americans or even all located in the United States. For example, McKenna is especially proud of having funded the graduate student who now runs the “Blue Brain” project in Switzerland. In collaboration with IBM, the project is trying to make a simulated brain using a Blue Gene supercomputer, which might yield massive jumps in computing power and ultimately create strong AI.
McKenna also helps projects gain funding via the Department of Defense’s Small Business Innovation Research (SBIR) and Small Business Tech Transfer Research. These programs provide almost $1 billion in total grant money (given out in baskets of up to $850,000) to help jump-start early-stage R&D for small companies and entrepreneurs working with the Pentagon and research universities.
Usually, researchers will apply to McKenna’s office for grants, and he will kick ideas back and forth with them, in a collaboration to hone their research proposals and tweak them to ONR’s needs. Or “sometimes I just find them on the Web.” He tells how he surfs about to different sites of research until he sees something that intrigues him. He will then e-mail the researcher: “Send me a proposal along these lines, because I really like what you’re doing.”
The process McKenna lays out is quite common in the nexus between the military and its university and business researchers. Some describe this cross between a funder and seducer as akin to “pimping” for the military research system, while others liken it to “an idea and technology hummingbird.” Just as a hummingbird flits back and forth, spreading pollen, so too does such a funder serve as a critical link in connecting scientific ideas and research with military needs.
Currently, McKenna is funding a great deal of research in “human activity recognition,” in which a robot learns to understand and identify what a human is doing. For example, many universities have research programs on teaching robots how to play or referee baseball and other sports. Combining vision systems with processors that know the game’s rules and track trajectories allows robots to do things like predict where a ball will land and race to retrieve it. By scrutinizing a pitcher’s fingers with a high-power lens, they can even predict whether he is going to throw a fastball or a curve. The hope is that such systems will similarly be able to learn how to recognize certain patterns of behavior in war and do such things as IED prediction and detection.
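The core trick behind the ball-catching demonstrations can be sketched in a few lines: track the ball with the vision system, fit its observed positions to simple projectile physics, and solve for where it comes down. The sample points and the parabola fit below are invented for illustration and stand in for any particular lab’s pipeline.

```python
import numpy as np

# Positions of a fly ball reported by a (hypothetical) vision tracker.
t = np.array([0.0, 0.1, 0.2, 0.3])   # seconds since the ball was hit
x = np.array([0.0, 1.5, 3.0, 4.5])   # downrange distance, meters
y = np.array([1.0, 1.9, 2.7, 3.4])   # height, meters (still rising)

# Horizontal motion is roughly constant-speed; vertical motion is a parabola.
vx = np.polyfit(t, x, 1)[0]          # horizontal speed, m/s
a, b, c = np.polyfit(t, y, 2)        # y(t) = a*t^2 + b*t + c

# The ball lands at the later root of y(t) = 0; run there and wait.
t_land = max(np.roots([a, b, c]).real)
print(f"ball lands near x = {vx * t_land:.0f} m at t = {t_land:.1f} s")
```

From four early samples, the system already has an estimate of the landing point, long before the ball starts coming down.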
McKenna is also very interested in cross-disciplinary teams, describing himself as being far more likely to fund projects, for example, that bring together biologists with engineers. He proudly says, “When it comes to bio-inspired robotics, there isn’t any other place in the world better than us.”
One such program is BAUV, the Biomimetic Autonomous Undersea Vehicle. As the poster on McKenna’s door proclaims, the goal of BAUV is an exciting (well, exciting for ONR) blend of “shark-like low power, shrimp-like noise, and fish-like low-speed maneuverability.” BAUV is essentially a pole the length of a desk with three fishlike fins on either end and a neural brain. The fins are about fifty times more efficient than a propeller, plus incredibly quiet, meaning BAUV is “undetectable by sound.” The neural processor, which came out of ONR-sponsored research on rat brains at the New York University Medical School, lets the robot autonomously adjust to changes in its environment, allowing it, for example, to hold the same spot in the ocean for weeks.
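The adaptive neural controller itself is well beyond a short sketch, but the station-keeping behavior it produces can be illustrated with a much simpler stand-in: measure how far the current has pushed the robot off its spot and thrust back against the drift. The feedback loop below uses invented numbers and a fixed-gain controller; it is a cartoon of the idea, not ONR’s design.

```python
def station_keep(steps=50, dt=1.0):
    """Toy 1-D station-keeping: hold position 0 against a steady current."""
    position, velocity = 5.0, 0.0      # start 5 m off station
    for _ in range(steps):
        drift = 0.3                    # steady push from the current
        error = 0.0 - position         # how far off station are we?
        thrust = 0.5 * error - 0.8 * velocity  # pull back, damp oscillation
        velocity += (thrust + drift) * dt
        position += velocity * dt
    return position

print(f"{station_keep():.2f} m off station after 50 steps")  # settles near 0.6 m
```

Notice that the fixed-gain loop settles slightly short, leaning against the steady current; trimming out that residual drift as conditions change is precisely the kind of adjustment an adaptive controller like BAUV’s handles on its own.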
The BAUV currently can power itself on battery for up to three weeks, but ONR is exploring ways to extend this. Projects include giving it solar power, for when it operates near the surface, or even the ability to use a “mud battery,” a bacteria-powered cell set on the muddy ocean floor. When bacteria break down organic matter, they produce a stream of electrons that, if captured, can generate electricity. The mud battery would refuel BAUVs like an undersea robot gas station.
McKenna’s pimp hand is strong. The relatively small-scale research he supported on BAUV could potentially revolutionize undersea warfare. A major challenge the U.S. Navy faces is how to patrol shallow waters, especially against quiet diesel-powered submarines like those the Chinese and Iranians use. Instead of risking the navy’s valuable nuclear subs, BAUVs would be able to silently swim in the shallow waters for weeks at a time, creating a virtually undetectable network of floating listening posts. A little bit of McKenna’s start-up money might well create a big PLUS, or what the navy calls its ultimate dream of “Persistent Littoral Undersea Surveillance.”
MAKING KEVLAR UNDERWEAR
Once the researcher has produced a prototype, places like ONR and DARPA turn the project over to what McKenna calls “the customer,” the military. Military labs and units will then test the robot (“You basically beat the snot out of ’em,” explains one scientist), explore its uses, and even make suggestions for improvements. Often, they prove to be just as innovative as the original researcher. McKenna tells how one unit of marines took three different prototypes and cobbled them together into one machine that does “countersniper” work like iRobot’s REDOWL. Whenever a sniper shoots at the marines, the technology automatically points a machine gun at where the bullet came from.
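The geometry underneath such countersniper systems is old acoustics: microphones mounted a known distance apart hear the muzzle blast at slightly different times, and that tiny delay fixes the shooter’s bearing. The sketch below shows only this textbook time-difference-of-arrival calculation; the signal processing in fielded systems like REDOWL is far more elaborate and not public.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def shot_bearing(delay_s, mic_spacing_m=1.0):
    """Bearing of a distant sound source, in degrees off the array's broadside.

    delay_s: difference in arrival time between two microphones (seconds).
    Assumes the source is far away relative to the microphone spacing."""
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against timing noise
    return math.degrees(math.asin(ratio))

# A one-millisecond delay across a one-meter array: about 20 degrees off axis.
print(f"{shot_bearing(0.001):.1f} degrees")
```

Two such bearings from different spots, or a full array of microphones, pin the shooter to a point; the rest is slewing the gun.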
An example of one of these places that brings together tactics and technology is the Marine Corps Warfighting Lab, located at the massive base in Quantico, Virginia. The lab develops and tests new technologies and scouts the commercial market for solutions to military needs. Dragon Runner, a nine-pound robot that looks a bit like a model car, is a prototypical example of the lab’s development work. It came out of a collaboration between Carnegie Mellon University, ONR, and the Marine Lab. The incredibly tough robot lets troops “see around the corner”: they can toss it through a window, up some stairs, or down a cave, and the robot will land on its feet and send back video of whatever it sees.
The military labs also serve a valuable function by end-running the normal procurement system to get soldiers in the field what’s already available in stores. During the first days of the Afghanistan operation, for instance, special forces units sent back requests for a remote camera that could be linked to satellite communications and a Pointer, a man-portable UAV that could beam video back to an operator. It took eleven days for the labs to get them the camera, and eight months for the UAV, which compares quite well to the years that normal weapons development can take (the F-22 jet, for example, took twenty-five years to go from concept to deployment). No request is too small. The Marine Lab even made special Kevlar-lined undershorts for marines to wear while on patrol. One news article jokingly called the program “Saving Ryan’s Privates.” Besides protecting marines’ unmentionables, the special shorts also shield the femoral artery from being nicked by shrapnel. As one marine commented, “When your butt’s on the line, you want it protected.”
Such defense-funded labs pop up all over the place. Perhaps the most surprising is the Idaho National Lab. More akin to a national park than a traditional laboratory, it has huge tracts of land for testing out ground and air robots, including its own airstrip for UAVs. In the words of one research scientist there, “Our lab is just a little bit smaller than Rhode Island.” The Idaho team shows the western hospitality you would expect, with a standing offer to other roboticists: anyone who wants to play around with one of their systems should “come on down.”
This kind of neighborly vibe carries across the robotics field. When I asked people who they most respected in the field, the name that consistently came up was H. R. “Bart” Everett. Everett is a retired commander in the U.S. Navy, who is now technical director of robotics at the Space and Naval Warfare Systems Center (SPAWAR) in San Diego. As one scientist put it, “He is one of the true graybeards in the field of robotics.” Everett even maintains a “lending pool” of robots, which are loaned out to those who can’t afford them.
Everett, who is now working on a book called Children of Dysfunctional Robots, tells how “my obsession with robots began early on in life, when I was about eight years old. I had become enthralled with a particular episode of The Thin Man one evening at a friend’s house. The plot was centered upon a murder supposedly committed by a robot, and of course the Thin Man had to prove some dastardly villain and not the robot really committed the crime. I had never before seen a robot and was forever changed by that experience.”
By junior high, Everett was tinkering with robots. He kept his interest going after he joined the navy. While attending the Naval Postgraduate School in Monterey in 1982, he built the very first behavior-based autonomous robot for his thesis project. This revolutionary robot was controlled by a single-board Synertek computer (a cousin of the first Commodore personal computers). Much as Commander Data in Star Trek was made in the image of his designer, Bart Everett named this first robot ROBART. His robot soon paid him back for the gift of autonomy. “There weren’t a lot of mobile robots in those days, so it attracted a tremendous amount of media attention, which in turn landed me a job.”
ROBART-II came out of Everett’s tinkering in his basement over the following years. In 1986, he turned the model over to the navy and joined the San Diego center. This was followed by ROBART-III, a test-bed robot that has continually evolved since 1992. ROBART-III looks a bit like Robby the Robot from Forbidden Planet, except it has a six-barrel Gatling gun for an arm. It has been used as a platform for such new technologies as natural language understanding and automated target acquisition and tracking. It was named number sixteen on Wired magazine’s list of the best robots of all time.
CUSTOMER FEEDBACK
The four enlisted men sat onstage, evidently uncomfortable to be the focus of so much attention. But to the scientists and businessmen gathered in the hotel conference room in Georgetown, they were the real stars of the robotics industry convention. The four had recently served in Iraq and had used their robots nearly every single day. The “Warfighters’ Perspectives” panel was the ultimate opportunity for customer feedback.
For the next ninety minutes, the soldiers talked about their experiences with robots in Iraq and their various suggestions for improvement. They asked for better batteries and interchangeable parts that could be fixed in the field, rather than always having to send a broken robot to the robot repair yard. Army staff sergeant Robert Shallbetter even offered feedback on the robots’ colors. Robots painted black stood out as targets, and Iraq’s 140-degree heat made them hard even to touch. Plus, “Heat and computers don’t mix well.”
The audience’s ears perked up when the soldiers began to talk about which robots they liked more, knowing that this sort of feedback could determine their programs’ and companies’ futures. They complained that Foster-Miller’s Talon didn’t have its own light source, so the soldiers had to duct-tape flashlights onto it at night. On the other hand, they noted that the PackBot did have its own light source, but it drained the batteries fairly quickly. They also complained that the PackBot needed as long as two minutes to boot up and enter a PIN for access. “After we’re out for about thirty minutes, we had to start planning on being attacked, or having an ambush waiting for us on the way back,” said Specialist Jacob Chapman, so the loss of two minutes can be fatal. On the other hand, having a PIN makes it harder for enemies to use the robots if they ever capture them. In the end, there was no clear favorite between the two robots from Boston. As Byron Brezina, robotics director of the navy’s EOD technology division, said, “If you’ve ever gotten into the Ford versus Chevy argument, that’s pretty much what it goes like.”
The soldiers were incredibly blunt, however, about one robot. The Vanguard is manufactured by Allen-Vanguard Inc. of Reston, Virginia. As the other soldiers nodded, Chapman called it “completely unreliable” and told how his robot would turn off after going about ten feet from the truck. “We ended up trying to get rid of [the Vanguards] as soon as we could.” When asked what he would do to fix it, he gave the ultimate soldier’s reply. “Make it work.” Standing at the back of the room was an executive from the company. At that moment, he looked very ill.
The soldiers ended their talk by thanking the researchers and executives gathered in the room. “I’m very fortunate due to the current [robot] technology to be standing here today,” Shallbetter said. Navy aviation ordnanceman first class Bryan Bymer chimed in, “They most definitely saved people’s lives.”
Tom Ryden, director of sales and marketing for iRobot, was one of the hosts of the conference. He thanked the soldiers in turn, promising them, “We’re going to take a lot of that to heart and see what we can do to make improvements.” The panel closed with the more than one hundred scientists giving a standing ovation to the soldiers.
This sort of interaction between soldier and scientist is actually far more common than one would think. Mack Barber at Remotec tells how “sometimes we get phone calls and we can hear the gunfire in the background.” Many credit the troops in the field with some of the best ideas. Researchers at iRobot, for example, are especially proud that soldiers had “direct input into the design” of the PackBot and recall that during the early deployments to Iraq they would update the robots’ software with feedback from each mission. The company makes it a point to fly in soldiers on their way back home from Iraq to its office in Burlington for feedback, and even has a place on its Web site where soldiers can post their improvement ideas.
The soldiers at the robots’ feedback session also requested that the scientists try to understand their needs better. “If you can put yourself in our shoes and imagine what we’re going through, we would really appreciate it,” said Sergeant Shallbetter.
For most of history, that was an impossible request to meet. Scientists have long been involved in war, but they were usually separate from soldiers and the battlefield. As some military historians note, “The scientist did not need physical courage to do his work.... The soldier, unlike the scientist, might be called on to face death. This was the soldier’s badge of honor, and in his mind, made him the rightful ruler of the battlefield.”
Yet this division of labor is also breaking down, oddly enough through unmanned systems. While robots are moving some soldiers off the battlefield, they are also bringing the geeks out to war. Robot researchers from firms like iRobot and Foster-Miller are now going out in the field in search of feedback and updates to ever-changing technology. As one military analyst put it, “There are tons of guys now wearing Kevlar pocket protectors” on today’s battlefield.
Unlike past weapons systems, the new robots don’t even need the soldiers to initiate the feedback; the robots can also report back on their own. As Jim Rymarcsuk, a vice president at iRobot, explains, “Our robots have logistic information on them. They track the hours of operation, how it has been operated, what it has been used for. We can track a lot of that.”
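What such an onboard log might look like is easy to sketch. The record format below is hypothetical; iRobot has not published its telemetry schema, so the field names here are illustrative only.

```python
from dataclasses import dataclass, field
import time

@dataclass
class UsageLog:
    """Hypothetical mission log a fielded robot might carry home."""
    hours_of_operation: float = 0.0
    missions: list = field(default_factory=list)

    def record_mission(self, task: str, duration_hours: float):
        # Each entry tells the engineers back home how the robot was used.
        self.hours_of_operation += duration_hours
        self.missions.append(
            {"timestamp": time.time(), "task": task, "hours": duration_hours})

log = UsageLog()
log.record_mission("IED inspection", 1.5)
log.record_mission("cave reconnaissance", 0.75)
print(log.hours_of_operation)  # 2.25 hours for the maintenance depot to review
```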
This kind of back-and-forth between the people who design and make robots and the users in battle produces a pattern of almost continual improvement. For example, one navy robot, the Mk. 3 RONS, went through some thirty-five different changes in its first five years of operation. The constant communication between the battlefield and research lab can also take some humorous turns. Joe Dyer, a former navy admiral turned vice president at iRobot, describes how his firm once received a box shipped from Iraq. It was filled with the bits and pieces of a PackBot that Iraqi insurgents had blown up. Attached was a request for “warranty repair.”