10

OPERATION DESERT LAB


When you’re at war, you think about a better life. When you’re at peace, you
think about a more comfortable one.1

—NOVELIST AND PLAYWRIGHT THORNTON WILDER,
FROM THE SKIN OF OUR TEETH

For much of modern history, the Middle East has been a fertile land of invention where science and technology flourished. From the seventh century onward, while Europe wallowed in the war, hunger and disease of the Dark Ages, the Middle East enjoyed a golden era, an enlightened renaissance from which a steady stream of life-improving inventions flowed. Water turbines, navigational astrolabes, glass mirrors, clocks, the fountain pen and even an analogue computer that calculated the date were just some of the innovations of the time. While Europeans were busying themselves burning libraries and fighting over who God favoured, Islamic scholars were laying the foundations for many of the world’s modern institutions by opening the first hospitals, pharmacies and universities. They also laid the groundwork for a number of modern sciences, including physics, chemistry, mathematics, astronomy and medicine.

Things have changed dramatically in recent times. The past few decades of political turmoil, war and, in some places, religious fundamentalism have largely crippled the region’s intellectual institutions. An area that was once the envy of the world for its progressive thinking now lags in just about every intellectual and technological measure. In Iran, the former seat of the once-powerful Persian Empire, literacy rates are well below those of the Western world. In Iraq and Afghanistan, the current centres of conflict, barely 40 percent of the population can read. Internet use lags well behind that of the developed world, and where people are actually surfing the web, censorship is rampant. While the blocking of pornographic sites isn’t too surprising in Muslim countries, the definition of “questionable” content also extends to political and free-speech websites and tools. During Iran’s election turmoil in 2009, the popular messaging service Twitter was blocked to prevent details of uprisings from spreading. Spending on science and technology in the region stands at a woeful 17 percent of the global average, ranking not just behind the West, but also behind some of the poorest countries in Africa and Asia.2

Technological advances have occurred in the Middle East in recent years, but perversely, they’ve been deployed by Western militaries. Since the early nineties, the United States and its allies have used the area as a sort of laboratory for a vast array of new technologies, testing out their capabilities to see what works, what doesn’t work and what can be improved. The impetus behind all new war technology, the military tells us, is to save lives, but as we’ve already seen, there’s also the important by-product of technological spinoff into the mainstream, which is a key driver of Western economies. The recent conflicts in the Middle East are perhaps the best example to date of this terrible duality of military technology: while new war tools and weapons inflict tremendous pain, suffering and hardship on one group of people, they also create prosperity, convenience and comfort for others.

Another Green Revolution

Iraq’s invasion of Kuwait in 1990 sparked a new wave of Western technological development. The ensuing liberation through Operation Desert Storm was of course motivated by oil interests, but it also provided an opportunity for the American military to field-test new technologies, some of which had been sitting on the shelf from as far back as the Vietnam War.

Smart bombs, or precision-guided munitions, were the natural opposite of dumb bombs which, when dropped from a plane, simply used gravity to find their target. Smart bombs were developed during the Vietnam War and used lasers to find their mark—the target was illuminated by a beam that the bomb homed in on. The new weapons promised two key advantages over their precursors: they could improve the efficiency of bombing missions by decreasing the number of munitions needed, thus saving on costs and maximizing damage, and they could lower so-called “collateral damage,” or the destruction of non-military targets and civilian deaths. American forces used such bombs in small numbers in Vietnam, as did the British military during the Falklands War of 1982, but they proved to be of limited use in poor weather. It wasn’t until the Gulf War that they were improved and deployed on a large scale.

General Norman Schwarzkopf, the American commander of the coalition forces, set the tone of the war in January 1991 when he dazzled reporters with a videotape of a smart bomb zooming through the doors of an Iraqi bunker to blow up a multi-storey command centre. The early stages of the war were going exactly as expected, Schwarzkopf announced, thanks largely to the incredibly accurate bombs being used. “We probably have a more accurate picture of what’s going on ... than I have ever had before in the early hours of a battle,” said the veteran general, who began his military service way back in 1956.3 Like radar back in the Second World War, smart bombs were hailed by an impressed media as a “miracle weapon” that pounded the Iraqi military into a sorry state, making the ensuing ground war short and easy. Only 7 percent of the munitions dropped on Iraqi forces, however, were of the “smart” kind; the rest were the traditional variety.

Still, smart bombs had proven their worth and their usage has steadily increased in each subsequent conflict. Fully 90 percent of the bombs brought to Iraq by American forces for the second go-round in 2003 were “smart.”4 The laser guidance used in the weapons, meanwhile, has gone mainstream in recent years, primarily in cars, where it has been incorporated into collision-avoidance systems. Toyota, for one, introduced a laser cruise control system in its 2001 Lexus, which, like many of the robot vehicles in the DARPA road races, used beams of light to track the cars ahead of it.

If the videos of smart bombs flying into Iraqi targets in stunning first-person view were not enough to impress the public, most of whom were watching it all unfold on CNN, then the images of night-time air attacks were. The scenes I remember best involved the volleys of Iraqi anti-aircraft fire arcing upward at unseen stealth bombers high above. First, an orchestral cascade of lights would fly into the sky, followed shortly thereafter by a brilliant, expanding explosion on the ground. It seemed to be clear evidence of which side was winning. Like many people watching, I was awed by the technology and not thinking of the lives lost. All of it, of course, was broadcast in its full green-tinged glory.

Night vision was another technology that had been sitting around for a while. The earliest version of it was invented by the American army during the Second World War and saw small-scale use in sniper-rifle scopes in the Pacific. About three hundred rifles were equipped with the large scopes, but the poor range of only about a hundred metres limited them to defending the perimeter of bases. Nazi scientists also developed night-vision “Vampir” rifles and mounted similar units on a few tanks. The problem with both versions was that they used large infrared searchlights to illuminate targets so that gunners equipped with scopes could see them. The searchlights, however, gave away their operators’ positions, making them easy targets.

By the Vietnam War, American scientists had improved the technology with “starlight” scopes that amplified available light, such as moonlight, which again limited their use to clear conditions. By 1990 the technology had entered its third generation: image intensifiers that electronically captured and amplified even faint ambient light onto a display, such as a television monitor or goggles, along with “forward-looking infrared” (FLIR) sensors that picked up the heat given off by people and machines.

Both technologies produce a monochrome image, usually green or grey. A FLIR device, because it reads infrared radiation from below the spectrum visible to the eye, needs no light source at all and works in just about any weather. The new goggles were small, light, low-power and cheap (you can buy them online today for a couple hundred dollars), which is why the U.S. Army bought them by the truckload for Desert Storm. Night vision was also incorporated into a lot of the military’s sensor and video technology, including the cameras that captured those green-tinged bombing images broadcast on CNN. If smart bombs were miraculous, the night-vision sights used by pilots and the goggles worn by ground troops were even more so, because they allowed coalition forces to “own the night.” “Our night-vision capability provided the single greatest mismatch of the war,” said one American general.5

After the war, night-vision technology was adopted quickly by the mainstream, particularly in security. Parking enforcement, highway rest stops, tunnel surveillance, transit systems, ports, prisons, hospitals, power plants and even pest inspectors all found it amazingly useful. The spread of night vision closely paralleled the rise of digital cameras, which also underwent their baptism of fire during the Gulf War. Both technologies became remarkably cheap, remarkably fast and began to converge, with night vision becoming a standard feature of video cameras early in the new millennium. As prices continued to drop on both technologies, they soon became standard in just about every camera available, which means that anyone can now create their own green-tinged Paris Hilton–style sex video.

On the military front, night-vision technology continues to evolve, with scientists currently working on doubling the field of view and adding thermal-imaging abilities to goggles. Lord only knows what sort of sex videos will come out of that.

The “Technology War”

The coalition forces had one other fancy new navigation technology at their disposal: the Global Positioning System, or GPS, which we have all since grown to know and love. GPS units allowed troops to pinpoint their own positions, and in turn enemy positions and movements, with remarkable accuracy, further increasing the efficacy of smart bombs and units equipped with night vision. It was a new holy triumvirate of American weaponry that reinforced the old saying, “You can run, but you can’t hide.” If one technology didn’t find you, the others would.

GPS had its origins in Navsat, a satellite navigation system first tested by the U.S. Navy in 1960. The original system used five satellites and only provided a fix on the user’s location once an hour. The technology was slowly upgraded throughout the seventies and early eighties, when tragedy hit. In 1983 a Korean Air Lines flight was shot down after straying into Soviet airspace, prompting President Reagan to declare that GPS, which could prevent such disasters, would become available for civilian use once it was completed. The twenty-four second-generation GPS satellites were scheduled for launch between 1989 and 2000. When the war started in 1991, however, only sixteen had been launched, eight short of the number required to provide worldwide coverage. Nevertheless, the incomplete system—which still provided three-dimensional navigation for twenty hours a day—was pressed into service, run from Schriever Air Force Base in Colorado. In the Kuwaiti desert, which was largely devoid of landmarks or waypoints, GPS finally delivered on space technology’s long-held promise of making conflicts on Earth easier to fight. “It was the first war in which space systems really played a major role in terms of the average soldier, sailor, airman and Marine,” said a general with the U.S. Air Force Space Command. “This was the first time that space affected the way our troops fought in the battle.”6

With the war concluded, President Bill Clinton signed the system’s dual military-civilian use into law in 1996. Civilian access, however, was not as accurate as the pinpoint precision enjoyed by the military, a discrepancy the government fixed in 2000 to enhance GPS’s usefulness to the public. The change made consumer GPS devices ten times more accurate, to the point where a location could be determined to within a few metres. “Emergency teams responding to a cry for help can now determine what side of the highway they must respond to, thereby saving precious minutes,” Clinton said. “This increase in accuracy will allow new GPS applications to emerge and continue to enhance the lives of people around the world.”7

Right there to take advantage of the newly opened market was Garmin, a company started in Kansas in 1989 by two electrical engineers. Gary Burrell, a native of Wichita, and Taiwan-born Min Kao—the company’s name is a contraction of their first names—spent much of their early careers working for military contractors. Their first product, launched in 1990, was a dashboard-mounted GPS for marine use that sold for $2,500, while a follow-up handheld unit proved popular with troops in Desert Storm. The company wasted no time in jumping into the consumer market and has since ridden its rapid growth to riches. As of 2008 the company had sold more than forty-eight million personal navigation devices,8 more than half of the total worldwide market, which is expected to continue growing by 20 percent a year until 2013, when it will exceed $75 billion.9 A substantial part of that growth will come from the current wave of “smartphones,” which since 2004 have incorporated cellular-assisted GPS chips.

All of the new sensor and navigational technology meant there was a ton of electronic data pouring in, but this could only be turned into useful intelligence if there was some way to crunch it all. As luck would have it, the war coincided perfectly with the rise of personal computers. While the first desktop computers were made available in the late seventies, sales didn’t really begin to ramp up until the early nineties, when the devices became standardized and simple enough for the average user. Among the first major business or “enterprise” buyers was the U.S. military, which used PCs during the Gulf War for everything from organizing the movement of troops to sorting through satellite photos for targets. American brass even used computers to simulate Iraqi responses to their battle plans; some of the simulated responses turned out to be more effective than the Iraqis’ actual reactions.

By 1990, the U.S. military was spending $30 billion a year on desktop computers.10 One of the biggest beneficiaries of this huge outlay was Microsoft, which was in the process of standardizing the operating system that personal computers run on. The third version of Windows, released in May 1990, was the first to have a slick visual interface that required minimal training to learn, and the first to really take off with users. By 1991 Windows 3.0 had sold more than fifteen million copies, a good portion to the military, and helped give Microsoft three-quarters of the operating system business worldwide.11 After combat ended, General Schwarzkopf gave appropriate credit. Calling Desert Storm “the technology war,” he said, “I couldn’t have done it all without the computers.”12

The Gulf War was indeed the first technology war and it set an interesting precedent. The entire conflict lasted less than three months, while the ground campaign took only a hundred hours. Coalition casualties were low and the end result was total victory for American-led forces. It was easy to view the Gulf War as the perfect war, if such a thing could exist. Certainly Schwarzkopf saw it that way, and his crediting of new weapons and tools as the key to victory cemented a long-held belief in American policy: that technology was the country’s biggest advantage over the rest of the world—not just in war, but in business as well.

The average Joe sitting on his couch watching the war on CNN couldn’t help but agree. The Vietnam War had dragged on for years, took a high casualty toll and looked like plain hell, with its grime and misery. The Gulf War, on the other hand, was quick and painless, and, quite simply, it looked good on TV. The green-hued battles, the first-person bomb views, the alien-looking stealth bombers—it was like a slick, Michael Bay–directed science-fiction movie come to life. Americans came home every night from work and turned on the tube to watch their boys, decked out in all the latest high-tech gear, kick Iraqi butt. It was a far cry from seeing them flee in disgrace from Vietnam. No wonder American morale, both military and public, was riding high after the war. Technology had re-established itself as America’s not-so-secret weapon.

Humans 2.0

American morale, however, took a massive blow with the destruction of the World Trade Center in the terrorist attacks on September 11, 2001. Whatever smugness was left over from the Gulf War quickly turned to a desire for revenge. The people responsible for the attacks, the al Qaeda terrorist network, would feel the full fury of America’s technological arsenal. Once again, the weapons developed in the decade since the Gulf War would be unleashed for field testing against the country’s enemies.

While the weapons were first deployed in 2001 in Afghanistan, where al Qaeda’s leader Osama bin Laden was apparently hiding out, they again found their way to Iraq with a second invasion in 2003. The situation played itself out as if it were a movie, a sequel to the original Gulf War. And, like all sequels, the second go-round was bigger, louder, deadlier and ultimately less successful than the original. The new two-front war posed some unique challenges to the American-led coalition forces. In Afghanistan, the enemies were holed up in mountains and caves, making it difficult to find them and get to them. In Iraq, the invasion was relatively quick, but it soon morphed into an urban insurgency where guerrillas disguised themselves as civilians and melted into the city throngs. Smart bombs, night vision and GPS were handy, but new-and-improved technologies were needed to tackle these challenges.

Some of the technologies deployed so far in Afghanistan and Iraq, like the robots we heard about in chapter nine, have already begun to spin off into commercial uses. As both conflicts drag on, however, more and more new technologies will be developed and tried out. With some of them, their potential commercial applications can only be guessed at, while others have more obvious uses. Examples of the latter are bionics and prosthetics, which are in some ways by-products of all the work being done on robotics. DARPA’s work on prosthetic limbs to help amputee veterans has been nothing short of spectacular. Scientists working for the agency have in recent years designed a Terminator-like robotic arm that displays a range of function far beyond that of even the best conventional prosthetics, allowing the user to open doors, eat soup, reach above their head and open a bottle with an opener, not to mention fire and field strip an M16 rifle. The prototype arm weighs only four kilos, has ten powered degrees of freedom with individually moving fingers and eleven hours of battery life, and, like the arm I tried out at the robotics conference in Boston, offers force feedback so the amputee can actually “feel” what he or she is touching. The arm entered advanced clinical trials in 2009, with commercial availability expected to follow soon after. The next phase of the program, DARPA says, is to implant a chip in the patient’s brain so that he or she can control the arm neurally. The chip will transmit signals to the arm wirelessly, allowing the amputee to manipulate the arm with mere thoughts. The idea isn’t science fiction—DARPA expects to make submissions to the Food and Drug Administration for approval in 2010.

In a similar vein, defence contractor Lockheed Martin unveiled its Human Universal Load Carrier exoskeleton system in 2009. The HULC, which is the only name it could possibly have, allows soldiers to carry up to a hundred kilos with minimal effort. Presumably, like the green-skinned comic-book character, it will also let them “smash puny humans.” The exoskeleton transfers weight to the ground through battery-powered titanium legs while an on-board computer makes sure the whole thing moves in sync with the wearer’s body. The HULC is surprisingly nimble, too, allowing the soldier to squat or crawl with ease.

The exoskeleton is designed to alleviate soldier fatigue while carrying heavy loads across long distances, but Lockheed is already investigating industrial and medical applications, some of which are obvious. Like the giant exoskeleton used by Sigourney Weaver in Aliens, a juiced-up version of the HULC could easily find work in docks and factories where heavy lifting is required. Honda, in fact, is testing a similar system, but its device has no arms, only legs. The carmaker’s assisted-walking device, which looks like a bicycle seat connected to a pair of ostrich legs, is designed to support body weight, reduce stress on the knees and help people get up steps and stay in crouching positions. For workers who spend the whole day on their feet, like my mom the hairdresser, such a device would be fantastic. “This should be as easy to use as a bicycle,” a Honda engineer said. “It reduces stress and you should feel less tired.”13

Not to be outdone, Lockheed competitor Raytheon—the folks who brought us the microwave—is also getting in on the action. The company says it too has an exoskeleton that, like Lockheed’s, can lift a hundred kilos but is also agile enough to kick a soccer ball, punch a speed bag and climb stairs with ease. The company began work on the system in 2000 when it realized that “if humans could work alongside robots, they should also be able to work inside robots.”14 The media dubbed Raytheon’s exoskeleton “Iron Man,” after the comic book character, which should prove the perfect foil to Lockheed’s HULC. (I wonder what a smackdown between the two would look like?)

Scientists working for the military have also made significant strides in health and medicine over the past few years. DARPA researchers have even managed to come up with a simple cold medicine. In trying to alleviate the cold and flu symptoms soldiers often experience after strenuous exertion, researchers hit upon the natural anti-oxidant quercetin. In an experiment that involved three days of hard exercise, they found that half of the control group became ill with colds and flu, while the incidence in the group that took the anti-oxidant was only 5 percent. Quercetin has since been commercialized in RealFX Q-Plus chewable pills.

On a grander scale, DARPA is also influencing the way pandemics are fought. The impetus behind the agency’s Accelerated Manufacturing of Pharmaceuticals program was to greatly reduce the length of time between when a pathogen is identified and when a treatment is widely available, which has typically been a very long fifteen years. DARPA is seeking to reduce that to a mere sixteen weeks or less, simply by changing the way vaccines are produced. While treatments have traditionally been grown in chicken eggs, DARPA scientists are experimenting with growing them in plant cells and have found that a single hydroponic rack, about five metres by three metres by three metres tall, can produce sufficient protein for one million vaccine doses, thus doing the work of about three million chicken eggs at a fraction of the cost. Moreover, the plants are ready within six weeks of seeding and produce vaccines that eggs can’t, like one to fight a strain of avian flu. The new technique is inspiring pharmaceutical companies to try different approaches. In November 2009 Switzerland’s Novartis, for example, won German regulatory approval for Celtura, an H1N1 vaccine manufactured using dog kidney cells.

There’s also a gizmo known simply as “the Glove.” It looks like a coffee pot except it has a cool-to-the-touch metal hemisphere inside, where users place their palm. Researchers at Stanford started working on the device in the late nineties and got DARPA funding in 2003. They had developed the theory that human muscles don’t get tired because they use up all their sugars, but rather because they get too hot. When users place their hand inside the Glove, their body temperature cools rapidly, allowing them to resume in short order whatever physical activity they were performing. The net result is that the user can exercise more. “It’s like giving a Honda the radiator of a Mack truck,” says Craig Heller, the biologist behind the device.

One of Heller’s lab technicians incorporated the glove into his workout regime. When he started, he was managing 100 pull-ups per session, but by using the device he was able to do more sets. Within six weeks, he was doing 180 pull-ups and in another six weeks he was doing more than 600. Heller himself used the glove to do 1,000 push-ups on his sixtieth birthday.15 The Glove’s military uses are obvious—because it effectively duplicates the effects of steroids, which allow users to train harder and more frequently, it’s going to result in stronger and faster soldiers. Its commercial applications are also apparent; every athlete and gym in the world is going to want one. If we thought athletes jacked up on steroids were playing havoc with sports records, wait till they get hold of the Glove. The device also has humanitarian potential, because it works in reverse. It can rapidly increase body temperature as well, which means it could save people suffering from hypothermia and exposure.

Then there’s the questionable stuff. DARPA has historically steered clear of biological research, but everything changed after September 11. Tony Tether, DARPA’s director at the time, adopted a more open attitude toward bioengineering and picked a fellow named Michael Goldblatt to lead the charge. The move was perhaps the best example yet of a bombs-meets-burgers crossover as Goldblatt had spent more than a decade working for McDonald’s, most recently as the company’s vice-president of science and technology. The same man who tested low-fat burgers for McDonald’s was all of a sudden in charge of bioengineering better soldiers. Goldblatt even referenced his past while spelling out his priorities at the annual DARPAtech convention in 2002:

Imagine soldiers having no physical limitations ... What if, instead of acting on thoughts, we had thoughts that could act? Indeed, imagine if soldiers could communicate by thought alone or communications so secure there is zero probability of intercept. Imagine the threat of biological attack being inconsequential and contemplate for a moment a world in which learning is as easy as eating, and the replacement of damaged body parts as convenient as a fast-food drive-thru.16

The idea of bioengineered soldiers has been around for decades, mostly in the realm of science fiction. Marvel Comics’ Captain America, the product of an injected “super-soldier” serum, and the Hollywood stinker Universal Soldier, starring Jean-Claude Van Damme and Dolph Lundgren as genetically and cybernetically jacked-up commandos, come to mind. Over the past decade, some of this science fiction has become, as Goldblatt calls it, “science action.” DARPA is funding dozens of “human augmentation” projects around the world, all of which are geared toward changing the old army slogan “Be all you can be” to “Be more than you can be.”

Scientists at Columbia University in New York, for example, are working on using magnetic brain stimulation to lessen a person’s need for sleep. Defence contractor Honeywell is using electroencephalographs to detect neural spikes in satellite analysts’ brains before they consciously register what they are seeing, which is resulting in faster action. Boeing is using near-infrared technology to monitor pilots’ brains with the hopes of eventually allowing them to fly several planes at once. At the University of Alabama, scientists have used injections of estrogen to keep lab mice alive after the loss of 60 percent of their blood, a process they believe can be replicated in humans.

And that’s just the stuff we know about. While many of the publicly known programs stop short of full-out genetic engineering, there’s little reason to believe that some cloning research isn’t being done with military applications in mind. Congress has raised some concerns over this sort of biological research, but the net effect so far has been the delay of funding for some projects or the simple renaming of others. “Metabolic Dominance,” for example, was changed to the less ominous-sounding “Peak Soldier Performance.”17

Found in Translation

Like any good business, the military is looking to streamline operations, cut costs and introduce efficiencies. That’s the impetus for the U.S. military’s “network-centric operations,” or a fully networked battle force. The plan is to get all those robots in the field to communicate better with their human masters, and also with each other. It also means coming up with better ways to deal with the ever-increasing amounts of data pouring in from the battlefield. Part of the solution is better communication systems, such as the delay-tolerant network Vint Cerf is working on. Another idea is “cognitive computing,” an effort to reduce the amount of data humans have to sift through by letting computers do it for them. The computer then only passes on the most pertinent details to its human master, perhaps with a suggested course of action. This rudimentary artificial intelligence is intended to improve the military’s “tooth-to-tail” ratio—the number of actual fighting troops it can field for every member of the support staff behind them.

In 2009 Robert Leheny, DARPA’s acting director, made the case for smarter computers in a speech to a House of Representatives committee on terrorism. “Without learning through experience or instruction, our systems will remain manpower-intensive and prone to repeat mistakes and their performance will not improve,” he said. The Department of Defense “needs computer systems that can behave like experienced executive assistants, while retaining their ability to process data like today’s computational machines.”

DARPA’s Personalized Assistant That Learns program, or PAL, is doing just that in military hospitals. The computer system is capable of crunching large amounts of data and then taking action by itself. Receptionists, not programmers, are teaching the system to find vacant appointment slots and make referrals.

If that sounds like the beginning of a Terminator-style apocalypse, the work being done on translation is where things really get scary. Franz-Josef Och looks like your average mild-mannered computer programmer, although his German accent might frighten some into believing he is the quintessential mad scientist. He grew up in a small town near Nuremberg and discovered a passion for computer science early on. Around 1997, while attending the University of Erlangen-Nuremberg, Och became interested in something called statistical machine translation, a method of understanding languages using algorithms rather than grammatical rules.

The idea of using computers to translate languages has been around since the beginning of the Cold War, when the United States was focused on understanding Russian and vice versa, but very little quality progress was made over the intervening fifty years. The problem, Och explains, was twofold. The grammatical approach, in which a computer is programmed with the rules of two languages, say English and Russian, didn’t work very well, because there are too many little differences, slang uses and idiosyncrasies to provide an accurate translation. The statistical approach, in which a computer algorithm analyzes patterns in the languages and then compares them, was potentially more promising, but it also had big issues. First, Cold War–era computers didn’t have the processing power to analyze reams of digitized data. Second, those reams of data, which are all-important in producing the statistical sample for algorithms to analyze, simply didn’t exist. By the late nineties, however, both problems were no longer issues as computer processors were packing impressive horsepower and the internet had made digital data plentiful.
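To make the statistical approach concrete, here is a toy sketch in Python of the kind of pattern-matching it relies on. The three-sentence “parallel corpus,” the word pairs and the simple expectation-maximization loop are all invented for illustration; Och’s actual systems were vastly larger and more sophisticated. The point is only that co-occurrence statistics, with no grammar rules at all, are enough to reveal which words translate which.

```python
# A bare-bones, IBM Model 1-style estimator: learn word-translation
# probabilities purely from patterns in a (tiny, invented) parallel corpus.
from collections import defaultdict

parallel_corpus = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
]

t = defaultdict(lambda: 1.0)  # translation table t(foreign | english), uniform start

for _ in range(10):  # a few rounds of expectation-maximization
    counts = defaultdict(float)
    totals = defaultdict(float)
    for eng, fra in parallel_corpus:
        eng_words, fra_words = eng.split(), fra.split()
        for f in fra_words:
            norm = sum(t[(f, e)] for e in eng_words)
            for e in eng_words:
                frac = t[(f, e)] / norm   # expected alignment count
                counts[(f, e)] += frac
                totals[e] += frac
    for (f, e), c in counts.items():
        t[(f, e)] = c / totals[e]         # re-estimate t(f | e)

# The statistics alone line up "house" with "maison" and "book" with "livre".
house = {f: p for (f, e), p in t.items() if e == "house"}
print(max(house, key=house.get))  # -> maison
```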

In 2002 Och went to work at the University of Southern California’s Information Sciences Institute. The same year, DARPA sponsored a contest, not unlike its robot car races, to develop new algorithms for statistical machine translation, particularly ones that could translate Arabic into English. Och entered the contest in 2003 and built a translation engine using publicly available documents from the United Nations, which are translated by actual humans into the six official languages: Arabic, Chinese, English, French, Russian and Spanish. With this gold mine of millions of comparable digitized documents, Och’s algorithm scored an impressive accuracy rate and won the DARPA prize. The following year he was snagged by search-engine giant Google, which, like the military, had a significant interest in computerized translation. The two actually have opposite interests: while the military wants to translate languages, particularly Arabic and Chinese, into English so that it can understand its real and potential enemies, Google wants to translate English into other languages to open up the largely Anglo web to the rest of the world, which will dramatically increase the size of its advertising market.

Google launched Translate in 2001 and refined it with Och’s methods when he came on board. As of 2009 this online tool, where the user simply pastes in the foreign text and presses a button to have it translated into the language of their choice, handled more than forty languages. And it works—unlike those other gibberish-spouting attempts that came before it, Google Translate generally gives you the gist, and a little bit more, of the text. As a result, the search company tends to score at the top of annual machine-translation tests run by the National Institute of Standards and Technology. Still, the tool isn’t perfect and the trick now is to get its success rate up to 100 percent. To that end, the company in 2009 announced “translation with a human touch,” in which actual human translators can suggest improvements to the algorithm’s results. The success rate is also bound to improve, Och says, now that the tools to create better systems are freely available. “It’s so easy now because some PhD student somewhere can download all the data and some open-source tools that various people have written and build an end-to-end system. It was virtually impossible before.”18

Google is also applying the statistical approach to voice translation. In 2007 the company launched a 411 phone service that people could call to find businesses they were looking for. The caller spoke their query into the phone and Google computers would either speak the answer back or send a text message with the information. The point behind the service was to amass a database of voice samples, similar to Och’s U.N. documents, from which Google’s algorithm could work. The experiment has borne fruit, with the company launching a voice search service for mobile phones that lets users speak their query into the device, rather than typing it.

DARPA is now testing similar technology in Iraq. Soldiers there are being equipped with iPod-sized universal translator machines that can interpret and speak Arabic for them. The devices have the basics down, like discussions about infrastructure or insurgents, and are slowly improving in other areas as well. “We knew that we couldn’t build something that would work 99 percent of the time, or even 90 percent of the time,” says Mari Maeda, who runs the translation program for DARPA. “But if we really focused on certain military use cases, then it might be useful just working 80 percent of the time, especially if they don’t have an interpreter and they’re really desperate for any kind of communication.”19

Some linguists believe that computers, which have already become better chess players than humans, will eventually surpass our ability to translate languages as well. “Human translators aren’t actually that great. When humans try to figure out how to translate one thing, they drop their attention as to what’s coming in the next ‘graph,” says Alex Waibel of Carnegie Mellon, who was also born in Germany and does translation work for DARPA. “And they’re human. They get tired, they get bored.”20

Welcome Our Robot Overlords

The potential for machine translation far outstrips simple military and commercial uses. The ability to understand all languages may take the internet’s equalizing and empowering abilities to a higher level, providing a greater chance at world peace than we’ve ever known. Since the advent of mass media, the public has had its opinions of other people in other parts of the world shaped largely by third parties: newspapers, books, radio, television, movies. While the internet opened up direct links between the peoples of the world and theoretically cut out those middlemen, the language barrier still prevents real communication. With truly accurate and instantaneous text and voice translation only a matter of years away, that final obstacle is about to fall. In a few years’ time, Americans, for example, will no longer have to take the media’s portrayals of Middle Eastern Muslims at face value. They’ll be able to read, watch and understand Arabic news as easily as they view the New York Times’ website or CNN. And people from around the world will be able to communicate and interact with each other directly, one on one. Pretty soon, we’ll be getting friend requests on Facebook and Twitter from people in China, Tanzania and Brazil. Our social circles are about to broaden massively and we’re going to learn a lot more about people who have been alien to us thus far. As Och puts it, “There’s a real possibility to affect people’s lives and allow them to get information they otherwise couldn’t get. Machine translation can be a real game-changer there. That seems to me to be a good thing.”

Indeed, with this greater communication will come a greater understanding of other people, which will make it more difficult to go to war against them. If governments—the democratically elected kind, anyway—find it difficult today to muster public support to attack another country, it will only be harder when there are direct communications between the people of those two countries.

Where things really get interesting is in the application of statistical machine translation to more than just languages. Because the algorithm is designed to identify patterns, its potential uses in artificial intelligence are mind blowing. Google has identified as much and is taking baby steps toward the idea. In 2009 the company announced plans for a computer vision program that will allow machines to identify visual patterns. The project, still in its research phase, uses the same sort of statistical analysis as Translate. Google fed a computer more than forty million GPS-tagged images from its online picture services Picasa and Panoramio and came up with a system that could identify more than fifty thousand landmarks with 80 percent accuracy. In announcing the project the company said, “Science-fiction books and movies have long imagined that computers will someday be able to see and interpret the world. At Google, we think computer vision has tremendous potential benefits for consumers, which is why we’re dedicated to research in this area.”21
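As a rough illustration of how the same pattern-matching idea might carry over from text to images, here is a toy sketch in Python. The coordinates and “feature vectors” below are invented stand-ins for real image descriptors, and Google’s actual system is far more elaborate and has not been published in detail; the sketch only shows the general recipe of clustering GPS-tagged photos into landmarks and then matching a new photo against those clusters.

```python
# Toy landmark identification: group GPS-tagged photos into clusters, then
# label a new photo by comparing its features to each cluster's average.
import numpy as np

# (latitude, longitude, feature vector) for a handful of invented photos
photos = [
    (48.8584, 2.2945, np.array([0.9, 0.1, 0.0])),    # "Landmark A" photos
    (48.8583, 2.2944, np.array([0.8, 0.2, 0.1])),
    (40.6892, -74.0445, np.array([0.1, 0.9, 0.2])),  # "Landmark B" photos
    (40.6893, -74.0446, np.array([0.0, 1.0, 0.1])),
]

def cluster_by_location(photos, radius=0.01):
    """Greedily group photos whose GPS tags fall within a small radius."""
    clusters = []
    for lat, lon, feat in photos:
        for c in clusters:
            if abs(c["lat"] - lat) < radius and abs(c["lon"] - lon) < radius:
                c["feats"].append(feat)
                break
        else:
            clusters.append({"lat": lat, "lon": lon, "feats": [feat]})
    return clusters

def identify(query, clusters):
    """Return the index of the cluster whose average features best match."""
    scores = []
    for c in clusters:
        centroid = np.mean(c["feats"], axis=0)
        cosine = np.dot(query, centroid) / (np.linalg.norm(query) * np.linalg.norm(centroid))
        scores.append(cosine)
    return int(np.argmax(scores)), float(max(scores))

clusters = cluster_by_location(photos)
print(identify(np.array([0.85, 0.15, 0.05]), clusters))  # matches cluster 0
```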

Science fiction is in fact proposing the next direction that statistical machine translation could take. Caprica, the prequel to the hit series Battlestar Galactica—the best show ever, I might add—explores the idea of using pattern-identifying algorithms to create an artificially intelligent (AI) personality. In Caprica’s two-hour pilot, a teenager named Zoe uses such an algorithm to create a virtual AI of herself by feeding it with all the personal digital data she has produced in her lifetime. After Zoe’s death her father, roboticist Daniel Graystone, discovers the AI in a virtual world created by his daughter. The AI, a perfect replica of Zoe, explains to him how it was done:

AI Zoe: You can’t download a personality, there’s no way to translate the data. But the information being held in our heads is available in other databases. People leave more than footprints as they travel through life. Medical scans, DNA profiles, psych evaluations, school records, emails, recording, video, audio, CAT scans, genetic typings, synaptic records, security cameras, test results, shopping records, talent shows, ball games, traffic tickets, restaurant bills, phone records, music lists, movie tickets, TV shows, even prescriptions for birth control.

Daniel: A person is much more than usable data. You might be a good imitation, a very good imitation, but you’re still an imitation, a copy.

AI Zoe: I don’t feel like a copy.

As we learned with sex robots in the previous chapter, the lines between real, thinking and feeling human beings and well-programmed machines are likely to blur in the future. If, as in Zoe’s case, a computer can be programmed to statistically infer how individuals might act based on everything they’ve done before, we may very well be forced to treat them as real people. Och, who has watched Battlestar Galactica, isn’t sure that his algorithms will eventually result in killer Cylon robots, which is what Zoe’s AI becomes, but he does think they will enable smarter machines. “Many people see different things in the term ‘artificial intelligence,’” he says, “but it will definitely lead to more intelligent software.”

The Future Is Invisible

While many of the technologies outlined in this chapter are already having an impact on the world outside the military, there are also some way-out-there lines of research that could lead us in directions we’ve never even dreamed of. In that respect, Sir John Pendry, a theoretical physicist at London’s Imperial College, may one day be viewed as the “godfather of invisibility.”

Pendry is an institution in British physics, having received numerous honours over his decades of work in optics, lenses and refraction, culminating in his knighting by the Queen in 2004. In 2006, he explained to the BBC his theory of how invisibility would work:

Water behaves a little differently to light. If you put a pencil in water that’s moving, the water naturally flows around the pencil. When it gets to the other side, the water closes up. Special materials could make light ‘flow’ around an object like water. A little way downstream, you’d never know that you’d put a pencil in the water—it’s flowing smoothly again. Light doesn’t do that of course, it hits the pencil and scatters. So you want to put a coating around the pencil that allows light to flow around it like water, in a nice, curved way.22

It turns out that’s not so hard to do after all. The key is something called a metamaterial, a composite whose properties come from its engineered structure rather than from its chemistry, which is what gives a traditional substance such as copper its behaviour. Because that structure is built at scales smaller than the wavelengths it is meant to affect, metamaterials can channel electromagnetic waves in ways not found in nature. Not surprisingly, DARPA has taken an interest in metamaterials and in 2004 held a conference in Texas to discuss potential uses of these new substances. Pendry was invited to give a presentation, wherein he suggested that metamaterials could be used to bend electromagnetic waves, including light, around an object. He wrote the idea up for DARPA—the agency occasionally sponsors research from non-American nationals, as long as they are friendly to the United States—and since then, two separate groups at Berkeley and Cornell universities have used the idea to build metamaterial “invisibility cloaks.” The cloaks were only a few millimetres wide and could cover only two-dimensional objects, but they successfully bent light to flow like a fluid around their subjects. With proper funding, which will doubtlessly come from the military, it may be only a few years until large, stationary three-dimensional objects can be made invisible. Moving objects, however, are more complex, so it may be a while before Harry Potter’s fabled invisibility cloak becomes a reality. The first realization will probably be a static object: more likely a gun turret than a cloak.

Metamaterials offer a world of possibilities. Bending light to confer invisibility may be just one of their nature-defying capabilities. More applications, and more mainstream benefits, will become apparent as scientists come to understand the materials better. One spinoff is already being seen. Because they are tremendously light, metamaterials are working their way into radar systems, making these less bulky and extending their use to new applications such as the car collision-detectors mentioned earlier. As iRobot’s Joe Dyer says, these sorts of far-out technologies tend to come in “on cat’s feet,” one small step at a time.

That said, when it comes to invisibility, the British military isn’t waiting for the technology to creep in slowly. In 2007 the U.K. Ministry of Defence and its contractor QinetiQ announced they had successfully made tanks invisible, albeit not with metamaterials but with more pedestrian technology. Using video cameras and projectors mounted on the tank itself, QinetiQ researchers fooled onlookers into completely missing the vehicle by projecting its surroundings onto its surface, an advanced form of camouflage. “This technology is absolutely incredible. If I hadn’t been present I wouldn’t have believed it,” said a soldier present at the test. “I looked across the fields and just saw grass and trees, but in reality I was staring down the barrel of a tank gun.”23 British military experts expect such tanks to be in the field by 2012—which, from the way things are going, means they could see action in Afghanistan or Iraq.

And yes, it is deliciously ironic that British scientists, who worked so hard to make things visible with radar sixty years ago, are now putting so much effort into making them disappear from view.

The Pornography of War

It’s a paradox that the longer the wars in Afghanistan and Iraq go on, the more technological advances there will be. In a way, the more death and destruction the West visits on the Middle East, the more economic benefits it will reap, since the weapons of today are the microwave ovens and robot cars of tomorrow. Like hunger and poverty, the desire to test out new technologies is a major driving force behind such conflicts. The more politically minded would-be terrorists see this unfortunate cause and effect as a form of imperialism, so they fight it by joining al Qaeda and the Taliban. For the general population, it must also rankle. Rather than creating the inventions, as they’ve done for centuries, the people are instead having foreign technologies tested on them. It’s a far cry from the Islamic Golden Age of science and reason.

Such is the way of war, however. When two opponents are evenly matched, there is less likelihood of one side trying anything radical; while a crazy new technology may promise ultimate victory, it can also bring disaster. Far-out new technology is usually only deployed after it has been thoroughly tested, or when one side has an apparent advantage. The Second World War is a perfect example. Nazi Germany only started deploying its futuristic weapons, like the experimental and highly volatile V-2 rockets and jet fighters, when the tide of the war had turned against it, while the United States only dropped the atomic bomb once victory was a foregone conclusion. The longer a war goes on, however, the less apparent it is that one side or another has an advantage, which is certainly the case in Afghanistan and Iraq.

The United States government, however, still believes it has the edge in those conflicts, so it continues the steady rollout of new technologies. For the immediate future, American defence spending will focus on smaller, more flexible and more personal technologies. Small, light robots will be a priority and individual soldiers will get a lot of new equipment to help them find terrorists who have melted into the urban landscape. Some of this technology will be biological, like the DARPA experiments into areas such as regeneration and heightened cognition, while some of it will be oriented around communications and sensors.

Since the first conflict with Iraq in the early nineties, we in the West have come to believe that technology is a key factor in deciding who wins a war. Images from the Second World War, Korea and Vietnam painted horrific pictures: soldiers suffering in disease-ridden trenches or lying in military hospitals with their limbs blown off, bodies being carried off the battlefield, dirt-smeared faces. Recent conflicts, however, have all but erased those images and replaced them with scenes of laser-guided bombs, futuristic-looking planes, bloodless and victimless destruction, green-tinged battles of lights. War has become sanitized, safer, almost fun. While troops in the Second World War had to sing to each other to raise morale, soldiers in Iraq can now while away their leisure time playing Xbox games. To those of us who are insulated from the day-to-day horror, war is more like a game—or at least it is sold as such, if the Air Force’s video-game-heavy website is any indicator.

This sanitization and video-game-ification has affected the general public. With his smart bomb video, Norman Schwarzkopf gave rise to the “war porn” phenomenon, which has grown in lockstep with the rise of the web. YouTube is rife with videos depicting American tanks, jets and drones blowing up Iraqi and Taliban targets, many set to rollicking heavy-metal soundtracks. The same is true in reverse, with the website full of videos of insurgents setting off explosives and blowing up American forces. While only a small subset of the population enjoys watching such footage on a regular basis, the reaction of many to a video of a Reaper destroying a building tends to be, “Oh, neat.” Never mind the people inside that building who have just been obliterated.

I believe this is how we subconsciously want to deal with war. We know it’s happening and we know the real human cost, but we prefer to think of it as a necessity that produces “neat” results. That has certainly been the case in the Middle Eastern desert, where the death of thousands of innocents over the past twenty years has indirectly supplied us with more comfort and convenience than we know.