CHAPTER 4

Computer Autonomy

Computer systems with greater autonomy, though not specifically robots, are perhaps the emerging military technology with the most potential for near-term application. The increasing pace of technological development is now running into human limitations on speed, cost effectiveness, and overall capability, while no hard limits for computer technology yet seem near. Although increased computer autonomy is in many respects welcomed, these systems are unlikely to escape the need for human supervision in the near term. This is especially true if the Western way of war continues to emphasize low collateral damage and flexibility.

In the long term, the potential for “thinking machines” and thinking weapons raises many questions. Some concern practical matters of doctrine (how these systems are to be used) and investment (how to achieve the types of computer autonomy desired by the military). Others are more philosophical and ethical, such as whether reducing the costs of war, both in financial terms and in casualties, increases the incentives to resort to military action for those that possess these technologies. The conjunction of practical and philosophical concerns forms the legal discourse over autonomous military computer systems. There is already some concern over the legal status of warfighters making use of the latest in remote warfare.1 Overall, the most critical questions are where and when this removal of the human element from war is appropriate, if at all.

The Problem with the Term Robot

Although the word “robot” is in common usage, there is quite a bit of dispute as to what it should specifically be applied to. A remote control (RC) car is not often given the label “robot,” but common police bomb-disposal robots are given just that label despite being just as reliant on human control to govern their actions. With munitions, “fire-and-forget” missiles and torpedoes are nothing new and under some definitions would qualify as “robots.” Once unleashed, a “fire-and-forget” weapon will autonomously seek a target based on its limited perception of the world. Despite its limited interactions with the world, and its short autonomous lifespan, a smart weapon can be said to make more “decisions” about what it is doing than a remote-controlled machine such as an RC car or police bomb-disposal robot.

The word robot in the English language is attributed to the Karel Čapek play R.U.R., where it comes from the Czech word “robota.” The subject of the play and source of its title, Rossum's Universal Robots, are biological constructions, more akin to the controversies over genetic engineering and other biotech of chapter six. The Czech word “robota” is variously translated to mean slave, serf, or one who performs drudgery. R.U.R. itself has an uprising by the robots midway through the play, perhaps an allegory for slave, serf, or peasant uprisings. R.U.R. often comes up as a cautionary tale about artificial intelligence (AI) running amok, a media trope that has certainly affected public perceptions of real-world use of autonomous military systems.

Lack of a physical presence has not inhibited the use of the term robot. In many of today's more complex computer games, the player competes against computer-controlled agents, or “bots,” that instead of following a preset pattern of actions have some capacity to react to the player's actions, the game environment, and even other “bots.” An antivirus program struggling on its own between updates to detect new and unfamiliar computer viruses also falls under a generalized concept of “robot.” These limited forms of autonomy exist only within a computer's memory. These two examples are autonomous agents that have a capacity to perceive, make decisions about their actions, and interact with their environment—the agents and the environment alike being purely information in the form of computer code and data files. At its core, the ability to make decisions is what sets a true robot apart from other forms of automation.

Now any random-number generator can produce decisions of a sort, so it is important for an autonomous computer system to be able to make, if not the perfect decision consistently, then reasonable decisions based on the supplied data. The speed, accuracy, and capacity of computers to process data, as long as it is in a form they can use, have led to great interest in such decision-making systems. Getting ahead of the competition in the decision-making cycle is applicable to both warfare and business. Among the more popular decision-making models, USAF Colonel John Boyd's OODA (observe, orient, decide, act) loop places a premium on getting ahead of an opponent's ability to make decisions. The same holds in the world of business, where expert systems and other forms of limited AI are making stock-market transactions, reacting to events faster than human traders could.

The limited autonomy found in some antivirus software tries to uncover computer viruses through recognition of virus behaviors and characteristics. In theory this allows the antivirus to detect viruses newer than its last update; computer viruses represent an arms race in which simply trying to keep up with new viruses and variants of existing ones would always leave the antivirus vendor playing catch-up. These programs, if the user allows, have the capability to decide on their own what files could be threats and to take action on their own. This capability is, however, also capable of misidentifying, and then deleting, perfectly harmless files, including those critical to a computer's normal operation. This form of autonomy is therefore often an optional capability used at one's own risk, as mistakes can and do happen. This is the computer equivalent of collateral damage.
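
As a purely illustrative sketch of this kind of behavior-based scoring, the logic can be reduced to a few lines; the behavior names, weights, and threshold below are invented for the example and are not drawn from any real antivirus product.

```python
# Illustrative heuristic (behavior-based) scoring for unknown files.
# All behavior names, weights, and the threshold are hypothetical.

SUSPICIOUS_BEHAVIORS = {
    "writes_to_other_executables": 5,
    "registers_autostart_entry":   3,
    "disables_security_service":   6,
    "contacts_unknown_host":       2,
}

QUARANTINE_THRESHOLD = 8  # arbitrary cutoff for this sketch


def assess_file(observed_behaviors):
    """Return (score, verdict) for a list of observed behavior names."""
    score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed_behaviors)
    verdict = "quarantine" if score >= QUARANTINE_THRESHOLD else "allow"
    return score, verdict


# A genuinely malicious file and a harmless system updater can both cross the
# threshold, which is how critical-but-innocent files end up wrongly deleted.
print(assess_file(["writes_to_other_executables", "disables_security_service"]))
print(assess_file(["registers_autostart_entry", "contacts_unknown_host",
                   "writes_to_other_executables"]))
```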

At the far end of autonomous systems research is the nebulous concept of “strong AI.” Strong AI is often described as the point where machine intelligence meets or exceeds human intelligence and the ability to reason. The definition of strong AI, however, often omits the problematic definitions of what constitutes human intelligence and the ability to reason. It is a growing area of philosophical, theological, and legal debate, as the concept is basically the creation of artificial life or, at the least, a form of artificial sentience. The United Kingdom has, in a recent series of scientific reports on emerging technologies, studied the concept of AI rights.2 Strong AI, when portrayed by mass media, usually decides to destroy humanity.

In the short and medium terms, the challenges and promise of strong AI are unlikely to be factors in real-world military autonomous systems. Although it may be useful at some time in the future to create an artificial soldier with all the versatility of the human mind and form, it may not be necessary to take it to the level of strong AI. The point of many autonomous systems is to produce inhuman endurance and precision, not to produce machines capable of reciprocating feelings of loyalty and fellowship, as some soldiers have expressed toward their unit bomb-disposal robot.3 At present it remains a challenge for AI to recognize humans in general, let alone abstract concepts such as “loyalty” or “harm,” leaving seemingly compelling problems, such as how Asimov's three (sometimes four) laws of robotics4 conflict with the notion of military robots, as fanciful debates for years to come.

Rapid advances in basic computer technology are leading toward computer systems that rival some measures of the human brain (keeping in mind that the function of the human brain is far from completely understood). The convergence of many of the technology areas covered in this book, especially when combined with traditionally high levels of military and intelligence-agency funding, leaves many strong AI proponents hopeful. Some, such as prolific technology writer Douglas Mulhall, even speculate that strong AI may emerge unintentionally out of future military or intelligence-agency projects.5

What Is a Computer and Why Everyone Has Great Expectations

The word computer means different things to different people. Convergence with telecommunications has given midlevel cell phones more processing power, and arguably versatility, than desktop computers that were commonly available at the turn of the century. Essentially what is commonly thought of as a computer is a general-purpose electronic device, able to load a variety of programs to perform functions involving the retrieval and manipulation of information. In contrast a desktop calculator and very basic cell phones are limited-function electronic devices; despite sharing many of the basic operating principles such as digital logic and integrated circuit technology, they cannot be easily repurposed for more than what they were manufactured for.

Modern computers are based on digital logic—mathematical operations based on finite values.6 Specifically everything done by a modern computer is eventually reduced to the storage and manipulation of numbers in a two-digit (binary) numbering system, the 0s and 1s now ingrained into popular culture's concept of computers. Boolean algebra, a form of mathematics specialized for a two-value system, 0s and 1s, on and off, true and false, is the underlying principle of digital computing. In terms of hardware, any Boolean function can be produced by a combination of NAND (negated AND) logic gates, meaning that a sufficient number of NAND gates produces a system capable of handling all mathematical functions. The details of the architecture used in real-world computer chips are more complicated than simply having very large collections of NAND gates, with specialized logic circuits to optimize performance; however, in theory, given no requirements for speed, size, and power efficiency, all the calculations performed by modern computing could be eventually performed by a very large array of NAND gates.7
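
A small sketch makes the NAND claim concrete: NOT, AND, OR, and XOR are each composed purely from a single NAND function and checked against their truth tables. This is a toy model of the logic only, not of how any real chip is laid out.

```python
# Sketch: any Boolean function can be built from NAND gates alone.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Verify the composed gates against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b)  == (a | b)
        assert xor(a, b)  == (a ^ b)
print("NOT, AND, OR, and XOR all reproduced from NAND alone.")
```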

Logic gates are built up from transistors. A transistor in this usage acts as a two-state switch used to represent the binary digits of 0 and 1. The performance metrics of a modern computer element, such as a CPU or memory chip, are related to the number of transistors found in the component and the speed at which those transistors can operate. Transistor technology has evolved from a collection of different metals soldered together to thin layers of metal and rare earth elements layered onto silicon wafers. Candidates for future transistors include complex arrangements of carbon atoms (carbon nanotubes and graphene) and self-assembling DNA molecules.

Prior to the modern digital computer, there were machines called analog computers. Analog systems are based on physical quantities along a continuous scale, where there is a smooth transition from point to point—an infinite number of different quantities between points. Digital representations have only a finite number of possible values. It is argued that, with only the nature of the physical medium limiting resolution, there is a richness found in analog representations, such as old-fashioned photography and analog audio, that is not possible with digital formats. Digital technology is limited to the resolution of the original impression captured. With traditional photography an image can be easily enlarged to very large sizes. A digital photograph, if enlarged beyond the limits of the original image, suffers from pixelation (becoming a series of jagged square blocks).

The advantage of digitally encoded data is that such data can be easily stored, manipulated, and transmitted. Imagery intelligence gathered by satellite or aircraft can be immediately accessed as many times as needed. Advances in data storage have meant that very high-resolution data can be stored in a physically compact form. Extra information is easily attached or embedded without damage to the original data. In the past, film would have to be recovered and processed, meaning it took time to reach the analyst and was rarely supplied to lower echelons of command. Recovery of film is not an easy task—aircraft and drones can be shot down, and satellite film-return systems involved the challenges of de-orbiting and reentry from space. A physical medium implies physical limits on how much can be recorded on a mission. Copying an analog medium introduces errors. Photographic intelligence can suffer from being overly marked on, or otherwise damaged, during analysis.

The contemporary technological era, the Information Age, unlike the prior Atomic and Space Ages, has its foundation embedded in everyday life. For the military these technologies have manifested in net-centric warfare, the current Revolution in Military Affairs (RMA), and other superlatives used to describe the modes of Information Age warfare spearheaded by the United States. Part of the reason that computer technology has become a dominant factor in contemporary society has been the exponential growth in computer power. Moore's law, named after Intel cofounder Dr. Gordon Moore, posits that, “The number of transistors incorporated in a chip will approximately double every 24 months.”8 The timeframe of 24 months was actually an update made in 1975; earlier, in 1965, Dr. Moore expressed that doubling would occur every year.9 Others have placed the recent rate of doubling at every 18 months.10 The greatest factor in this power increase is continuing advances in integrated circuit production; “Cramming More Components onto Integrated Circuits” was the title of Dr. Moore's original 1965 article.
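
A back-of-the-envelope projection shows what the stated doubling rate implies. The starting point below is roughly the transistor count of a first-generation microprocessor; the figures are illustrative round numbers rather than data for any particular chip.

```python
# Worked example of Moore's law as stated above: transistor counts doubling
# roughly every 24 months from an early-1970s starting point.

start_year, start_transistors = 1971, 2_300   # order of magnitude of the first microprocessors
doubling_period_years = 2.0

for year in (1981, 1991, 2001, 2011):
    doublings = (year - start_year) / doubling_period_years
    projected = start_transistors * 2 ** doublings
    print(f"{year}: roughly {projected:,.0f} transistors")
# Forty years of doubling every two years is 2**20, i.e. about a millionfold increase.
```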

The continuing exponential increase in computing power has been a strong factor in the development of many of the emerging (military) technologies covered in this book. Compact computing power will be needed for many of the on-orbit applications that justify the development of ubiquitous military space access. Computer control is critical to many of the technologies needed for a viable long-range directed energy weapon (DEW) attack; examples include atmospheric compensation and the basic sensor-to-shoot cycle for engaging targets at thousands of kilometers of range. For nanotechnology and computer power, it is clearly a two-way relationship: nanotechnology is one possible technology to continue the exponential growth of computer power, and the computer power being made available via Moore's law is needed in modeling the properties of atomic-scale constructions. The exponential growth in computer power is credited as the key factor in completing the map of the human genome,11 and will continue to be an important tool in finding applications for genetic research and other biotechnology breakthroughs. Advances in understanding how the brain functions, and learning in particular, are serving as models for solving many of the challenges that keep general-purpose machine autonomy out of reach.

There will be limits to how far a transistor may be shrunk; nearing the atomic scale, quantum physics introduces many complications beyond the scope of this book. Then again, Moore's law does not impose a size limit on the conceptual mass-produced integrated circuit; if transistors cannot get smaller, then their numbers could simply expand outward (larger microchips) and upward (moving beyond two-dimensional architecture to multilevel three-dimensional architectures).

This wealth of computer power, coupled with the rise of the Internet, has brought about many changes in society, with many surprising results. At the same time there was much hype over the Information Age, much of it predictable in hindsight. The fortunes lost when the dot.com bubble burst are warnings that simply throwing money at trendy-sounding technology is no guarantee of wealth. Post-burst, Web 2.0 has, however, been more successful in leveraging technology as a new medium for human interaction. Information revolution success stories include Internet commerce for real products, as well as turning online interaction, such as online search,12 into a commodity. These later Internet successes were not even imagined in the 1990s, when public Internet access was still emerging and largely regarded as a toy for the small technically inclined segment of society. Today some jurisdictions are treating Internet access as more of a necessary utility of modern life, and in some cases as a right.13

In the security and defense realm, Information Age warfare has also seen hype, some justified, some not so much. The 2003 U.S.-Iraq War can be, and is, argued to be an example of both. RMA and net-centric warfighting concepts were used to rapidly overcome the Iraqi military and initially oust Saddam Hussein. The long struggle to establish security in the aftermath has, for many, raised doubts about the hype surrounding RMA and net-centric concepts (notwithstanding the fact that these concepts were originally meant for a different type of conflict than the low-level conflict seen in Iraq after 2003). A major theme in P. W. Singer's Wired for War is the contention that the early hype and interest over Information Age warfare was misdirected on computer networks, and the application of corporate information technology strategies (some of a questionable nature, as many businesses that exemplified such strategies suffered greatly during the dot.com bust) to the military,14 instead of on the robotic warfare systems that Information Age technology was slowly bringing into reality. Another theme was the proliferation and, perhaps more importantly, increasing acceptance of battlefield robotics, specifically land-warfare systems; this was effectively an argument that the true RMA of the Information Age is the removal of people from the platforms, enabled by networks and perhaps later by autonomous computer systems.

It is far more important that humans’ 5,000-year-old monopoly over the fighting of war is over.15

The networking of computers and computer-controlled devices has led to other areas of security and defense thinking on the Information Age: cyber security and cyber warfare. Computer virus writing started off as pranks and experimentation, but today it is a means for serious criminal activity. The U.S. government's Internet Crime Complaint Center, a partnership primarily between the Federal Bureau of Investigation (FBI) and the National White Collar Crime Center (NW3C), noted in its 2010 report a growing diversity of Internet crimes being reported.16 Many of the same tools used to hijack, steal information from, and otherwise attack Internet resources can also be directed toward political ends.

Incidents of interstate cyber conflict have been clouded by the ease with which an attack's point of origin can be hidden. By hijacking the services of multiple computers across multiple networks, attackers can mask their trail.17 Often it becomes difficult to discern whether the malicious activity is the work of criminal entities acting on their own, criminal entities employed by foreign governments, or computer-warfare organizations, such as those known to exist in the People's Republic of China.18 Much of this activity is espionage, with U.S. government agencies and defense contractors as natural targets. The globalized nature of the Internet and the transnational nature of many computer components have raised not only the specter of foreign attacks breaching the thus far limited defenses in use, but also of foreign intelligence and military organizations being able to preinstall backdoors into the software and hardware of critical U.S. military, intelligence, and civilian systems.19

In 2010, the Stuxnet computer virus made headlines by possibly being specifically targeted against industrial equipment connected to Iran's nuclear fuel enrichment program. The complexity of Stuxnet has some analysts speculating that it could only have originated from the resources of one or more national governments that have been attempting to stop Iran's nuclear program, which is feared to be a cover for nuclear weapons development.20 In other words, Stuxnet potentially represents a case where a computer attack was an instrument of “politics by other means.” If this assertion is true, then for some analysts a Pandora's box has been opened, where all nations with computer-controlled infrastructure may come under threat. On the other hand, the existence of vulnerable computer-controlled infrastructure, largely a result of human inaction, is also an invitation in itself for attack. At the time of writing the origins of the Stuxnet computer virus are unknown, but some of the specific techniques it employed as a virus are now being found in other forms of malicious software, this time of a criminal nature.21

Policies to counter cyber threats have been slow to emerge, in part because the potential of the Internet is not well understood. An increasing number of business resources and industrial supervisory control and data acquisition (SCADA) systems are becoming accessible online, usually to increase productivity by allowing staff to work from any location with Internet access. A specific point of concern is the use of SCADA systems in the operation of critical infrastructure such as power plants. Underscoring this has been the fact that many of the recent incidents of cyber intrusion are less about sophisticated computer code and more about psychology, preying on people's lack of understanding of, and will to follow, sensible computer-security processes. Security technologies such as encryption and Virtual Private Network (VPN) software, as well as hardware-based security devices, can all be rendered useless by improper use by employees. Inattention to locking down online resources opens vulnerabilities for attackers to stumble across. Social engineering uses subterfuge to convince end users to betray usernames, passwords, and other information useful for someone wanting to break into a computer network.

The power of computers has also given those who choose to betray the trust placed in them the capacity to cause greater damage, as in the case of U.S. Army Private First Class Bradley E. Manning, who at the time of writing is alleged to have been the party responsible for copying, and later delivering to unauthorized persons, thousands of classified documents and videos. The sheer number of files involved introduces an indiscriminate element to the alleged actions; the stolen material covered diverse topics and activities of the U.S. government, with the restricted classification and presence on the same secure network often being the only major commonalities.

There is of course an element of hype to war in cyberspace. Thus far there have been incidents of fraud, theft, and espionage. There is, however, nothing yet approaching the “digital Pearl Harbor” feared by many. Prior to the release of the July 2011 U.S. Department of Defense Strategy for Operating in Cyberspace, there was speculation that the new strategy would formalize the option for a “kinetic” response to a major cyber attack.22 State-versus-state cyber attack, like an anti-satellite attack, must be considered in the broader sense of what it is intended to achieve. It would be unlikely that such an attack would be pursued purely for its own effects, and more likely would be in support of some other military action. Moreover the attention and hype over the threat of cyber attack has been leading to constructive discussion and, importantly, action on methods to blunt attacks.

Another emerging aspect of Information Age international relations and international security is the use of social-media technologies in motivating political action. At the time this chapter was being written, popular uprisings demanding democracy and accountability in government were occurring across the Middle East and North Africa. These uprisings, referred to as both the Arab Spring and the Jasmine Revolution, were partially coordinated with social-media tools such as text messaging and online chat. Repressive regimes have been mindful of these developments, taking action to block access to these services and online resources.23 Similar Internet-fueled discontent over charges of fraud in Iran's 2009 presidential election was suppressed by Iran's repressive regime. Social media, including the easy ability to post digital videos online, are allowing the world a view into such events when traditional journalists are denied access. The events of 2010 and 2011 will certainly provide many lessons both for those who seek freedom and for the regimes that seek to maintain a grasp on power.

A darker side to technology's capacity to inspire action is that it is also being used by violent fringe groups to recruit new members and distribute training material and propaganda. Although such groups cannot mobilize the numbers seen in the protests demanding democratic reform in the Middle East, the nature of terrorism requires only a handful of fanatics to have an effect. This has not gone unnoticed by Western intelligence and law-enforcement agencies. Reportedly a web magazine run by Islamic militants was recently hijacked by the United Kingdom's Government Communications Headquarters (GCHQ), which replaced bomb-making instructions with cupcake recipes.24

Digital Realities

It is reasonable to expect computers to continue to grow in their capacity to handle numerical challenges for quite some time, but warfare is never a clean numbers problem. Over time computers have been able to carry out tasks in increasingly complex environments. Though mastering the game of chess may be daunting, it is relatively simple in comparison to navigating a personal home. The average household, with its untold capacity for clutter and change, is many magnitudes more complex than a chessboard with only 64 squares, 32 pieces, and a small number of rules for movement. Computers and people, for the most part, find the opposite of these two activities challenging—not everyone can play chess, but nearly everyone can find their way around a home. Computers were able to offer a challenging game of chess before the advent of personal computing, but developing a commercially viable computer system for navigating a home remains a difficult challenge for robotics.

Presently the chaos of the real world is largely avoided while still allowing for some forms of autonomous operation. Most models of robotic vacuums available today dispense with perceiving and memorizing the entire room, let alone developing a plan of action specific to the room, and instead act on a combination of random pattern running and the ability to detect and avoid obstacles and drop-offs in the immediate vicinity. By not having a memory, or needing to plan, and simply reacting to immediate sensor inputs as it attempts to run a series of patterns designed to maximize floor coverage, the robot vacuum deals with the real world by being ignorant of it outside of immediate concerns, such as avoiding drops and slowing down when nearing a wall. Cutting out the need for cognition, or anything approaching strong AI decision-making capabilities, was for a time controversial in robotics circles. Dr. Rodney Brooks, prior to iRobot's success, noted that his early work on robotics without cognition amounted to a form of heresy.25 Sidestepping the problem of robot perception is also to a large extent how modern autonomous unmanned aerial vehicles (UAVs), or unmanned air systems (UASs) as these machines are now termed, navigate global distances. Within the volume of space where the NavStar global positioning system (GPS) is usable, the coordinates for any point in space can be determined to within a few meters. The reality of easily obtainable, constant, and accurate knowledge about one's own position has greatly simplified the problem of autonomous navigation. GPS-guided machines in effect operate without knowledge of real-world geography.
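
The reactive, cognition-free style of control described above can be sketched in a few lines. The sensor names and thresholds here are hypothetical, chosen only to show a controller that keeps no map and no memory and simply converts immediate readings into reflexes.

```python
# Minimal sketch of a reactive (map-free, memory-free) vacuum controller.
# Sensor fields and thresholds are invented for illustration.

import random

def control_step(sensors):
    """sensors: dict with 'cliff_detected', 'bumper_hit', 'wall_range_cm'."""
    if sensors["cliff_detected"]:
        return "reverse_and_turn"                           # never drive off a drop-off
    if sensors["bumper_hit"]:
        return "turn_" + random.choice(["left", "right"])   # random escape turn
    if sensors["wall_range_cm"] < 15:
        return "slow_forward"                                # ease up when nearing a wall
    return "forward"                                         # otherwise keep running the pattern

print(control_step({"cliff_detected": False, "bumper_hit": True, "wall_range_cm": 80}))
```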

Prior to GPS, the options available to autonomous vehicle operations included terrain contour matching (TERCOM) and later digital scene-mapping area correlator (DSMAC). Both TERCOM and DSMAC are dependent on geospatial data of the ground under the flight path and machine perception able to recognize these “landmarks.” Systems that make use of star tracking are also in use, notably in missile and spacecraft applications. These earlier methods for autonomous navigation attempted to replicate in a machine some of the capabilities of a skilled navigator to recognize landmarks. Inertial navigation systems (INS) are built around precision sensors tracking a vehicle's own movements to determine where it is relative to its origin.

The limits of computer technology at the time, however, meant that major deviations from the planned flight path, or inaccuracies in the preloaded maps, would likely result in the missile or drone losing its way. The technology of the time also required extensive preparation of the geospatial data needed for a mission, which could not be changed in flight or at the last moment. INS, on the other hand, loses accuracy with distance and time as minute inaccuracies in even highly refined gyroscopes and accelerometers build up. Without the situational awareness of a skilled crew onboard to direct a vehicle closer, the applications for these forms of autonomous flight were limited. Early TERCOM and DSMAC were common navigation systems on long-range nuclear-armed cruise missiles, where accuracy of a few hundred meters was well within what was needed for these to be effective weapons. Despite these limitations, in some ways these earlier weapons had more “robotic” characteristics in that they had to be somewhat capable of sensing their real-world surroundings.
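
A rough worked example illustrates why INS accuracy degrades over time: a small, constant, uncorrected accelerometer bias is integrated twice into position, so the error grows with the square of elapsed time. The bias value below is an arbitrary illustration, not a figure for any real system.

```python
# Position error from a constant accelerometer bias, integrated twice over time.

bias = 0.001  # m/s^2, hypothetical uncorrected accelerometer bias

for minutes in (1, 10, 60):
    t = minutes * 60.0                    # elapsed time in seconds
    position_error = 0.5 * bias * t ** 2  # double integration of a constant bias
    print(f"after {minutes:3d} min: ~{position_error:,.0f} m of drift")
# ~2 m after a minute, ~180 m after ten minutes, several kilometers after an hour.
```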

Simply being supplied with a vehicle's present coordinates reduces the navigation problem to a matter of getting from one set of precisely known coordinates to another. This is one factor in the proliferation of all-weather precision weapons, such as the Joint Direct Attack Munition (JDAM) series. Though limited (often absent) situational awareness does present problems, especially with regard to operating in crowded airspace, GPS navigation has made possible long-range autonomous flight. On August 24, 2001, a U.S. Global Hawk UAS named Southern Cross II26 landed in Australia, successfully completing a milestone autonomous flight that originated from Edwards Air Force Base in the continental United States.27 Less than two years later, on August 12, 2003, TAM-5, otherwise known as The Spirit of Butts Farm,28 became the first model airplane29 to cross the Atlantic Ocean nonstop, a feat performed with GPS navigation on autopilot for the majority of its flight.30 The latter record is an indication of the ubiquity of GPS, and the very low cost associated with this form of autonomy.
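
As a sketch of how a GPS fix reduces navigation to moving between known coordinates, the standard great-circle formulas below return the distance and initial bearing from the vehicle's present position to the next waypoint; the coordinates used are arbitrary illustrative values.

```python
# Distance and initial bearing between two GPS fixes (haversine / forward azimuth).

import math

EARTH_RADIUS_M = 6_371_000

def distance_and_bearing(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))          # great-circle distance, meters
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360        # initial bearing, degrees
    return dist, bearing

# Present fix and next waypoint (illustrative coordinates only).
print(distance_and_bearing(34.90, -117.88, 35.05, -117.60))
```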

Converting the real world into data that a computer can comprehend is among the greatest challenges facing autonomous systems today. On a mobile autonomous system this of course has to be done quickly enough for the machine to act upon what it is able to perceive. As technology develops, the tracking of subtler signatures becomes possible. Early forms of machine perception, from well before the digital-computer revolution, could perceive and track high-contrast events such as a jet engine's nozzle, although as the air war over Vietnam demonstrated, the sun also provided a seemingly viable target. Today, there is work inside the laboratory, and among at-home computer enthusiasts, to refine computer software able to detect eyes and faces in pictures and video inputs. Commercial uses for this latter technology include using eye detection to cue and focus cameras, reducing “red eye” in photography, and biometric security (not simply detecting an eye, but also the unique identifiers of a specific eye).31 Autonomous face recognition and identification software used by the Internet company Facebook32 and others33 has raised privacy concerns, highlighting that the ease of using new technology has outpaced the governance of such capabilities. Biometric technology is being combined with cognitive and behavioral research in hopes that it can be an early-warning tool for terrorist and criminal acts.34

Face recognition, for identification purposes, expands on simply locating face-like images: it collects biometric data, such as the proportions and distances between features, and compares these numerical data points against a database. The latest smart weapons similarly are able to identify targets from a small onboard database of signatures corresponding to possible targets.35 This is something of a brute-force method, as expanding the repertoire of things to be identified means a larger library of known sensor inputs, coupled with faster processing and even additional specialized recognition algorithms to search this database. This rapidly becomes an insurmountable data-entry challenge.
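
A minimal sketch of the matching step, assuming the biometric measurements have already been extracted, is a nearest-neighbor search over a database of enrolled feature vectors. The names, numbers, and match threshold below are invented for illustration.

```python
# Nearest-neighbor matching of a measured feature vector against enrolled templates.

import math

DATABASE = {
    "subject_A": [0.42, 0.61, 0.33, 0.58],
    "subject_B": [0.47, 0.55, 0.29, 0.61],
    "subject_C": [0.39, 0.66, 0.41, 0.49],
}

MATCH_THRESHOLD = 0.08  # hypothetical; a looser threshold means more false matches

def identify(measurement):
    best_name, best_dist = None, float("inf")
    for name, template in DATABASE.items():
        dist = math.dist(measurement, template)  # Euclidean distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return (best_name, best_dist) if best_dist <= MATCH_THRESHOLD else (None, best_dist)

print(identify([0.43, 0.60, 0.34, 0.57]))  # close to subject_A, so a likely match
```

Every new identity enrolled means another template to measure, store, and search, which is the data-entry and processing burden noted above.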

General object recognition is still being studied, both in how humans and animals are able to perceive objects, and in how to either replicate or approximate such perceptive capabilities in machines.36 Although very basic computer logic can track an aircraft based on specific sensor inputs, having a computer recognize the concept of an “aircraft” from general-purpose sensory data is still in its infancy—an apt description in that many of these efforts are modeled on the growth and development of a child's ability to perceive the world.37

The chaos posed to machine perception in the relatively benign setting of the everyday United States (or the Western world in general) is far removed from the chaos that is warfare. Apart from the general perception problem, the target may be actively trying to evade the autonomous weapon. The early heat-seeking missile could only discern differences in infrared energy (heat) detected by its sensor, and had no way of knowing whether the heat source was a jet engine tailpipe, a flare, or the sun. Current-generation heat-seeking missiles employ a range of processing and target-discrimination techniques that are not only able to discern between the target, a decoy, and the sun, but have also, in recent times, been able to recognize a target without actually seeing it at launch, a capability called lock-on-after-launch (LOAL).

Now mistakes will happen sooner or later, but the computer is sometimes regarded as having an advantage in that there is an expectation that a computer will reliably replicate the same correct decisions given the same inputs. However, this reliability in following orders becomes a liability in light of the limitations of sensor technology. Given similar inputs, it may consistently produce the same wrong decisions. A valid target, a friendly military platform, and a civilian vehicle could all easily present an identical signature as far as a particular sensor is concerned. Combining multiple sensor types is one technique to narrow down the selection: military and civilian vehicles may have identical signatures to some sensors, while to other sensors the signatures could be quite distinct.38 Identification friend or foe (IFF) systems, which broadcast coded identification on demand, are also useful, though with respect to discussions on autonomous systems, examples of IFF failures and limitations do seem to come up often.39 Then again, in the chaos of war humans are also liable to commit “friendly fire,” and the use of autonomous systems to govern the release of weapons is such a recent (and controversial) development that it remains to be seen whether humans or computers are better at preventing these incidents.
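
The narrowing effect of combining sensors can be sketched as a simple consistency check: only candidates that agree with every available observation, including an IFF interrogation when one is possible, remain. All the signature values below are invented for illustration.

```python
# Sketch of multi-sensor narrowing: keep only candidates consistent with every observation.

CANDIDATES = {
    "hostile_fighter":  {"radar_class": "small_fast", "ir_band": "hot_exhaust", "iff_reply": False},
    "friendly_fighter": {"radar_class": "small_fast", "ir_band": "hot_exhaust", "iff_reply": True},
    "civilian_jet":     {"radar_class": "large_slow", "ir_band": "hot_exhaust", "iff_reply": False},
}

def classify(observed):
    """Return the candidate types consistent with all observed attributes."""
    return [name for name, sig in CANDIDATES.items()
            if all(sig.get(k) == v for k, v in observed.items())]

# The IR signature alone is ambiguous; adding radar class and an IFF reply narrows it.
print(classify({"ir_band": "hot_exhaust"}))
print(classify({"ir_band": "hot_exhaust", "radar_class": "small_fast", "iff_reply": False}))
```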

Decision Making

Connected to these problems of machine perception are the problems of machine understanding of syntax and context. At least fiction has gotten one tendency somewhat correct: machine intelligence is routinely portrayed as overly literal. Giving machines a better understanding of context and syntax would make computers increasingly capable of handling real-world situations where the appropriate response to stimuli depends on an awareness of the situation. As with pattern recognition in advanced machine perception, solutions to the problem range from brute-force techniques, such as attempting to program in contingencies for likely scenarios, to trainable systems that can be taught what an appropriate response is, and combinations of the two.

Most present-day, and all historical, computer control systems can only respond to stimuli, and consequently there is a sentiment that this blind obedience would lead to tragic mistakes when combined with weapons. A human being, on the other hand, should be capable of more than blindly following orders, and is consequently argued to produce better results, even under the stress of combat. One example of this would be Stanislav Petrov, who as a Soviet air defense lieutenant colonel on September 26, 1983, correctly dismissed reports from a satellite-based early-warning system that the United States had launched first one, then a small salvo of intercontinental ballistic missiles (ICBMs) against the Soviet Union.40 In some circles at least,41 these decisions are credited with preventing an accidental nuclear war during a particularly tense period of Soviet-U.S. relations. This could also be an example of the “man-in-the-loop” needed to supervise computer-controlled systems. The Russian rebuttal to the acclaim bestowed on Stanislav Petrov in 2006 contends that both U.S. and Soviet/Russian nuclear command and control had, and continue to have, several people in the loop for this very purpose.

In the previous example, the early-warning system did what it was designed to do: warn of nuclear attack if its satellite-based sensors were set off.42 It had no ability to consider that a single launch did not make sense in terms of global nuclear war, or, for that matter, that if its first warning was mistaken, its second warning could equally be wrong. In this sense the duty officers, such as Stanislav Petrov, were there to fill in the context of the situation, including reasoning that the small number of reported ICBM launches was more indicative of an error than of thermonuclear war. It and other computer-based early-warning systems of the time, and those in use today, are basically interfaces for early-warning sensors. Now of course interfaces filter data, which raises a problem familiar to management and leadership: the influence of information gatekeepers.
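
Purely as an illustration of the kind of context check the 1983 system lacked and its duty officers supplied, a warning could be weighed against what a genuine first strike would be expected to look like. The thresholds and corroboration rules below are invented; they do not describe any real early-warning system.

```python
# Hypothetical plausibility check layered on top of raw sensor warnings.

def assess_warning(reported_launches, corroborated_by_radar):
    if reported_launches == 0:
        return "no alert"
    if not corroborated_by_radar:
        return "probable sensor error; hold and seek corroboration"
    if reported_launches < 10:
        return "implausibly small for a first strike; treat as suspect"
    return "escalate to command"

# A single, uncorroborated launch report is treated as a likely error,
# which is essentially the judgment the duty officer made.
print(assess_warning(reported_launches=1, corroborated_by_radar=False))
```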

The still limited ability for a computer to determine context has many repercussions to how autonomous systems can and should be used. As mentioned earlier, it is possible to apply face-recognition software to large databases of digital photographs to help identify people for social purposes. However, basic face-recognition algorithms cannot tell if the face identified is a subject of the picture or someone in the background, and certainly cannot tell just from a digital picture if the person identified wants to be identified, or even wanted their picture taken in the first place. There are serious privacy implications from the idea that companies would simply presume anyone who is digitally photographed wants to be identified by their software unless they opt out.

For military autonomous systems, this would mean moving from simply being able to recognize a target to being able to determine when it would be appropriate to attack it. As one rationale for autonomous weapons is the potential to reduce collateral damage, understanding the context of the situation in which the target is found would likely involve some form of autonomous evaluation of the rules of engagement (ROE) before carrying out an attack. ROE has become something of a controversial topic lately, with rogue state and terrorist opponents openly attempting to evade attack by collocating their assets among noncombatants in a bid to avoid detection and, failing that, to use the very presence of civilian bystanders as shields, knowing that U.S. and Western forces have inhibitions against imposing excess collateral damage. Sometimes the ROE means allowing an enemy to escape to fight another day. Other times the value of the target means that, in accordance with the ROE, the attack must be conducted, resulting in civilian collateral damage. Neither decision is easy to make, especially under the stress of combat. In the aftermath, these types of decisions are open to second-guessing.
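
What “autonomous evaluation of the ROE” might reduce to in software can be sketched, with heavy caveats: the fields and thresholds below are hypothetical, and real rules of engagement are far richer and remain a matter of human judgment.

```python
# Hypothetical sketch of an ROE gate on weapon release; values are invented.

def weapon_release_permitted(target_confirmed, target_value,
                             est_civilian_harm, collateral_limit):
    if not target_confirmed:
        return False                      # never fire on an unconfirmed target
    if est_civilian_harm <= collateral_limit:
        return True                       # within the commander's stated limit
    # A high-value exception mirrors the dilemma described above: sometimes the
    # ROE still authorize the strike despite expected collateral damage.
    return target_value >= 0.9

print(weapon_release_permitted(True, 0.5,  est_civilian_harm=2, collateral_limit=0))
print(weapon_release_permitted(True, 0.95, est_civilian_harm=2, collateral_limit=0))
```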

This concept of machines being able to recognize not just what to attack, but when to attack, is controversial. The point made by the Russian press release is that nuclear weapons, during the 1983 incident and to this day, are too important to be placed under any kind of automated control (notwithstanding rumors of a parallel Soviet-era doomsday system43). Autonomous conventional warfare is, however, a different matter. Right now the decision of when to attack a target is made manually; a weapon is launched and all the shooter can do is wait for impact—being present on the launching platform or remotely controlling it does not change this. The responsibility for firing, and potentially any consequences from firing, rests with the shooter. In the case of the remotely controlled or supervised platform, this is the man-in-the-loop that the Pentagon repeatedly states will be in charge of weapon release by unmanned platforms.

Related to the controversy over whether AI is to be trusted with the decision to fire is the matter of who right now is making those decisions. The Central Intelligence Agency, despite its usually secretive nature, has become quite prominent in its armed drone operations against terrorists in the Middle East, Afghanistan, and Pakistan. The major concern cited in a recent United Nations Human Rights Council (UNHRC) report that addressed drone warfare was the lack of transparency inherent to intelligence organizations.44 While acknowledging that the drone attacks may be legitimate in themselves, the report brought into question the legal and organizational framework under which they are conducted.45 The subject of collateral damage from prolific counterterrorism operations only adds to the legal discourse over matters of accountability and responsibility for firing.46

These questions of accountability and responsibility are being applied to the still-hypothetical autonomous system capable of making the decision to attack. As target identification is more developed than the capability to evaluate the situation, there are also fears that these weapons will be employed without a rigorous ability to consider collateral damage. In this case collateral damage would be the result not of a mistake, but simply of an undeveloped or oversold feature. Even without a restrictive spending climate, weapons vendors have been accused of overselling features—this may someday include a weapon's ability to distinguish valid targets.

The basic antipersonnel landmine is a weapon abhorred by much of the world as an indiscriminate killer. It also happens to be an autonomous weapon, in that it is left behind with the expectation that it will go off once certain sensing criteria are met. Among the major objections to landmines is this type of weapon's general nonconformance to the “principle that a distinction must be made between civilians and combatants.”47 The pressure switch of an antipersonnel landmine does not know whether the foot setting it off belongs to a soldier or a civilian.

Yet at the same time landmines are a cost-effective means to deny territory or shape the movements of the enemy. Unlike many of its allies, the United States has let the second argument prevail in its decision not to sign the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and On Their Destruction (also called the Ottawa Treaty). The United States is not alone in this regard, as military powers such as Russia, the People's Republic of China, and India have also declined to join the landmine treaty. It must therefore be asked if, and when, limited discriminatory capability diminishes an autonomous weapon's utility as a precision weapon. Certainly, some ability for target discrimination would be better than the total lack found in the basic landmine, but how much is needed to address legitimate concerns?

It must also be remembered that someone in authority should know, and is responsible for, where the autonomous system is deployed. A tactical commander could include a weapon's cognitive limitations in the selection process for where and when to use a particular autonomous system. In this sense the use of an autonomous system again could be regarded as nothing more than firing a weapon, with the weight that it carries, except the weapon has the option of not attacking a geographical location if the target is not present or the situation has changed since the decision to fire. The true question then becomes whether the autonomous system's ability to distinguish targets is appropriate for the situation.

A related, somewhat philosophical problem is whether having weapons with some capacity for self-restraint over when and how to attack is an invitation to use these weapons. Does the weapon's capacity for precision and limited harm give a false sense of comfort, reducing the perceived moral and ethical weight involved with its employment? Arguments are being made that, for similar reasons, less-than-lethal weapons produce a relaxed attitude toward their employment, leading to overuse.48 Similarly there are charges that existing armed UAV capabilities, the ability to destroy targets without exposing one's own personnel to the immediate risks of attacking, are proving too tempting not to use.49 It could be argued that if it were not for contemporary risk aversion toward casualties and collateral damage, these targets would have been attacked via pre-RMA methods, such as area bombing. Capabilities such as unmanned platforms, weapons of increasing precision, and platforms able to autonomously take advantage of brief opportunities to attack fill in for the firepower lost due to current attitudes on reasonable use of military force.

Despite efforts to limit collateral damage, there will always be critics of the use of coercive force. Although their humanitarian sentiment is to be commended, it sometimes seems that for some there is no amount of adequate risk and collateral-damage prevention short of complete abstinence from military force. Indeed there are limits to how much more collateral-damage prevention is sensible. The rise of the smart bomb in the 1970s and 1980s was driven by the need to increase conventional weapons' effectiveness. Part of this was to develop conventional-warfare options that avoided the many conceptual, operational, and even moral problems of tactical nuclear war in the face of the Soviet Union's numerical superiority. After success in the 1991 Gulf War, there continued to be funding for new generations of even smaller yet more accurate weapons, now with collateral-damage reduction as a stated goal. The smaller the target, the more expensive the capability needed to hit it. The sad reality is that human conflict is not going away, and increasing the level of precision has diminishing returns. These two factors mean that death and maiming from war are not likely to go away. However, the United States, the West, and even near peers are at least investing in capabilities that give the choice of a more humane form of warfare.

Operation Neptune Spear, the mission on May 1, 2011, to kill or capture Osama Bin Laden, the terrorist mastermind behind the 9/11 attack, highlights several limitations of contemporary unmanned warfare options. After an intense hunt lasting almost a decade (and earlier attempts to find him for other terrorist atrocities prior to 2001), Bin Laden was located hiding in “plain sight” in a relatively affluent neighborhood of Abbottabad, Pakistan. Reportedly the initial option considered was to simply destroy the compound that Bin Laden was hiding in with a precision air strike.50 An air strike, however, not only risked significant collateral damage, but also had the problem of confirming that Bin Laden was in the compound in the first place. The terrorist leader had been careful to avoid being seen from outside the high-walled compound, though there was clearly enough nonvisual evidence to support action of some kind against the compound. In the end the Obama administration took the very risky option of sending U.S. special operations forces to raid the compound and if not able to capture Bin Laden alive, to collect the body for identification.51

Despite losing a helicopter during the landing, the mission was successful with no U.S. casualties. The raid highlighted that unmanned options, such as cruise missiles and precision-guided bombs, lack the precision and flexibility needed for such a high-value target. Ground forces could have called off the mission if the target was not present. During the actual raid, they had to react to the original plan falling apart due to one helicopter suffering damage, and were able to recover the bodies for identification. These are things that machines cannot do presently with any degree of reliability. In the lead up to Operation Neptune Spear, the Obama administration had been stepping up the number of attacks carried out by armed UAV,52 indicating that unmanned options for killing terrorists were very familiar to this president in particular.

Digital Art of the Possible

No one nation has a monopoly on the computer sciences, and the cost barriers for many areas of defense- and security-related computing are low. In some areas of civilian robotics the United States has to some extent fallen behind.53 There is certainly competition in unmanned military systems. It should be remembered that although there has been intermittent U.S. interest in remotely piloted and unpiloted aircraft (older terminology) since World War I, it is often argued that sustained U.S. interest in UAVs was only sparked by Israeli success during the 1982 Lebanon War, where unmanned aircraft were used to both decoy air defenses into attacking nonexistent air strikes54 and to provide critical battlefield surveillance.55 The Israelis remain very competitive in the field of unmanned systems.56 The People's Republic of China has certainly taken note of the recent U.S. proliferation in battlefield robotics, producing both unmanned aerial systems that are at the very least inspired by Western airframes, as well as home-grown UAS concepts.57 More recently, subnational actors, such as the terrorist group Hezbollah, have started operating unmanned aerial systems.58

The capacity to outthink and outmaneuver an opponent, agility in other words, often comes up when one cannot or will not resort to overwhelming (and expensive) quantities of force. Colonel John Boyd's OODA loop is associated with this strategy, as is the lightweight fighter concept, which led to the F-16, designed to capitalize on agility. Speed is an important factor in whether, and why, computers are to be given a place in the decision to kill. The time delay between a surveillance drone seeing a target and a manned aircraft striking that target led to many targets escaping. This in turn led to those surveillance drones being armed in an ad hoc manner with antitank missiles, embodying the entire “sensor-to-shooter” cycle in one airframe.59

Increasing the effectiveness of weapons is no longer a crude matter of increased firepower or destructive capability; otherwise nuclear weapons would have more regular utility. Instead, usable weapons these days are based more on precision and overwhelming speed. As mentioned earlier, there is a premium placed on getting ahead of the opposition in forms of human competition and conflict, and computer assistance is viewed as a key factor. However, there are those who will argue that human reaction time has become a limiting factor in these man-machine combinations. This is where fully autonomous weapons may have advantages over human-controlled weapons. The armed UAS was faster than the drone calling in manned aircraft to strike. A fully autonomous unmanned combat air system (UCAS) or other unmanned combat system may be even faster than remote human operators at noticing and evaluating valid targets. The problem is that autonomous target perception and evaluation are still emerging military technologies in need of development. For the costs involved with developing such capability, is the increase in speed worth it? Will mistakes made by a fully autonomous weapon be viewed in a harsher light than those caused by human error, despite being just as lethal? How much better will the autonomous weapon have to be before it is fully unleashed from continuous human supervision?

The budget realities of the United States in the early 21st century add to the already daunting challenge of identifying near- and short-term trends in military autonomous systems. On one hand, R&D funding faces cutbacks. On the other hand, autonomous computing and unmanned warfare systems promise long-term cost savings. When the U.S. Army's Future Combat Systems (FCS) program for completely new manned and unmanned vehicles was cancelled in 2009, the unmanned systems survived to be independently integrated with the existing vehicles.60 Aside from the survival of specific programs, there are also existing policy directives. Section 220 of the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 (Public Law 106–398) called for a third of the operational U.S. deep-strike aircraft fleet to be unmanned by 2010, and a third of operational ground-combat vehicles to be unmanned by 2015.61 Despite the short timeframe remaining, these goals were cited in the FY2009–2034 Unmanned Systems Integration Roadmap as part of the congressional direction that formed the basis for that Pentagon document, which still appears to be current at the time of writing.62

The “three Ds” used to advocate unmanned systems, tasks that are dull, dirty, or dangerous, also happen to be areas where the replacement of humans promises savings. Machines are already undertaking long-term surveillance missions, with fuel and maintenance being the limiting factors for endurance. Analysis of the data collected could also be described as dull; however, the low cost of electronic reconnaissance and intelligence collection has turned the seeming monotony of scrutinizing collected intelligence into an overwhelming flood that cannot be coped with without additional personnel.

A better example of cost is that of the personnel needed to maintain an effective sentry. Unmanned sensors, and smarter monitoring software, do not get tired or degrade in performance over the course of a shift.

Having people perform dirty and dangerous tasks has many costs. Financially, there are the direct costs of hazard pay and protective equipment. In the Western world, if people are to be placed in harm's way, then it is generally accepted that they must be properly trained and prepared to handle the risks—training and preparation that have become both more comprehensive and more expensive over time. In the end there are the human costs; people suffer the consequences of warfare through death, physical injury, and trauma. Although it may be cold-hearted to note, the reality is that injured soldiers also have long-term costs in terms of rehabilitation and other forms of medical care that are owed to those who put themselves in harm's way on behalf of the nation.

Robotics may give options to minimize casualties, not necessarily just for the combatants removed directly from the battlefield. Remotely operated or supervised robots present the opportunity to give emotional detachment to the “man-in-the-loop,” which is an opportunity to reduce the unfortunate mistakes of war brought on by stress and the pressures of surviving combat. Instead of speed being the armor, the operator would have distance as protection. An operator removed from the direct dangers of the battlefield would not have the pressures of survival forcing hasty decisions. Indeed, the remotely operated vehicle or supervised autonomous robot could in effect be treated as a disposable avatar—hardware traded for a theoretical reduction in collateral damage under many circumstances. Where appropriate, the machine soldier would methodically, perhaps leisurely, select targets, as it is in no rush and, as far as the operator is concerned, in no real danger. Now for some, reducing the act of killing to the emotional level of answering a customer service call is disturbing and inhuman. The psychological impact on the drone operator is also little studied at present.63 There is, however, great appeal to the idea that a professional soldier would kill in a manner consistent with the ideals of a just war; giving the shooter time to consider their actions is one way to achieve this.

Then there are the political costs of captured personnel. Not every political entity is as concerned with the proper treatment of prisoners of war as Western governments are. That there is open and continuous debate on the treatment of the small number of captured nonmilitary combatants, who are not covered by POW conventions, attests to the general consensus in the West that detainees should be treated humanely, at least by some definitions. Historically this has not been the case elsewhere, as demonstrated by the notorious “Hanoi Hilton” and the treatment of downed Coalition aircrews in the 1991 Gulf War.

Unmanned systems, despite charges of being increasingly “gold plated” by contractors and procurement officers alike,64 can be made financially much cheaper to own and operate. The classic example used to highlight the cost advantages of unmanned combat is the fighter pilot. Fighter pilots come from the larger pool of military aviators via an expensive selection and training process. When sent out on missions, they require life-support equipment to function in the deadly conditions found at high altitude, a G-suit to endure the effects of rapid maneuvering, and, as a final onboard level of protection, an explosively propelled ejection seat to sit on. Depending on how far one wishes to take the accounting, a pilot's support costs also include the search and rescue resources the United States has made it a duty to provide, including fleets of aircraft and helicopters with their own crews, as well as specialist personnel, including special operations forces. Beyond this are the pension and veteran benefits due to all involved with air combat operations.

A UCAS, on the other hand, could be cut down to the minimum equipment needed to reliably launch, find the target, attack, and return for recovery. If the last part, return for recovery, is omitted, the weapon system is a cruise missile. Training involving increasingly higher-resolution simulations is another avenue for savings: with virtual training there would be no need to expose the actual UCAS to the wear and tear of flight. Even if the rest of the UCAS were manufactured with the same redundancies found in manned aircraft, the lack of a pilot and the associated support equipment, infrastructure, and personnel would provide considerable savings.

Autonomous Data Analysis

It is likely that the proliferation of unmanned systems for intelligence collection will create a need for advanced tools to help with analysis. Battlefield sensors are being miniaturized with the aim of providing a pervasive blanket of “smart dust.” Although such surveillance coverage is unlikely to eliminate the fog of war, it will add to the friction of war by producing an overwhelming flood of data. Automating some of the pre-analysis would help with the data bottleneck, and AI will be essential to preventing such systems from becoming simple, and potentially misleading, filters and information gatekeepers.
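To make the idea of automated pre-analysis concrete, the following sketch ranks incoming reports so that an analyst sees a prioritized queue rather than a raw feed. The field names, keywords, and weights are entirely hypothetical; a fielded system would use far richer models, but the triage principle is the same.

```python
def triage(reports, watch_keywords=("convoy", "launcher", "tunnel")):
    """Toy pre-analysis pass over incoming sensor reports: score each
    report by simple cues (keyword hits, sensor confidence) and return
    them highest-priority first, so analysts see a ranked queue rather
    than a raw flood. Field names and weights are hypothetical.
    """
    def score(report):
        text = report.get("summary", "").lower()
        hits = sum(word in text for word in watch_keywords)
        return hits * 2.0 + report.get("confidence", 0.0)

    return sorted(reports, key=score, reverse=True)

incoming = [
    {"summary": "Routine traffic on highway 7", "confidence": 0.3},
    {"summary": "Possible launcher under camouflage near tunnel", "confidence": 0.8},
]
for report in triage(incoming):
    print(report["summary"])  # highest-priority report prints first
```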

The commercial sector has produced numerous customer relations management (CRM) and data-mining tools to sift through the massive databases that consumers willingly and unwillingly contribute to with every transaction. Though sometimes hard to discern, people fall into routines and patterns that are often replicated in the lives of complete strangers who fit a similar profile. Many of these events are captured in the transaction histories that banks and credit card companies keep. Other data is generated from the use of tags such as transit passes and supermarket loyalty cards.65 Much of the personal technology that people carry, such as cell phones, actively broadcasts information on a continuous basis. It is not just financial databases and the tracking of tagged objects that are being combined; facial-recognition technology is enabling tracking without the need for artificial markers. Surveillance cameras now use the same technology base as computer chips and have experienced a similar reduction in size and cost, accompanied by proliferation to the point of becoming another ubiquitous element of society. The now-profitable industry of web searches allows for quick location of a person's “web footprint,” including information a person may not realize is in the public domain, or may not want there. The rapid rise of digital technology, and its implications for society, presents many policy issues for government that extend well beyond the scope of this book.

Much of this surveillance infrastructure came about as independent components for purely commercial purposes. Patterns found in the behavior of communities are clearly useful for marketing and product support. Given large enough databases, and enough computing power, higher resolution can be obtained by focusing on smaller and smaller communities. Reduced costs of analysis and of marketing, both built on ongoing advances in computing power, have led to niche marketing or micromarketing. There are hopes and fears that this type of data collection, with proper analysis, will someday be able to focus advertising down to the level of the individual.66 Fraud detection is another capability that large-scale transaction monitoring and analysis lends itself to, and it already operates at the level of the individual.

Advanced analysis of the ambient data collected could potentially allow for preemptive identification of criminal and terrorist activity. In the aftermath of a crime or terrorist incident, investigators have to piece together disparate clues to build up a picture of what transpired. Investigators have been aided by computer technology and by tools that bring together the many forms of electronic monitoring occurring in the background. If criminal and terrorist acts have patterns in transactional behaviors that foreshadow the act, it becomes in theory possible to detect these patterns before an attack or crime.67 Developing computer systems capable of identifying trends and threats from the mass of data involves technologies such as pattern recognition and computer cognition.
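As a rough illustration of the principle, and not a description of any actual government system, the sketch below flags accounts whose most recent transactional behavior departs sharply from their own historical baseline using a simple z-score test. The data layout, account names, and threshold are hypothetical.

```python
from statistics import mean, stdev

def flag_unusual_accounts(histories, recent, threshold=3.0):
    """Flag accounts whose recent weekly spending deviates sharply
    from their own historical baseline (simple z-score test).

    histories: dict mapping account id -> list of past weekly totals
    recent:    dict mapping account id -> most recent weekly total
    Both the data layout and the threshold are hypothetical.
    """
    flagged = []
    for account, past_weeks in histories.items():
        if len(past_weeks) < 4:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(past_weeks), stdev(past_weeks)
        if sigma == 0:
            continue  # constant history; the z-score is undefined
        z = (recent.get(account, 0.0) - mu) / sigma
        if abs(z) >= threshold:
            flagged.append((account, round(z, 2)))
    return flagged

# Toy example: one account suddenly spends far above its baseline.
histories = {"A-1001": [200, 220, 210, 205], "A-1002": [50, 55, 60, 52]}
recent = {"A-1001": 215, "A-1002": 900}
print(flag_unusual_accounts(histories, recent))  # flags A-1002 as unusual
```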

As intelligence programs, the specifics of the government's data-mining and data-analysis tools for keeping up with the flood of data are kept secret; the interest from intelligence and military agencies, however, is not.68 The Defense Advanced Research Projects Agency (DARPA) has publicly listed many programs that would seem to be driving advances in computer pattern recognition, cognition, and autonomy. Among DARPA Information Innovation Office programs at the time of writing is Deep Learning, a program to move computer learning beyond the limited “shallow” learning available today by emulating how biological brains work, and to produce a general-purpose machine intelligence capability that can be applied to a range of applications.69 Another program, of perhaps more immediate applicability, is Robust Automatic Transcription of Speech (RATS), which applies computer perception and cognition to language detection, identification, and analysis. This latter DARPA Information Innovation Office program has great resonance with the current Global War on Terror, where good translators are critical but also, by some accounts, a hard-to-find resource.70

Following the death of Osama Bin Laden, an MIT paper published two years earlier entered popular media coverage of the raid; it concerned using biogeographic modeling to find Bin Laden and appears to have been not far off the mark. Biogeography is a branch of science normally used to model endangered species populations based on last-known sightings and habitat requirements, data points that had equivalents in the 10-year hunt for Bin Laden. Though Abbottabad was not at the top of the list of likely cities and regions where he might be hiding, it was second,71 according to the analysis done by Thomas W. Gillespie and John A. Agnew.72 More importantly, the behavioral modeling used had suggested that Bin Laden would more likely be found in an urban setting than in the mountainous tribal regions, as was commonly believed prior to the announcement that he had been found and killed by U.S. forces in a major urban center. This computer model was built from the limited amount of publicly available information on Bin Laden's last-known whereabouts and medical history.
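The general logic of such a model can be sketched in a few lines: score each candidate location by a term that decays with distance from the last confirmed sighting and a term that grows with the size of the settlement, a stand-in for the island-biogeography notion that larger habitat patches are likelier to sustain a presence. The numbers, place names, and weighting below are purely illustrative and are not the Gillespie and Agnew model itself.

```python
import math

def score_locations(candidates, decay_km=100.0):
    """Rank candidate hideouts with a toy distance-decay model:
    probability falls off exponentially with distance from the last
    confirmed sighting and rises with the size of the candidate
    settlement. All figures are illustrative assumptions only.

    candidates: dict mapping name -> (distance from last sighting in km,
                                      population)
    """
    scores = {}
    for name, (dist_km, population) in candidates.items():
        decay = math.exp(-dist_km / decay_km)      # distance-decay term
        habitat = math.log10(max(population, 10))  # crude settlement-size term
        scores[name] = decay * habitat
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical candidates, ranked most to least likely.
candidates = {"City A": (40, 1_200_000), "Town B": (250, 90_000),
              "Village C": (15, 4_000)}
for place, score in score_locations(candidates):
    print(f"{place}: {score:.3f}")
```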

Despite the common use of these tools by business, government data mining has run into controversy. First and foremost is the association of mass surveillance with oppression and tyranny, the “Big Brother” state of Orwellian fame. Computer technology potentially allows mass, indiscriminate data collection and analysis to be conducted at all times on everyone participating in modern society: ubiquitous surveillance. Among the more controversial programs was DARPA's Total Information Awareness (TIA) project, which, as highlighted by its critics, may have involved mass and uncontrolled access to various types of personal data and communications.73 Components of TIA are reputed to have continued after TIA itself was ended in the FY 2004 defense budget, meaning the legal and constitutional debate continues as well.74

There is also the potential for reading too much into the data. Criminals and terrorists, to be successful in blending into society, would exhibit many of the behavioral patterns of law-abiding citizens. There are concerns that, in an attempt to interdict criminal or terrorist activity, innocents may be caught up. Although it is unlikely that a person's yogurt preference will lead to accusations of terrorism, there have already been incidents where data-mining techniques have led to embarrassing investigative failures. One has to wonder whether the online research for this and other academic works on military technology has led to placement on government watch lists (and beyond that, to a list of the “mostly harmless”). There is a balance between security and freedom, one that often changes with events,75 but there are also principles, such as those on which the U.S. Constitution is based, that are sacrosanct and therefore cannot be ignored.

Air, Land, and Sea

Mobile machine intelligence faces more challenges. Apart from restrictive volume, power, and durability requirements, mobility itself presents its own perception and cognition needs. It is not surprising that UASs were the first operational examples of autonomous vehicles. Although exceptional situational awareness is prized in a military pilot, the largely empty skies give autonomous vehicles the freedom to operate without any specific need for situational awareness. Satellite-based GPS has, for all intents and purposes, eliminated the ambiguities of air navigation. This lack of awareness, as mentioned earlier, has led to resistance from pilots who may have to share airspace with these machines. There is a very real danger of collision with these remotely controlled and autonomous, but unaware, aircraft.76 Though there is a pilot, remotely piloted vehicles have their situational awareness limited by the constraints of onboard sensors and communications bandwidth. Remote supervision (instead of remote piloting) implies even less situational awareness. The fact that controllers are often not pilots is a related source of resistance to expanded UAS use. With the skies becoming more crowded with UASs, there is a growing need to build more situational awareness into these machines.
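The core of a basic “sense and avoid” capability is a geometric prediction of whether two aircraft will pass too close. The sketch below computes the time and distance of closest approach for two constant-velocity tracks in two dimensions; the separation threshold and the flat, two-dimensional treatment are simplifying assumptions made only for illustration.

```python
def closest_point_of_approach(p_own, v_own, p_intruder, v_intruder):
    """Time and distance of closest approach for two aircraft assumed
    to fly straight lines at constant speed (2-D, flat-earth sketch).
    Positions in meters, velocities in meters per second.
    """
    # Relative position and velocity of the intruder with respect to us.
    rx, ry = p_intruder[0] - p_own[0], p_intruder[1] - p_own[1]
    vx, vy = v_intruder[0] - v_own[0], v_intruder[1] - v_own[1]
    v2 = vx * vx + vy * vy
    if v2 == 0:                   # identical velocities: range never changes
        t_cpa = 0.0
    else:
        t_cpa = max(0.0, -(rx * vx + ry * vy) / v2)
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa
    return t_cpa, (dx * dx + dy * dy) ** 0.5

# Head-on toy case: intruder 5 km ahead, combined closure of 150 m/s.
t, miss = closest_point_of_approach((0, 0), (75, 0), (5000, 100), (-75, 0))
if miss < 500:                    # hypothetical 500 m separation minimum
    print(f"Conflict predicted in {t:.0f} s, miss distance {miss:.0f} m")
```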

At present most large operational UASs require an operator to remotely control the vehicle during takeoff and landing (or launch and recovery in current terminology).77 This reality is fast changing as technologies that allow for autonomous runway and taxiing operations are introduced. GPS enhanced by ground beacons allows for precision navigation down to the centimeter. One of the few publicized X-37B mission objectives was to conduct a fully autonomous landing from space; it successfully performed this feat on December 3, 2010, after its first orbital flight and 244 days in orbit.78 An arguably greater challenge was the demonstration in 2011 of repeated “hands-free” landings on a moving U.S. Navy aircraft carrier by an F/A-18 fighter aircraft as part of the X-47B Unmanned Combat Air System Demonstrator (UCAS-D) program.79 It should be noted that for normal F/A-18 carrier operations, the takeoff is always meant to be “hands free,” with the pilot keeping away from the computer-aided flight controls until airborne.

For unmanned ground vehicles (UGVs) or unmanned ground systems (UGSs), situational awareness is paramount, not only to prevent unfortunate incidents of unmanned hit-and-runs, but also to avoid impassable terrain and the many other obstacles to ground movement. The DARPA grand challenges of 2004, 2005, and 2007 very publicly demonstrated the initial difficulties, and later the fast pace of advancement, with autonomous ground vehicles. No vehicle was able to reach the finish line in 2004; several completed the rigorous 131.6-mile desert course the following year;80 and in 2007 several vehicles were able to navigate an urban course, with simulated traffic and a parking lot to contend with.81 For the U.S. taxpayer, these DARPA-run competitions, with relatively small prizes of a few million dollars, also had the benefit of driving multiple lines of research among the largely self-funded competitors, including a few private and volunteer efforts. Some of this research may of course end up in personal driveways; robotics experts have been suggesting for years that autonomous cars, with their theoretically superior adherence to the rules of the road, will cut accident rates.

UGS applications now in development include autonomous logistics vehicles able to form convoys and deliver supplies to outposts,82 and smaller multilegged machines to follow dismounted infantry through rough terrain.83 Short-range, remotely controlled robots, practically the same as those used by police bomb squads, became a common piece of equipment for explosive ordnance disposal (EOD) units operating in Iraq and Afghanistan. On a larger scale, future long-range autonomous UGSs could be equipped with minesweeping gear, relieving sappers and EOD teams of a task that can be both very time consuming and monotonous, arguably fitting the dull category, but inescapably dangerous. Mines and improvised explosive devices (IEDs) are a prominent danger for supply convoys, meaning UGS technologies would not only reduce the dangers to drivers, but also increase the odds of supplies getting through.

Armed UGSs, variants of existing EOD robots, exist, but there currently seems to be a reluctance to actually use them in combat.84 These are still remotely controlled machines, and many share technology with the remote weapon station (RWS) turrets mounted on some armored vehicles. Advantages cited for an RWS over a traditional manned turret include the RWS control station taking up less room inside the vehicle than a traditional turret basket, and increased protection from having the gunner fully inside the armored vehicle. Drawbacks include less situational awareness, by virtue of the outside world being condensed into a video stream, and lack of access to the weapon if it jams. The armed (remotely controlled) robot is in this sense the same as the RWS, except that the operator is given much more separation from the weapon, which conceptually gives more protection. The increased remoteness, however, aggravates the drawbacks and introduces the problem of a vulnerable control signal. With fears of cyber warfare, both the combat robot and the RWS face similar threats from hacking, though the wireless connection to the robot would seem more vulnerable.

Concerns over the enemy taking control of a UGS highlight the overall problem of maintaining a data link. Without an autonomous capability, a drone cut off from its controller is just as dead as if it had been blown up. Fears that peer competitors, and others, might be able to override the control of the drone operator describe a clearly worse scenario. There have already been reports of the data stream from UASs being intercepted.85 The need for a secure data link makes a remotely controlled unmanned system more vulnerable than a living, breathing soldier. A soldier can not only function, but also still fight, without a digital umbilical to HQ. The imprecise nature of psychology makes it much harder, arguably impossible, to break in and usurp leadership of the enemy's forces with any degree of reliability. A simple two-way data flow, on the other hand, requires only the hacking of encrypted communications and control software. A UGS operating in a busy urban environment is perhaps more susceptible. The urban jungle does not present a clean radio-frequency environment, with many transmitters competing for reception. Hostile electronic countermeasures, even without attempts to take over, add to the problem of communicating with the UGS.

The maritime environment presents its own challenges. It is better to consider maritime operations as two separate battle spaces: surface and underwater. Unmanned surface vehicles (USVs) can make use of GPS navigation, allowing them to use navigation techniques similar to those of UASs and UGVs. Maritime security, protecting shipping in harbors, and conducting short-range patrols are tasks that USVs are being marketed for. The unmanned undersea vehicle (UUV) has a more difficult navigational task, though one that is familiar to submariners: navigation without regular access to the outside world. The tethered approach used by remotely operated submersibles in research, such as deep-sea exploration, and in underwater industry would limit military applications to those that could remain near a mother ship.86 The difficulty of maintaining constant communications with a UUV would seem to make some degree of autonomous operation a necessity. Along with computer autonomy, accurate low-cost inertial navigation and onboard sonar technology have also benefited from information-technology-driven miniaturization. Potential applications are broadly similar to those in other environments and include minesweeping and reconnaissance of landing areas.
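Underwater navigation without external fixes reduces, at its simplest, to dead reckoning: integrating heading and speed over time and accepting that error grows until the vehicle can surface for a fix or recognize a feature with sonar. The toy sketch below shows only the integration step; a real inertial system fuses accelerometer and gyroscope data and must actively manage the drift this sketch ignores.

```python
import math

def dead_reckon(start, legs):
    """Toy dead-reckoning: integrate heading and speed over time to
    estimate position without external fixes (no GPS underwater).
    start: (x, y) in meters; legs: list of (heading_deg, speed_mps, seconds).
    """
    x, y = start
    for heading_deg, speed, seconds in legs:
        heading = math.radians(heading_deg)
        x += speed * seconds * math.sin(heading)  # east component
        y += speed * seconds * math.cos(heading)  # north component
    return x, y

# Hypothetical transit: 2 m/s due north for 30 minutes, then due east for 15.
print(dead_reckon((0.0, 0.0), [(0, 2.0, 1800), (90, 2.0, 900)]))
# -> roughly (1800, 3600): 1.8 km east and 3.6 km north of the start
```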

A potential technological choice in unmanned military systems is that between relatively large, self-contained machines and larger groups of smaller, individually less capable machines acting together toward a common goal, but without any direct coordination or centralized command: a swarm. In nature, very small creatures, often of limited intelligence, self-assemble into groups (swarms, flocks, packs, etc.) to carry out tasks such as migration and hunting. Although it may be impossible to create human-level intelligence (and therefore impossible to achieve the technological singularity), it may be possible to replicate the functional intelligence of an ant, locust, bird, fish, or wolf, all types of creatures that traverse great distances together and, of interest to the military, in some cases hunt in groups. The swarm concept leverages the still-ongoing trend of computers and sensors shrinking in size and price, and combines it with the study of successful examples from nature.

The military robot swarm already has something of a precursor in the CBU-97 sensor fused weapon (SFW). The SFW is a cluster bomb containing 40 hockey-puck-sized submunitions, called “skeets,” each equipped with multiple sensors (infrared to detect engines, a laser contour sensor to detect the shape of a vehicle) and an explosively formed penetrator warhead. Unlike the robots of the swarm concept proper, skeets are spread out by a complex dispensing mechanism: air bags eject a pattern of 10 parachute-stabilized dispensers from the CBU-97; these dispensers then use rockets to spin themselves and fling four skeets each over a targeted area. During its brief operational life, a skeet will on its own find a target and determine the best attack position during its intentionally wobbling descent; any that fail to engage self-destruct.87 A single SFW can attack multiple targets in a 121,400-square-meter (30-acre) area.88 The SFW is used to attack multiple easy-to-see targets out in the open, a convoy for instance, that the pilot needs to find before releasing the cluster bomb, making it a cluster bomb in accordance with precision-warfare paradigms.

A robot swarm, on the other hand, would use the many sensors distributed among the swarm to seek out targets, and then coordinate an attack against any targets found. Being self-propelled, a robot swarm would be able to survey hundreds if not thousands of square kilometers. A horde of small sensors with smaller fields of view can, for some elusive targets, be more efficient than a single large sensor scanning a large area. On finding a target, the swarm would assemble to attack cooperatively. After using only as much force as needed to destroy the target, the swarm would disperse again to find more targets. The swarm concept is just one of many that leverage expected near-term advances in robotics to provide first persistent surveillance, and later persistent attack, over entire regions. Specifically, the swarm concept utilizes not only computer autonomy, but also the ability of a group of machines to self-organize for the different needs of general surveillance and of carrying out an attack. This is not just trust in the machine, but trust in the aggregate of many machines. As with many network-enabled concepts, there is room for human supervision. The degree of human supervision and the centralization of command and control are indicative of whether a specific group of robots is operating as a swarm or is tied to a controlling “mother ship,” with trust in the technology ultimately deciding these factors.89
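This disperse-to-search, converge-to-attack behavior can be expressed with surprisingly simple local rules. The sketch below gives each notional swarm member only two behaviors: wander while searching, and move toward the nearest target broadcast by any neighbor within communication range. The class name, ranges, and gains are invented for illustration and do not describe any fielded system.

```python
import random

class SwarmAgent:
    """One member of a notional search swarm. Each agent follows only
    local rules: wander while searching; if any neighbor broadcasts a
    target within communication range, move toward it. A deliberately
    simplified sketch of swarm behavior, not a fielded design.
    """
    COMM_RANGE = 50.0  # hypothetical broadcast radius (arbitrary units)

    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, broadcast_targets):
        # Converge on the closest target heard over the local broadcast.
        in_range = [t for t in broadcast_targets
                    if self._dist(t) <= self.COMM_RANGE]
        if in_range:
            tx, ty = min(in_range, key=self._dist)
            self.x += 0.2 * (tx - self.x)
            self.y += 0.2 * (ty - self.y)
        else:
            # Otherwise keep dispersing on a random search walk.
            self.x += random.uniform(-1, 1)
            self.y += random.uniform(-1, 1)

    def _dist(self, point):
        return ((self.x - point[0]) ** 2 + (self.y - point[1]) ** 2) ** 0.5

# Ten agents scattered over the area; one detection is broadcast at (30, 30).
swarm = [SwarmAgent(random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(10)]
for _ in range(40):
    for agent in swarm:
        agent.step([(30.0, 30.0)])
```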

The swarm concept mirrors how computers are networked together today to handle tasks that a single monolithic computer would have difficulty with. Parallel computers have been put together from used desktop computers and gaming consoles to provide low-cost supercomputer-like capabilities. Parallel processing and today's high-bandwidth networks allow the creation of these supercomputers on an ad hoc basis, with elements spread across a large geographic region. Computer criminals already use similar techniques to overwhelm targeted servers in distributed denial of service (DDOS) attacks: thousands of hijacked personal computers are remotely commanded to flood the targeted server with network traffic. Hijacked computers, called zombie computers, formed into remotely controlled networks, called botnets, are hired out for criminal enterprises and spam e-mail purposes. It would seem that reliable covert remote control over thousands of remote computer devices is not an insurmountable requirement.
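The benign version of this divide-and-aggregate pattern is easy to show: split a large workload into chunks, farm the chunks out to whatever processors are available, and combine the partial results. The sketch below uses Python's standard multiprocessing pool on a single machine; the workload and chunk sizes are arbitrary, and the same pattern, scaled across networked commodity hardware, is what makes ad hoc supercomputing possible.

```python
from multiprocessing import Pool

def crunch(chunk):
    """Stand-in for any divisible workload, for example scoring one
    slice of a large sensor or transaction dataset."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the data into chunks and farm them out to worker processes,
    # then aggregate the partial results into one answer.
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(crunch, chunks)
    print(sum(partials))
```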

From Advanced Smart Weapon to Semi-Autonomy and Beyond

Many defense academics have noted the trend of increasing battlefield area allocated per combatant90 or, similarly, the increasing range of weapons over time. If sheer destructive power is to be avoided, then increased accuracy is required to compensate; this is the raison d'être for precision-guided weapons. Ranges and battle spaces have increased in size to the point where it is difficult for the human mind to handle them. This has created a need for various command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) tools, with increasing amounts of automation to assist the warfighter. In other words, the current trends and needs in conventional warfare provide plenty of opportunities that can be filled by AI advances.

There is reluctance, for both real operational and acceptance reasons, to remove the “man-in-the-loop,” so perhaps it is better to say that semi-autonomy is the direction that weapons and platforms are taking, in the near term at least. Indeed, given the difficulties involved in developing a robust form of AI able to cope with the real world, it would seem that people will not be cut completely out of the loop for some time to come. However, computer power is increasing at an exponential rate, and systems are becoming increasingly capable of at least mimicking many decision-making processes. Increased computer power is aiding research in other areas of science, including the nature of consciousness and sentience. These advances are converging, with perhaps long-term implications for just how much trust future leadership and warfighters are willing to place in machines that can decide when to kill.


Notes

1. Philip Alston, United Nations Human Rights Council, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Philip Alston,” May 28, 2010, http://www2.ohchr.org/english/bodies/hrcouncil/docs/14session/A.HRC.14.24.Add6.pdf.

2. United Kingdom, Department for Business Innovation & Skills, A.I. Law: Ethical and Legal Dimensions of Artificial Intelligence, October 5, 2011, http://www.sigmascan.org/Live/Issue/ViewIssue/485/1/a-i-law-ethical-and-legal-dimensions-of-artificial-intelligence/.

3. P. W. Singer, Wired for War (New York: The Penguin Press, 2009), 337–38.

4. See Glossary.

5. Douglas Mulhal, Our Molecular Future—How Nanotechnology, Robotics, Genetics, and Artificial Intelligence Will Transform Our World (Amherst: Prometheus Books, 2002), 61.

6. Real numbers can have an infinite number of digits on both sides of the decimal places. For instance, in real numbers between 0 and 1 there are an infinite number of divisions. The binary system is an integer-only number system—whole numbers only.

7. This claim was made early in at least one introduction to a digital logic course for engineering students.

8. Intel, “Moore's Law and Intel Innovation,” http://www.intel.com/about/companyinfo/museum/exhibits/moore.htm.

9. Michael Kanellos, “Moore's Law to Roll On for Another Decade,” CNET, February 10, 2003, http://news.cnet.com/2100–1001–984051.html.

10. Ibid.

11. Ray Kurzweil, “How My Predictions Are Faring,” October 2010, http://www.kurzweilai.net/predictions/download.php.

12. “The Secret to Google's Success,” Bloomberg Businessweek, March 6, 2006, http://www.businessweek.com/magazine/content/06_10/b3974071.htm.

13. Don Reisinger, “Finland Makes 1Mb Broadband Access a Legal Right,” CNET, October 14, 2009, http://news.cnet.com/8301–17939_109–10374831–2.html.

14. P. W. Singer, Wired for War (New York: The Penguin Press, 2009), 189.

15. Ibid., 194.

16. United States Internet Crime Complaint Center, “2010 Internet Crime Report,” http://www.ic3.gov/media/annualreports.aspx.

17. Larry Greenemeier, “Seeking Address: Why Cyber Attacks Are So Difficult to Trace Back to Hackers,” Scientific American, June 11, 2011, http://www.scientificamerican.com/article.cfm?id=tracking-cyber-hackers&WT.mc_id=SA_Twitter_sciam.

18. Department of Defense, Military and Security Developments Involving the People's Republic of China 2010, http://www.defense.gov/pubs/pdfs/2010_CMPR_Final.pdf.

19. Department of Defense, Department of Defense Strategy for Operating in Cyberspace, July 2011, http://www.defense.gov/news/d20110714cyber.pdf.

20. British Broadcasting Corporation, “Stuxnet Worm Hits Iran Nuclear Plant Staff Computers,” September 26, 2010, http://www.bbc.co.uk/news/world-middle-east-11414483.

21. Sergey Golovanov, “TDL4 Starts Using 0-Day Vulnerability!” Securelist, http://www.securelist.com/en/blog/337/TDL4_Starts_Using_0_Day_Vulnerability.

22. Fahmida Y. Rashid, “Marine General Calls for Stronger Offense in U.S. Cyber-Security Strategy,” eWeek.com, July 15, 2011, http://www.eweek.com/cZa/IT-Infrastructure/Marine-General-Calls-for-Stronger-Offense-in-US-CyberSecurity-Strategy-192629/.

23. John Palfrey, “Middle East Conflict and an Internet Tipping Point,” Technology Review, February 25, 2011, http://www.technologyreview.com/web/32437/?mod=chthumb.

24. Michael Holden, “Make Cupcakes, Not Bombs,” Reuters, June 3, 2011, http://uk.reuters.com/article/2011/06/03/uk-britain-mi6-hackers-idUKLNE75203220110603.

25. Rodney Brooks, Flesh and Machines: How Robots Will Change Us (New York: Pantheon Books, 2002), 43.

26. The name Southern Cross II is a tribute to the historic 1928 United States to Australia flight of the original Southern Cross.

27. Dr. Jim Young, Air Force Flight Test Center History Office, “Milestones in Aerospace History at Edwards AFB,” June 2011, http://www.af.mil/shared/media/document/AFD-080123–063.pdf.

28. A tribute to both Charles Lindbergh's Spirit of Saint Louis, and the farm where it was tested.

29. Federation Aéronautique Internationale, “Aeromodelling and Spacemodelling records,” http://www.fai.org/ciam-records.

30. “Trans Atlantic Model,” http://tam.plannet21.com/.

31. United States, “Iris Recognition,” August 2007, http://www.biometrics.gov/Documents/irisrec.pdf.

32. Mark Milian, “Facebook Lets Users Opt out of Facial Recognition,” CNN, June 9, 2011, http://www.cnn.com/2011/TECH/social.media/06/07/facebook.facial.recognition/index.html?iref=allsearch.

33. Mark Milian, “Google Taking More Cautious Stance on Privacy,” CNN, June 1, 2011, http://www.cnn.com/2011/TECH/web/05/31/google.schmidt/index.html.

34. Joseph A. Bernstein, “Seeing Crime Before It Happens,” Discover, January 23, 2012, http://discovermagazine.com/2011/dec/02-big-idea-seeing-crime-before-1t-happens.

35. The sensor fused weapon program uses multiple sensors to identify viable vehicle size targets.

36. Cathryn M. Delude, “Computer Model Mimics Neural Processes in Object Recognition,” MIT press release, February 23, 2007, http://web.mit.edu/press/2007/surveillance.html.

37. Ibid.

38. Sensor fused weapon again.

39. P. W. Singer, Wired for War (New York: The Penguin Press, 2009), 126.

40. David Hoffman, “I Had a Funny Feeling in My Gut,” Washington Post, February 10, 1999, http://www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter021099b.htm.

41. Russian Federation, “On Presentation of the World Citizens Award to Stanislav Petrov,” January 19, 2006, http://www.un.int/russia/other/060119eprel.pdf.

42. Whether the sensors were in error is a different matter.

43. Nicholas Thompson, “Inside the Apocalyptic Soviet Doomsday Machine,” Wired, Issue 17.10, September 21, 2009, http://www.wired.com/politics/security/magazine/17–10/mf_deadhand.

44. Philip Alston, United Nations Human Rights Council, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Philip Alston,” May 28, 2010, http://www2.ohchr.org/english/bodies/hrcouncil/docs/14session/A.HRC.14.24.Add6.pdf.

45. Ibid.

46. Philip Alston, United Nations Human Rights Council, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Philip Alston,” May 28, 2010, http://www2.ohchr.org/english/bodies/hrcouncil/docs/14session/A.HRC.14.24.Add6.pdf.

47. Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, September 18, 1997, http://www.icrc.org/ihl.nsf/FULL/580?OpenDocument.

48. Amnesty International, ‘Less Than Lethal’? The Use of Stun Weapons in US Law Enforcement, December 16, 2008, http://www.amnesty.org/en/library/info/AMR51/010/2008/en.

49. Nat Hentoff, CATO Institute, “Few Batting Eyes at Obama's Deadly Drone Policy,” Cato.org, July 29, 2009, http://www.cato.org/pub_display.php?pub_id=12012.

50. Mark Mazzetti, Helene Cooper, and Peter Baker, New York Times, May 2, 2011, “Behind the Hunt for Bin Laden,” http://www.nytimes.com/2011/05/03/world/asia/03intel.html?_r=1.

51. CBS, “Obama on Bin Laden: The Full “60 Minutes” Interview,” May 8, 2011, http://www.cbsnews.com/8301–504803_162–20060530–10391709.html.

52. Paul McLeary, Sharon Weinberger, and Angus Batey, “Drone Impact on Pace of War Draws Scrutiny,” Aviation Week, July 8, 2011, http://www.aviationweek.com/aw/generic/story_generic.jsp?channel=dti&id=news/dti/2011/07/01/DT_07_01_2011_p40–337605.xml&headline=Drone%20Impact%20On%20Pace%20Of%20War%20Draws%20Scrutiny.

53. Larry Greenemeier, “National Robotics Week to Highlight the Past, Present and Future of Robot Research,” Scientific American, February 9, 2010, http://www.scientificamerican.com/blog/post.cfm?id=national-robotics-week-to-higlight-2010–02–09.

54. P. W. Singer, Wired for War (New York: The Penguin Press, 2009), 56.

55. Charles Levinson, “Israeli Robots Remake Battlefield,” Wall Street Journal, http://online.wsj.com/article/SB126325146524725387.html#MARK.

56. Ibid.

57. Graham Warwick, “China Targets UAS as Growth Sector,” Aviation Week, May 5, 2011, http://www.aviationweek.com/aw/generic/story.jsp?id=news/awst/2011/04/25/AW_04_25_2011_p62–312195.xml&channel=defense.

58. Peter La Franchi, “Iranian-Made Ababil-T Hezbollah UAV Shot Down by Israeli Fighter in Lebanon Crisis,” Flight International, August 15, 2006, http://www.flightglobal.com/articles/2006/08/15/208400/iranian-made-ababil-t-hezbollah-uav-shot-down-by-israeli-fighter-in-lebanon.html.

59. David Eshel, “Technology Shortens the Kill Chain in Urban Combat,” Aviation Week—Defense Technology International, March 28, 2008, http://www.aviationweek.com/aw/generic/story_generic.jsp?channel=dti&id=news/DTIKILL.xml&headline=Technology%20Shortens%20the%20Kill%20Chain%20in%20Urban%20Combat.

60. Kris Osborn, “FCS Is Dead; Programs Live On,” Defense News, May 18, 2009, http://www.defensenews.com/story.php?i=4094484.

61. Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001, http://www.dod.mil/dodgc/olc/docs/2001NDAA.pdf.

62. Department of Defense, FY2009–2034 Unmanned Systems Integration Roadmap, 2009, http://www.acq.osd.mil/psa/docs/UMSIntegratedRoadmap2009.pdf.

63. Paul McLeary, Sharon Weinberger, and Angus Batey, “Drone Impact on Pace of War Draws Scrutiny,” Aviation Week, July 8, 2011, http://www.aviationweek.com/aw/generic/story_generic.jsp?channel=dti&id=news/dti/2011/07/01/DT_07_01_2011_p40–337605.xml&headline=Drone%20Impact%20On%20Pace%20Of%20War%20Draws%20Scrutiny.

64. P. W. Singer, Wired for War (New York: The Penguin Press, 2009), 256–59.

65. Adam L. Penenberg, “The Surveillance Society,” Wired, Issue 9.12, December 2001, http://www.wired.com/wired/archive/9.12/surveillance.html.

66. Clay Dillow, “IBM's Digital Billboard Displays Individualized Ads by Reading the RFID Data in Your Wallet,” Popular Science, August 2, 2010, http://www.popsci.com/technology/article/2010–08/ibms-new-digital-billboard-tailors-individual-adsrfid-data-your-credit-card.

67. John Markoff, “Taking Spying to Higher Level, Agencies Look for More Ways to Mine Data,” The New York Times, February 25, 2006, http://www.nytimes.com/2006/02/25/technology/25data.html?ref=johnmarkoff&pagewanted=print.

68. Ibid.

69. DARPA, “Deep Learning,” http://www.darpa.mil/Our_Work/I2O/Programs/Deep_Learning.aspx.

70. Jason Straziuso, “US Companies Send Translators to Afghanistan Who Are Old, Out Of Shape, Unprepared For Combat,” Huffington Post, http://www.huffingtonpost.com/2009/07/22/us-companies-send-transla_n_243046.html.

71. http://www.bbc.co.uk/news/world-13275104.

72. Thomas W. Gillespie and John A. Agnew, “Finding Osama Bin Laden: An Application of Biogeographic Theories and Satellite Imagery,” MIT International Review, February 17, 2009, http://web.mit.edu/mitir/2009/online/finding-bin-laden.pdf.

73. Mark Williams, “The Total Information Awareness Project Lives on,” Technology Review, April 26, 2006, http://www.technologyreview.com/communications/16741/.

74. Ibid.

75. Adam L. Penenberg, “The Surveillance Society,” Wired, Issue 9.12, December 2001, http://www.wired.com/wired/archive/9.12/surveillance.html.

76. Peter La Franchi, “Animation: Near Misses between UAVs and Airliners Prompt NATO Low-Level Rules Review,” Flight International, March 14, 2006, http://www.flightglobal.com/articles/2006/03/14/205379/animation-near-misses-between-uavs-and-airliners-prompt-nato-low-level-rules.html.

77. John A. Tirpak, “The RPA Boom,” Air Force Magazine, August 2010, http://www.airforce-magazine.com/MagazineArchive/Documents/2010/August%202010/0810RPA.pdf.

78. Scott Fontaine, “X-37B Test Mission Called Big Accomplishment,” Defense News, December 6, 2010, http://defensenews.com/story.php?i=5176376&c=AME&s=TOP.

79. Graham Warwick, “F/A-18 Shows UCAS-D Can Land on Carrier,” Aviation Week, July 8, 2011, http://www.aviationweek.com/aw/generic/story_channel.jsp?channel=defense&id=news/asd/2011/07/08/05.xml&headline=F/A-18%20Shows%20UCAS-D%20Can%20Land%20On%20Carrier.

80. DARPA, “A Huge Leap Forward in Robotics R&D: $2 Million Cash Prize Awarded to Stanford's ‘Stanley’ as Five Autonomous Ground Vehicles Complete DARPA Grand Challenge Course,” News Release, October 9, 2005, http://archive.darpa.mil/grandchallenge05/gcorg/downloads/GC05%20Winner.pdf.

81. DARPA, “Urban Challenge,” http://archive.darpa.mil/grandchallenge/index.asp

82. General Dynamics, http://www.gdrs.com/robotics/programs/program.asp?UniqueID=12.

83. Marc Raibert, Kevin Blankespoor, Gabriel Nelson, et al., “BigDog, the Rough-Terrain Quadruped Robot,” http://www.bostondynamics.com/img/BigDog_IFAC_Apr-8–2008.pdf.

84. Erik Sofge, “The Inside Story of the SWORDS Armed Robot “Pullout” in Iraq: Update,” Popular Mechanics, October 1, 2009, http://www.popularmechanics.com/technology/gadgets/4258963.

85. Siobhan Gorman, Yochi J. Dreazen, and August Cole, “Insurgents Hack U.S. Drones,” Wall Street Journal, December 17, 2009, http://online.wsj.com/article/SB126102247889095011.html.

86. Underwater salvage and communications line tapping are important military and intelligence applications where tethered remote systems have been used. See: Sherry Sontag, Christopher Drew, and Annette Lawrence Drew, Blind Man's Bluff: The Untold Story of American Submarine Espionage (New York: Public Affairs, 1998).

87. Textron Systems. “Sensor Fuzed Weapon,” http://www.textrondefense.com/assets/pdfs/datasheets/sfw_datasheet.pdf.

88. Ibid.

89. P. W. Singer, Wired for War (New York: The Penguin Press, 2009), 229–36.

90. Doug Beason. The E-Bomb (Cambridge, Massachusetts: Da Capo Press, 2005), 33.