Warfare seems to obey mathematical rules. Whether soldiers can make use of that fact remains to be seen
IN 1948 LEWIS FRY RICHARDSON, a British scientist, published what was probably the first rigorous analysis of the statistics of war. Richardson had spent seven years gathering data on the wars waged in the century or so prior to his study. There were almost 300 of them. The list runs from conflicts that claimed a thousand or so lives to the devastation of the two world wars. But when he plotted his results, he found that these diverse events fell into a regular pattern. It was as if the chaos of war complied with some hitherto unknown law of nature.
At first glance the pattern seems obvious. Richardson found that wars with low death tolls far outnumber high-fatality conflicts. But that obvious observation conceals a precise mathematical description: the link between the severity and frequency of conflicts follows a smooth curve, known as a power law. One consequence is that extreme events such as the world wars do not appear to be anomalies. They are simply what should be expected to occur occasionally, given the frequency with which conflicts take place.
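To see what the curve implies, here is a minimal sketch in Python, assuming a purely illustrative scale constant and exponent rather than Richardson’s actual fit:

```python
# Illustrative sketch of a Richardson-style power law: the number of
# conflicts per century with at least `severity` deaths falls off as a
# power of the severity. Both constants are made-up illustrative values,
# not Richardson's estimates.

def conflicts_at_least(severity, c=9500.0, alpha=0.5):
    """Expected number of conflicts per century with >= severity deaths."""
    return c * severity ** -alpha

for s in (1_000, 100_000, 10_000_000):
    print(f">= {s:>10,} deaths: ~{conflicts_at_least(s):,.1f} conflicts")
```

With these assumed numbers, roughly 300 conflicts of at least 1,000 deaths and about three of world-war scale would be expected per century – the extreme events sit on the same smooth curve as the minor ones.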
The results have fascinated mathematicians and military strategists ever since. They have also been replicated many times. But they have not had much impact on the conduct of actual wars. As a result, there is a certain “so what” quality to Richardson’s results. It is one thing to show that a pattern exists, another to do something useful with it.
In a paper that was under review at Science in early 2011, however, Neil Johnson of the University of Miami in Coral Gables, Florida, and his colleagues hint at what that something useful might be. Dr Johnson’s team is one of several groups who, in previous papers, have shown that Richardson’s power law also applies to attacks by terrorists and insurgents. They and others have broadened Richardson’s scope of inquiry to include the timing of attacks, as well as their severity. This prepared the ground for the new paper, which outlines a method for forecasting the evolution of conflicts.
Dr Johnson’s proposal rests on a pattern he and his team found in data on insurgent attacks against American forces in Afghanistan and Iraq. After the initial attacks in any given province, subsequent fatal incidents become more and more frequent. The intriguing point is that it is possible, using a formula Dr Johnson has derived, to predict the details of this pattern from the interval between the first two attacks.
The formula in question (T_n = T_1 n^(-b)) is one of a familiar type, known as a progress curve, that describes how productivity improves in a range of human activities from manufacturing to cancer surgery. T_n is the number of days between the nth attack and its successor. (T_1 is therefore the number of days between the first and second attacks.) The other element of the equation, b, turns out to be directly related to T_1. It is calculated from the relationship between the logarithms of the attack number, n, and the attack interval, T_n. The upshot is that knowing T_1 should be enough to predict the future course of a local insurgency. Conversely, changing b would change both T_1 and T_n, and thus change that future course.
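As a rough illustration, the curve can be written out in a few lines of Python; the values of T_1 and b below are assumptions for the example, not estimates from Dr Johnson’s data:

```python
# Sketch of the progress curve described above: T_n = T_1 * n**(-b).
# The values of t1 and b are illustrative assumptions, not figures
# from Dr Johnson's paper.

def interval(n, t1=100.0, b=0.5):
    """Days between the nth fatal attack and the (n+1)th."""
    return t1 * n ** -b

# The gaps shrink as each side adapts: every interval is shorter
# than the one before it.
for n in range(1, 6):
    print(f"gap after attack {n}: {interval(n):5.1f} days")
```

With these assumed values the gap falls from 100 days after the first attack to about 45 after the fifth, reproducing the escalation the model describes.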
Though the fit between the data and the prediction is not perfect (an example is illustrated in Figure 13.1), the match is close enough that Dr Johnson thinks he is onto something. Progress curves are a consequence of people adapting to circumstances and learning to do things better. And warfare is just as capable of productivity improvements as any other activity.
The twist in warfare is that two antagonistic groups of people are doing the adapting. Borrowing a term used by evolutionary biologists (who, in turn, stole it from Lewis Carroll’s book, “Through the Looking-Glass”), Dr Johnson likens what is going on to the mad dash made by Alice and the Red Queen, after which they find themselves exactly where they started.
[Figure 13.1 Source: Neil Johnson, University of Miami]
In biology, the Red Queen hypothesis is that predators and prey (or, more often, parasites and hosts) are in a constant competition that leads to stasis, as each adaptation by one is countered by an adaptation by the other. In the case Dr Johnson is examining, the co-evolution is between the insurgents and the occupiers, each side constantly adjusting to the other’s tactics. The data come from 23 different provinces, each of which is, in effect, a separate theatre of war. In each case, the gap between fatal attacks shrinks, more or less according to Dr Johnson’s model. Eventually, an equilibrium is reached, and the intervals become fairly regular.
The mathematics do not reveal anything about what the adaptations made by each side actually are, beyond the obvious observation that practice makes perfect. Nor do they illuminate why the value of b varies so much from place to place. Dr Johnson has already ruled out geography, density of displaced people, the identity of local warlords and even poppy production. If he does find the crucial link, though, military strategists will be all over him. But then such knowledge might perhaps be countered by the other side, in yet another lap of the Red Queen race.
This article was first published in The Economist in March 2011.
Software: Can software really predict the outcome of an armed conflict, just as it can predict the course of the weather?
IN DECEMBER 1990, 35 days before the outbreak of the Gulf war, an unassuming retired colonel appeared before the Armed Services Committee of America’s House of Representatives and made a startling prediction. The Pentagon’s casualty projections – that 20,000 to 30,000 coalition soldiers would be killed in the first two weeks of combat against the Iraqi army – were, he declared, completely wrong. Casualties would, he said, still be fewer than 6,000 after a month of hostilities. Military officials had also projected that the war would take at least six months, including several months of fighting on the ground. That estimate was also wide of the mark, said the former colonel. The conflict would last less than two months, with the ground war taking just 10 to 14 days.
Operation Desert Storm began on January 17th 1991 with an aerial bombardment. President George Bush senior declared victory 43 days later. Fewer than 1,400 coalition troops had been killed or wounded, and the ground-war phase had lasted five days. The forecaster, a military historian called Trevor Dupuy, had been strikingly accurate. How had he managed to outperform the Pentagon itself in predicting the outcome of the conflict?
His secret weapon was a piece of software called the Tactical Numerical Deterministic Model, or TNDM, designed by the Dupuy Institute, an unusual military think-tank based near Washington, DC. It was the result of collaboration between computer programmers, mathematicians, weapons experts, military historians, retired generals and combat veterans. But was the result a fluke, or was the TNDM always so accurate?
Bosnia was its next big test. In November 1995, General Wesley Clark asked the Dupuy Institute to project casualty scenarios for NATO’s impending peacekeeping mission, Operation Joint Endeavour. The resulting “Bosnia Casualty Estimate Study”, prepared using results from the TNDM, stated that there was a 50% chance that no more than 17 peacekeepers would be killed in the first year. A year later, six had died – and the Dupuy Institute’s reputation had been established.
The TNDM’s predictive power is due in large part to the mountain of data on which it draws, thought to be the largest historical combat database in the world. The Dupuy Institute’s researchers comb military archives worldwide, painstakingly assembling statistics which reveal cause-and-effect relationships, such as the influence of rainfall on the rate of rifle breakdowns during the Battle of the Ardennes, or the percentage of Iraqi soldiers killed in a unit before the survivors in that unit surrendered during the Gulf war.
Analysts then take a real battle or campaign and write equations linking causes (say, appropriateness of uniform camouflage) to effects (sniper kill ratios). These equations are then tested against the historical figures in the database, making it possible to identify relationships between the circumstances of an engagement and its outcome, says Chris Lawrence, the Dupuy Institute’s director since its founder’s death in 1995.
All of this is akin to working out the physical laws that govern the behaviour of the atmosphere, which can then be used in weather forecasting. But understanding the general behaviour of weather systems is not enough: weather forecasting also depends on detailed meteorological measurements that describe the initial conditions. The same is true of the TNDM. To model a specific conflict, analysts enter a vast number of combat factors, including data on such disparate variables as foliage, muzzle velocities, dimensions of fordable and unfordable rivers, armour resistance, length and vulnerabilities of supply lines, tank positions, reliability of weapons and density of targets. These initial conditions are then fed into the mathematical model, and the result is a three-page report containing predictions of personnel and equipment losses, prisoner-of-war capture rates, and gains and losses of terrain.
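The TNDM itself is proprietary, so the following is only a schematic sketch of the general approach described above – fit cause-and-effect equations to historical engagements, then feed a new engagement’s initial conditions into the fitted model. Every combat factor and number here is hypothetical:

```python
import numpy as np

# Schematic sketch only: the TNDM is proprietary, so this merely
# illustrates the general approach. Each row of X holds combat factors
# for one historical engagement (all values hypothetical); y holds the
# casualty fraction observed in that engagement.
X = np.array([
    # force ratio, days of supply, terrain cover (0-1)
    [1.5, 10, 0.2],
    [3.0,  5, 0.6],
    [2.0, 20, 0.4],
    [1.2, 15, 0.8],
])
y = np.array([0.12, 0.05, 0.07, 0.10])

# Fit a linear cause-and-effect relationship against the historical record.
coeffs, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# "Initial conditions" for a new engagement, fed into the fitted model.
new_engagement = np.array([2.5, 8, 0.3, 1.0])  # trailing 1.0 is the intercept
print(f"predicted casualty fraction: {new_engagement @ coeffs:.3f}")
```

A real system would draw on thousands of engagements and far richer, non-linear relationships; the point is only the two-step shape of the method.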
What is perhaps even more surprising than the TNDM’s predictive accuracy is the fact that it is for sale. The $93,000 purchase price includes instruction classes, a year of technical support and a subscription to the TNDM newsletter, although subsequent updates to the software cost extra. Organisations that have acknowledged buying the software include the defence ministries of Sweden, South Africa, Finland, Switzerland and South Korea, along with the aerospace giant Boeing. Such customers rarely divulge the uses to which they put the software. But Niklas Zetterling, formerly a senior researcher at the Swedish National Defence Research Institute in Stockholm and now an academic at the Swedish War College, says his country uses the software to improve its arsenal. Mr Zetterling toyed with the software’s technical variables “to create hypothetical weapons” that could then be proposed to engineers.
Rather than simply buying the TNDM, most clients contract the Dupuy Institute to produce studies that combine the software’s predictions with human analysis. American clients have included the Joint Chiefs of Staff, the Army Medical Department, the Department of Defence, the Vietnam Veterans of America Foundation and the Sandia National Laboratories (a government-owned weapons-research centre run by Lockheed Martin). The institute is currently preparing a secret forecast of the duration and intensity of the Iraqi insurgency for the Centre for Army Analysis, a Pentagon agency.
The TNDM is not the only war-forecasting system. Many other systems have been developed, primarily by armed forces, government agencies and defence contractors in America, Australia, Britain, France and Germany. Some are glorified spreadsheets, but many are far more complex, including the American navy’s GCAM software, the OneSAF model used by the army and Marine Corps, the air force’s BRAWLER system and the Australian Department of Defence’s JICM. With all these systems, younger officers tend to have more faith in the technology than their older counterparts. (According to a joke among technophiles, old-school military planners refuse to upgrade from BOGSAT, or “Bunch of Guys Sitting Around a Table”.)
A survey of American war-forecasting systems by the Dupuy Institute found that very few are for sale or hire, and officials in charge of government models are often unwilling to share them with rival agencies. The simple availability of the TNDM has favoured its growth, although technology-transfer laws not surprisingly restrict its sale to certain countries.
Another attraction of the TNDM over rival models is the Dupuy Institute’s independence: it has no weapons to sell, is not involved in internecine competition for budgetary funding, and has no political stake in military outcomes. Software developed primarily for, or by, a contractor or a branch of the armed forces often favours certain hardware or strategies, says Manfred Braitinger, head of forecasting software at IABG, a Munich-based firm that is Germany’s leading developer of war-forecasting systems. The air-force and army models differ widely, for example, in their estimates of how easy it is to shoot down planes. “If you run both models you will see a remarkable difference in attrition rates simulating the same scenario,” Mr Braitinger says. Systems with a wide customer base, like the TNDM, are regarded as more credible, since they do not have such biases.
The TNDM’s reliance on real combat data, rather than results from war games or exercises, also gives it an edge. Another forecasting system, TACWAR, was used by America’s Joint Chiefs of Staff to plan the overthrow of Saddam Hussein. Like many models, it was largely developed with data from war games. As a result, says Richard Anderson, a tank specialist at the Dupuy Institute, TACWAR and other programs based on “laser tag” exercises tend to “run hot”, or overestimate casualties. Real-bullet data are more reliable, because fear of death makes soldiers more conservative in actual combat than they are in exercises, resulting in fewer losses. The discipline is only just beginning to recognise the “tremendous value of real-world verification”, says Andreas Tolk, an eminent modelling scientist at Virginia’s Old Dominion University.
Yet another factor that distinguishes the TNDM from other war-forecasting systems is its unusual ability to take intangible factors into account. During NATO’s air campaign above Serbia and Kosovo in 1999, for example, the Serbs built decoy tanks out of wood and tarpaulins and painted trompe l’œil bomb-holes on to bridges. Microwave ovens, modified to operate with their doors open and emit radiation, were used as decoys to attract HARM missiles that home in on the radar emissions of anti-aircraft batteries.
Such cunning is one of the many intangible variables that are taken into account by the TNDM’s number-crunching equations. Mr Lawrence says incorporating human factors into equations is controversial: most models favour “harder” numbers such as weapons data. But Robert Alexander, an expert on war simulations at SAIC, an American defence contractor, says these are “almost secondary” to human factors.
The Concepts Evaluation Model (CEM), developed at the Pentagon’s Centre for Army Analysis, provides an instructive example. While testing the model, programmers entered historical data from the Battle of the Bulge, the German offensive in 1944 against American forces in Belgium. The CEM predicted heavy German losses in the initial attack, yet German casualties were in fact light. The probable error? The model overlooked the shock value of launching a surprise attack. Analysts duly recalibrated the CEM – using an early version of the TNDM.
The Dupuy Institute is renowned for its ability to take into account such non-material factors: the effect of air support on morale, fear engendered by attack with unexpected weaponry, courage boosted by adequate field hospitals. The mother of all intangibles, within the TNDM model, is initiative, or the ability of lower-ranking soldiers to improvise on the battlefield. Armies from democratic countries – where people are empowered to make decisions – benefit by giving their soldiers some scope to change tactics in the midst of a firefight. Soldiers fighting for authoritarian regimes may not have the reflexes, or the permission, to seize opportunities when they arise in battle.
Maintaining the accuracy of the TNDM means feeding it with a constant stream of new information. The Dupuy Institute’s analysts visit past battlefields to augment their statistical data, follow the arms industry closely and cultivate contacts with government defence procurers. In countries where access to military archives is limited, the Institute surreptitiously pays a handful of clerks to provide photocopies.
The next challenge will be to expand the TNDM’s ability to forecast the outcomes of “asymmetric” conflicts, such as the Iraqi insurgency. To this end, the Dupuy Institute is hoping to get its hands on the Vietcong archives, as Vietnam opens up. Insurgencies rarely leave much of a paper trail, but the Vietnamese kept detailed records of their struggle against the French and Americans. The resulting papers provide the world’s most extensive documentation of guerrilla fighting. “That’s where warfare seems to be heading,” says retired Major General Nicholas Krawciw, who is the president of the Dupuy Institute. And wherever warfare leads, war-forecasting systems must follow.
This article was first published in The Economist in September 2005.
Chaos fills battlefields and disaster zones. Artificial intelligence may be better than the natural sort at coping with it
ARMIES HAVE ALWAYS BEEN DIVIDED into officers and grunts. The officers give the orders. The grunts carry them out. But what if the grunts took over and tried to decide among themselves on the best course of action? The limits of human psychology, battlefield communications and (cynics might suggest) the brainpower of the average grunt mean this probably would not work in an army of people. It might, though, work in an army of robots.
Handing battlefield decisions to the collective intelligence of robot soldiers sounds risky, but it is the essence of a research project called ALADDIN. Autonomous Learning Agents for Decentralised Data and Information Networks, to give its full name, is a five-year-old collaboration between BAE Systems, a British defence contractor, the universities of Bristol, Oxford and Southampton, and Imperial College, London. In it, the grunts act as agents, collecting and exchanging information. They then bargain with each other over the best course of action, make a decision and carry it out.
So far, ALADDIN’s researchers have limited themselves to tests that simulate disasters such as earthquakes rather than warfare; saving life, then, rather than taking it. That may make the technology seem less sinister. But disasters are similar to battlefields in their degree of confusion and complexity, and in the consequent unreliability and incompleteness of the information available. What works for disaster relief should therefore also work for conflict. BAE Systems has said that it plans to use some of the results from ALADDIN to improve military logistics, communications and combat-management systems.
ALADDIN’s agents – which might include fire alarms in burning buildings, devices carried by emergency services and drones flying over enemy territory – collect and process data using a range of algorithms that form the core of the project. To develop these algorithms the 60 researchers involved used techniques that include game theory (in which agents have to overcome barriers to collaboration in order to get the best outcome), probabilistic modelling (which is employed to predict missing data and reduce uncertainty) and optimisation techniques (which can provide means of making decisions when communications between agents are limited). A number of the algorithms also employ auctions to allocate resources among competing users.
In the case of an earthquake, for instance, the agents bid among themselves to allocate ambulances. This may seem callous, but the bids are based on data about how ill the casualties are at different places. In essence, what is going on is a sophisticated form of triage designed to make best use of the ambulances available. No human egos get in the way. Instead, the groups operating the ambulances loan them to each other on the basis of the bids. The result does seem to be a better allocation of resources than people would make by themselves. In simulations run without the auction, some of the ambulances were left standing idle.
The bidding algorithms can be tweaked to account for changing behaviour and circumstance. Proportional bidding, for instance, allows resources to be shared. If one agent bids twice as much as another for the use of a piece of equipment, the first agent will be given two-thirds of its capability and the second one-third. And, a bit like eBay, deadlines placed on making bids speed the process up.
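A minimal sketch of that proportional-bidding rule, with hypothetical agents and bid values:

```python
# Minimal sketch of proportional bidding as described above: each agent
# receives a share of a resource's capability in proportion to its bid.
# Agent names and bid values are hypothetical.

def allocate(bids):
    """Split one unit of capability among agents in proportion to bids."""
    total = sum(bids.values())
    return {agent: bid / total for agent, bid in bids.items()}

bids = {"agent_a": 10.0, "agent_b": 5.0}
print(allocate(bids))  # agent_a gets two-thirds, agent_b one-third
```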
All of which is very life-affirming when ambulances are being sent to help earthquake victims. The real prize, though, is processing battlefield information. Some 7,000 unmanned aerial vehicles, from small hand-launched devices to big robotic aircraft fitted with laser-guided bombs, are now deployed in Iraq and Afghanistan. Their combined video output in 2010 will be so great that it would take one person four decades to watch it. In 2011 things will be worse. America is about to deploy drones equipped with a surveillance system called Gorgon Stare. This stitches together images from lots of cameras to provide live video of an area as big as a town. Users will be able to zoom in for a closer look at whatever takes their interest: a particular house, say, or a car.
Data are also streaming in from other sources: remote sensors operating as fixed sentries, sensors on ground vehicles and sensors on the equipment that soldiers carry around with them (some have cameras on their helmets). On top of this is all the information from radars, satellites, radios and the monitoring of communications. The result, as an American general has put it, is that the armed forces could soon be “swimming in sensors and drowning in data”.
ALADDIN, and systems like it, should help them keep afloat by automating some of the data analysis and the management of robots. Among BAE Systems’ plans, for example, is the co-operative control of drones, which would allow a pilot in a jet to fly with a squadron of the robot aircraft on surveillance or combat missions.
The university researchers, meanwhile, are continuing to look at civilian applications. The next step, according to Nick Jennings of the University of Southampton, who is one of the project’s leaders, is to examine more closely the interaction between people and agents. The earthquake in Haiti in 2010, he says, showed there is a lot of valuable information about things such as water, power supplies and blocked roads that can be gathered by “crowdsourcing” data using software agents monitoring social-networking websites. The group will also look at applying their algorithms to electricity grids, to make them work better with environmentally friendly but unreliable sources of power.
And for those worried about machines taking over, more research will be carried out into what Dr Jennings calls flexible autonomy. This involves limiting the agents’ newfound freedom by handing some decisions back to people. In a military setting this could mean passing pictures recognised as a convoy of moving vehicles to a person for confirmation before, say, calling down an airstrike.
Whether that is a good idea is at least open to question. Given the propensity for human error in such circumstances, mechanised grunts might make such calls better than flesh-and-blood officers. The day of the people’s – or, rather, the robots’ – army, then, may soon be at hand.
This article was first published in The Economist in November 2010.
How to build ethical understanding into pilotless war planes
WHAT THE HELICOPTER was to the Vietnam war, the drone is becoming to the Afghan conflict: both a crucial weapon in the American armoury and a symbol of technological might pitted against stubborn resistance. Pilotless aircraft such as the Predator and the Reaper, armed with Hellfire missiles, can hit targets without placing a pilot in harm’s way. They have proved particularly useful for assassinations. On February 17th 2010, for example, Sheikh Mansoor, an al-Qaeda leader in the Pakistani district of North Waziristan, was killed by a drone-borne Hellfire. In consequence of this and actions like it, America wants to increase drone operations.
Assassinating “high-value targets”, such as Mr Mansoor, often involves a moral quandary. A certain amount of collateral damage has always been accepted in the rough-and-tumble of the battlefield, but direct attacks on civilian sites, even if they have been commandeered for military use, cause queasiness in thoughtful soldiers. If they have not been so commandeered, attacks on such sites may constitute war crimes. And drone attacks often kill civilians. On June 23rd 2009, for example, an attack on a funeral in South Waziristan killed 80 non-combatants.
Such errors are not only tragic, but also counterproductive. Sympathetic local politicians will be embarrassed and previously neutral non-combatants may take the enemy’s side. Moreover, the operators of drones, often on the other side of the world, are far removed from the sight, sound and smell of the battlefield. They may make decisions to attack that a commander on the ground might not, treating warfare as a video game.
Ronald Arkin of the Georgia Institute of Technology’s School of Interactive Computing has a suggestion that might ease some of these concerns. He proposes involving the drone itself – or, rather, the software that is used to operate it – in the decision to attack. In effect, he plans to give the machine a conscience.
The software conscience that Dr Arkin and his colleagues have developed is called the Ethical Architecture. Its judgment may be better than a human’s because it operates so fast and knows so much. And – like a human but unlike most machines – it can learn.
The drone would initially be programmed to understand the effects of the blast of the weapon it is armed with. It would also be linked to both the Global Positioning System (which tells it where on the Earth’s surface the target is) and the Pentagon’s Global Information Grid, a vast database that contains, among many other things, the locations of buildings in military theatres and what is known about their current use.
After each strike the drone would be updated with information about the actual destruction caused. It would note any damage to nearby buildings and would subsequently receive information from other sources, such as soldiers in the area, fixed cameras on the ground and other aircraft. Using this information, it could compare the level of destruction it expected with what actually happened. If it did more damage than expected – for example, if a nearby cemetery or mosque was harmed by an attack on a suspected terrorist safe house – then it could use this information to restrict its choice of weapon in future engagements. It could also pass the information to other drones.
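Dr Arkin’s software is not public, but the feedback loop just described might be sketched along the following lines; the class, weapon figures and update rule are all hypothetical:

```python
# Hypothetical sketch of the damage-feedback loop described above; this
# is not Dr Arkin's actual Ethical Architecture. The drone compares the
# damage it expected with what post-strike reports show and, if it
# overshot, restricts that weapon near protected sites in future.

class WeaponRecord:
    def __init__(self, name, expected_blast_radius_m):
        self.name = name
        self.expected_blast_radius_m = expected_blast_radius_m
        self.restricted_near_protected_sites = False

    def record_strike(self, observed_blast_radius_m):
        """Update the record with post-strike battle-damage reports."""
        if observed_blast_radius_m > self.expected_blast_radius_m:
            # The blast did more damage than predicted: widen the
            # expected radius and restrict future use near such sites.
            self.expected_blast_radius_m = observed_blast_radius_m
            self.restricted_near_protected_sites = True

weapon = WeaponRecord("missile_a", expected_blast_radius_m=15.0)
weapon.record_strike(observed_blast_radius_m=22.0)
print(weapon.restricted_near_protected_sites)  # True
```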
No commander is going to give a machine a veto, of course, so the Ethical Architecture’s decisions could be overridden. That, however, would take two humans – both the drone’s operator and his commanding officer. That might not save a target from destruction, but it would at least allow a pause for reflection before the “fire” button is pressed.
This article was first published in The Economist in March 2010.