15 Identifying, and killing, the quarry

A tide turns

Technology used to help spies. Now it hinders them

DEPENDING ON WHAT KIND OF SPY YOU ARE, you either love technology or hate it. For intelligence-gatherers whose work is based on bugging and eavesdropping, life has never been better. Finicky miniature cameras and tape recorders have given way to pinhead-sized gadgets, powered remotely (a big problem in the old days used to be changing the batteries on bugs).

Encrypted electronic communications are a splendid target for the huge computers at places such as America’s National Security Agency. Even a message that is impregnably encoded by today’s standards may be cracked in the future. That gives security-conscious officials the shivers.

But the same advances are making life a lot harder for the kind of spy who deals with humans rather than bytes. The basis of spycraft is breaking the rules without being noticed. As with the Russians arrested in June 2010 in America and now deported, that involves moving around inconspicuously, usually under false identities, and handing over and receiving money by undetectable means. For those who get caught, the consequences can be catastrophic.

The biggest headache is mobile phones. For spycatchers, these are ideal bugging and tracking devices, which the target kindly keeps powered up. But that makes them a menace for spies (and for terrorists, who often operate under the same constraints). Removing the battery and putting the bits in a fridge or other metal container disables any bug, but instantly arouses suspicion. If two people being followed both take this unusual precaution near the same location at the same time, even the most dull-witted watcher may infer that a clandestine meeting is afoot.

Creating false identities used to be easy: an intelligence officer setting off on a job would take a scuffed passport, a wallet with a couple of credit cards, a driving licence and some family snaps. In a world based on atoms, cracking that was hard.

Thanks to electrons, it is easy to see if a suspicious visitor’s “shadow” checks out. Visa stamps from other countries can be verified against records in their immigration computers. A credit reference instantly reveals when the credit cards were issued and how much they have been used. A claimed employment history can be googled. Mobile-phone billing records reveal past contacts (or lack of them).

Missing links, in fact, are almost as bad as mistakes. A pristine mobile phone number is suspicious (especially when coupled with new credit cards and a new e-mail address, but no Facebook account). An investigation that would have once tied up a team of counter-espionage officers for weeks now takes a few mouse clicks.

With enough effort, a few convincing identities can be kept alive – a minor industry in the spy world involves keeping the credit cards for clandestine work credibly active. But for serious spies these legends wear out faster than they can be created.

Dead on arrival

Biometric passports are making matters worse. If you have once entered the United States as a foreigner, your fingerprints and that name are linked for ever in the government’s computers. The data can be checked by any of several dozen close American allies. Obtaining a passport with a dead child’s birth certificate is increasingly risky as population registers are computerised. Stealing a tourist’s passport and changing the photo (a tactic favoured by Israel’s Mossad) is no longer easy: in future the biometric data on the chip will need to check out too. Only the most determined and resourceful countries can do that – and the cost is spiralling.

Technology creates other problems. Take the dead-letter drop, where an item can be left inconspicuously and securely for someone else to pick up. Intelligence officers are trained to spot these, in places that are easy to visit and hard to observe (cisterns and waste bins in public lavatories, or under a heating grating in a church pew, for example). Time was when monitoring a suspected dead-letter box involved laborious work by humans. Now it can be done invisibly, remotely and automatically. Next time you bury a beer bottle stuffed with money in a park, you should ponder what cameras and sensors may be hidden in the trees nearby.

The days of the “illegal”, living for many years in a foreign country under a near-foolproof false identity, are drawing to a close. Spymasters are increasingly using “real people” instead: globalisation makes it unremarkable for those such as Anna Chapman, one of the ten Russians deported from America (under her own, legally acquired, British name), to study, marry, work and live in a bunch of different countries. Like so many other once-solid professions, spying is becoming less of a career and more a job for freelancers.


This article was first published in The Economist in July 2010.

 

What’s in a name?

Computing: Intelligence agencies are using new software to handle the arcane business of comparing lists of names

IN 1990 A PAKISTANI named Mir Aimal Kansi used an alternative transliteration of his Urdu family name, Kasi, to obtain a visa at the American consulate in Karachi. He entered America, overstayed his one-month visa and then went to the Pakistani embassy in Washington, DC, and obtained a new Pakistani passport, this time with the “n” reinserted in his surname. Using this new identity, he obtained working papers and a driving licence, bought a gun and went on to shoot five CIA employees, killing two, outside the agency’s headquarters. (Kansi spent four years on the Federal Bureau of Investigation’s Ten Most Wanted list before being captured, and was executed in 2002.)

This case shows how the apparently humdrum process of transliterating names from one language to another can be exploited by criminals. According to the FBI, Kansi also used the names Mir Aimal Kanci, Mir Aman Qazi, Amial Khan and Mohammed Alam Kasi. That last name introduces a further twist: there are more than 15 accepted ways to transliterate “Mohammed” from Arabic into English, and when you count the ways the name is written in the other 160-plus languages that use the Roman alphabet, the figure jumps to more than 200 correct spellings. Transposing words or names from one language or alphabet into another is evidently an inexact science.

In Indonesia, where single names are common, what appears to be just part of a name may in fact be the whole name. Chinese and Korean surnames are often mistakenly written last by Westerners, but some Chinese and Koreans are now adopting the Western convention. And then there is the problem of spelling variants. The Chinese family name Zhou, for example, may be written by English speakers as Jhou, Joe, Chou or Chow. Jafari, the common English transliteration of an Iranian family name, is rendered in German as Djafari or Dschafari. Shahram, the standard English spelling of an Iranian first name, becomes Scharam in German (and Chahram in French).
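The commercial name-matching engines are proprietary, but the core difficulty can be illustrated with a plain edit-distance similarity score. The sketch below (a minimal Python illustration, not how any of the article's products actually work) scores the Zhou variants against each other:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of matching characters between two lowercased names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Spelling variants of the Chinese family name Zhou cited in the text.
for variant in ["Jhou", "Joe", "Chou", "Chow"]:
    print(f"Zhou vs {variant}: {similarity('Zhou', variant):.2f}")
```

Note what goes wrong: "Chou" scores 0.75, but "Joe", an equally legitimate rendering of the same name, scores under 0.3. Raw string similarity misses phonetic equivalence, which is why serious systems must layer linguistic and phonetic knowledge on top of it.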

Such ambiguities cause huge problems for intelligence analysts trying to monitor and prevent terrorist activity. In an effort to avoid being picked out by computer watch-lists, many terrorists use alternative (but linguistically legitimate) transliterations of their names. “It’s extremely commonplace, particularly with Islamic names,” says Dennis Lormel, former director of the FBI’s Terrorist Financing Operations Section, who is now an intelligence consultant at Corporate Risk International, near Washington, DC. “There are just so many variations of a name and they know that, so they can just flip-flop their name around,” he says.

But companies in a fast-growing corner of the software industry have developed name-matching programs that can take into account the thousands of possible transliterations of a particular name – say, Mohammed bin Abdul Aziz bin Abdul Rahman Al-Khalifa – as they scan through watch-lists and databases looking for a match. The industry was flooded with investment in 2004 when the 9/11 Commission noted that the terrorists who attacked New York and Washington, DC, on September 11th 2001 defeated watch-lists by using different transliterations of their names. The commission urged the government “to close the long-standing holes in our border security that are caused by the US government’s ineffective name-handling software.” In-Q-Tel, the investment arm of the Central Intelligence Agency (CIA), began pouring money into name-matching software developers, according to a former official who chose which firms to finance. He says the technology is now becoming “pretty solid, robust stuff”.

A name by any other name

“One of our biggest problems has always been variations of names,” says Michael Scheuer, who was the head of the CIA’s Osama bin Laden Unit from 1996 to 1999. Mr Scheuer says analysis was “back-breaking”, especially for Arabic names, because it involved manually compiling lists of variations deemed worthy of tracing. This included positing names with or without titles such as bin (“son of”, also written as ben or ibn), abu (“father of”, also written as abou), sheikh (tribal leader, also written as sheik, shaikh, shaykh, cheik and cheikh) or haji (Mecca pilgrim, also written as hajj, hajji, hadj, haaji, haajj, haajji and haadj). The article al (also written as el) may be attached to surnames directly, separated from surnames with a hyphen or a space, or omitted altogether. Some variants do not even look similar. Sheikh can be written as jeque in Spanish. Wled, one English transliteration of an Arabic first (and last) name, is often written as Ould in French.
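The variant lists Mr Scheuer's analysts compiled by hand lend themselves to a simple canonicalisation table. The sketch below (an illustrative Python fragment built only from the variants named above; the example name is invented) maps each particle and title to one standard form before comparison:

```python
import re

# Canonical forms for the particles and titles listed in the text (illustrative).
CANONICAL = {
    "ben": "bin", "ibn": "bin",
    "abou": "abu",
    "sheik": "sheikh", "shaikh": "sheikh", "shaykh": "sheikh",
    "cheik": "sheikh", "cheikh": "sheikh",
    "hajj": "haji", "hajji": "haji", "hadj": "haji",
    "haaji": "haji", "haajj": "haji", "haajji": "haji", "haadj": "haji",
    "el": "al",
}

def normalise(name: str) -> str:
    """Lowercase a name, detach a hyphenated article ('al-'/'el-'),
    and map known variant spellings to one canonical form."""
    name = name.lower()
    name = re.sub(r"\b(al|el)-", r"\1 ", name)  # "el-maktoum" -> "el maktoum"
    return " ".join(CANONICAL.get(tok, tok) for tok in name.split())

print(normalise("Sheik Ahmed ben Rashid el-Maktoum"))
# -> "sheikh ahmed bin rashid al maktoum"
```

Two names that differ only in these conventions then normalise to the same string, so a watch-list lookup no longer turns on whether a clerk wrote "ibn" or "ben".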

To make matters worse, many bureaucracies tolerate name abbreviations and short forms. The result is that intelligence analysts, no matter how expert, are often plagued by doubts. Has a Russian-speaking intelligence officer in Moscow transliterated into Cyrillic the name of a Nepalese suspect in exactly the same way as a Russian-speaking Uzbek field officer? Has an Italian analyst working with Russian intelligence caught and corrected the error, or passed it along?

Name-matching difficulties actually worsened when counterterrorism activity increased in late 2001. Analysts were granted greater access to databases kept by foreign agencies – but locating relevant files proved hard. A Portuguese case officer, for example, might have difficulty taking advantage of Dutch intelligence on, say, Nepalese Maoist extremists, if he is unfamiliar with Dutch conventions for the transliterations of Nepalese names. The number of people gathering and handling intelligence also increased suddenly, and many newcomers had little language training or were unsure how to transliterate names from spoken sources. Information on suspects increased, but spelling variations – due both to terrorist subterfuge and intelligence shortcomings – made it harder to interpret.

Mr Scheuer says that by late 2004, when he left the CIA, name-matching software was beginning to perform well, and American agencies were investing heavily in the latest technology – with one glaring exception. Computer systems at the State Department, according to Mr Scheuer, were “archaic compared to the rest of the intelligence community”. That was a grave weakness, considering that the State Department issues passports and visas for travel to the United States.

If someone fears that the Romanised version of his name has been flagged, he can choose a new (but linguistically correct) transliteration, and then establish that spelling gradually by using it on low-level documents such as a gym-membership card or a lease agreement. These “feeder documents” are used to obtain progressively higher-level identity documents, such as a city-issued residence card, a driving licence or a certified birth-certificate translation. These documents, in turn, are presented at consulates to obtain the ultimate prize – passports and visas using the new variation of the name.

“It’s a very tough set of problems,” says Philip Zelikow, executive director of the now-dissolved 9/11 Commission. The group’s research turned up numerous cases of transliteration fraud. Mr Zelikow notes, however, that the American government is now doing a better job handling names. Other experts affirm that the State Department has dramatically upgraded its name-matching software.

There are no firm estimates of how much name-matching software is being sold worldwide. Government agencies generally decline to release figures, and software firms shy from discussing hard numbers. Those in the industry, however, claim that growth is spectacular. Sam Kharoba of First Capital Technologies, based in Baton Rouge, Louisiana, says his firm’s sales doubled in each of the three years 2004 through 2006. Its clients include America’s Defence Department and over 20 other government agencies. Around 25 companies are working in the field in America, and a handful are in Europe.

As watch-lists multiply beyond the realms of intelligence and international travel, demand for such software is likely to grow. Increasingly, watch-lists are used to restrict access to training and education, and to stop people buying property, guns, chemicals and other things that can be made into weapons. Many postal services rely on name-matching software to pick out packages for inspection.

The financial services industry is also adopting the technology, which is often required by central banks and monetary authorities. In America, the Treasury’s Office of Foreign Assets Control is one of the world’s largest users of name-matching technology. It uses it to compile watch-lists that are sent to thousands of banks worldwide. Credit-card companies use the software to spot recidivists applying for new cards under modified names. (Names are cross-referenced with addresses, dates of birth and other data.) Developers and users are hesitant to discuss costs. But OMS Services, a British software firm, says government agencies pay a lot more than commercial users, who pay about $50,000 for its NameX program.

Name-matching software is also becoming more sophisticated and performing other functions. The name-matching software made by Identity Systems, based in Old Greenwich, Connecticut, is used by more than 200 government agencies around the world. As well as flagging names on watch-lists, it also sifts historical records to reveal hidden relationships: if two men have entered a country several times on the same plane, sitting apart from each other, might one be a moneyrunner and the other his overseer?

Names and numbers

GNR, a software firm owned by IBM, makes software that “enriches” names by annotating them with inferred cultural information, scored according to probabilities derived from demographic data. Given a particular name it can, for example, say how likely someone is to have a particular place of birth. Names and titles can also provide clues as to birth order, occupation, deaths of spouses and immigration history. GNR also repairs names that are “damaged” by transliteration because the original non-Roman script is lost. The software generates possible original spellings and provides accuracy probabilities for each one. This helps spooks starting with the Romanised versions of, say, Pushtu names, to gather intelligence on those individuals in their native Afghanistan. GNR sells its software to law-enforcement and intelligence agencies – those in Australia, Israel and Singapore are particularly big spenders.

Name-matching software is just one small item in the counter-terrorism toolbox. But it can play a crucial role by enabling analysts to piece together snippets of intelligence. What’s in a name? The answer, in some cases, is a surprising amount of valuable information.


This article was first published in The Economist in March 2007.

 

If looks could kill

Security experts reckon the latest technology can detect hostile intentions before something bad happens. Unless it is perfect, though, that may be bad in itself

MONITORING SURVEILLANCE CAMERAS is tedious work. Even if you are concentrating, identifying suspicious behaviour is hard. Suppose a nondescript man descends to a subway platform several times over the course of a few days without getting on a train. Is that suspicious? Possibly. Is the average security guard going to notice? Probably not. A good example, then – if a fictional one – of why many people would like to develop intelligent computerised surveillance systems.

The perceived need for such systems is stimulating the development of devices that can both recognise people and objects and detect suspicious behaviour. Much of this technology remains, for the moment, in laboratories. But Charles Cohen, the boss of Cybernet Systems, a firm based in Ann Arbor, Michigan, which is working for America’s Army Research Laboratory, says behaviour-recognition systems are getting good, and are already deployed at some security checkpoints.

Human gaits, for example, can provide a lot of information about people’s intentions. At the American Army’s Aberdeen Proving Ground in Maryland, a team of gait analysts and psychologists led by Frank Morelli study video, much of it conveniently posted on the internet by insurgents in Afghanistan and Iraq. They use special object-recognition software to lock onto particular features of a video recording (a person’s knees or elbow joints, for example) and follow them around. Correlating those movements with consequences, such as the throwing of a bomb, allows them to develop computer models that link posture and consequence reasonably reliably. The system can, for example, pick out a person in a crowd who is carrying a concealed package with the weight of a large explosives belt. According to Mr Morelli, the army plans to deploy the system at military checkpoints, on vehicles and at embassy perimeters.

Guilty

Some intelligent surveillance systems are able to go beyond even this. Instead of merely learning what a threat looks like, they can learn the context in which behaviour is probably threatening. That people linger in places such as bus stops, for example, is normal. Loitering in a stairwell, however, is a rarer occurrence that may warrant examination by human security staff (so impatient lovers beware). James Davis, a video-security expert at Ohio State University in Columbus, says such systems are already in use. Dr Davis is developing one for America’s Air Force Research Laboratory. It uses a network of cameras to track people identified as suspicious – for example, pedestrians who have left a package on the ground – as they walk through town.

As object- and motion-recognition technology improves, researchers are starting to focus on facial expressions and what they can reveal. The Human Factors Division of America’s Department of Homeland Security (DHS), for example, is running what it calls Project Hostile Intent. This boasts a system that scrutinises fleeting “micro-expressions”, easily missed by human eyes. Many flash for less than a tenth of a second and involve just a small portion of the face.

Terrorists are often trained to conceal emotions; micro-expressions, however, are largely involuntary. Even better, from the researchers’ point of view, conscious attempts to suppress facial expressions actually accentuate micro-expressions. Sharla Rausch, director of the Human Factors Division, refers to this somewhat disturbingly as “micro-facial leakage”.

There are about 40 micro-expressions. The DHS’s officials refuse to describe them in detail, which is a bit daft, as they have been studied for years by civilian researchers. But Paul Ekman, who was one of those researchers (he retired from the University of California, San Francisco, in 2004) and who now advises the DHS and other intelligence and law-enforcement agencies in the United States and elsewhere, points out that signals which seem to reveal hostile intent change with context. If many travellers in an airport-screening line are running late, telltales of anguish – raised cheeks and eyebrows, lowered lips and gaze – cause less concern.

Supporters of this sort of technology argue that it avoids controversial racial profiling: only behaviour is studied. This is a sticky issue, however, because cultures – and races – express themselves differently. Judee Burgoon, an expert on automated behaviour-recognition at the University of Arizona, Tucson, who conducts research for America’s Department of Defence, says systems should be improved with cultural input. For example, passengers from repressive countries, who may already be under suspicion because of their origins, typically display extra anxiety (often revealed by rigid body movements) when near security officials. That could result in a lot of false positives and consequent ill-will. Dr Burgoon is upgrading her software, called Agent 99, by fine-tuning the interpretations of body movements of people from about 15 cultures.

Another programme run by the Human Factors Division, Future Attribute Screening Technology, or FAST, is being developed as a complement to Project Hostile Intent. An array of sensors, at a distance of a couple of metres, measures skin temperature, blood-flow patterns, perspiration, and heart and breathing rates. In a series of tests with role-playing volunteers, the system detected about 80% of those who had been asked to try to deceive it by being hostile or trying to smuggle a weapon through it.

A number of “innocents”, though, were snagged too. The trial’s organisers are unwilling to go into detail, and are now playing down the significance of the testing statistics. But FAST began just 16 months ago in June 2007. Bob Burns, the project’s leader, says its accuracy will improve thanks to extra sensors that can detect eye movements and body odours, both of which can provide further clues to emotional states.
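The snagged "innocents" hint at a base-rate problem: when genuine attackers are rare, even a system that catches 80% of them will raise alarms that are overwhelmingly false. A back-of-envelope Bayes calculation makes the point (the prevalence and false-positive rate below are illustrative assumptions, not figures from the trial, which were never released):

```python
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(hostile | alarm) via Bayes' rule."""
    p_alarm = (sensitivity * prevalence
               + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_alarm

# Assumed figures: 1 hostile per 100,000 screened, 80% detection,
# 10% false-positive rate. None of these comes from the FAST trial.
p = posterior(prevalence=1e-5, sensitivity=0.80, false_positive_rate=0.10)
print(f"P(hostile | alarm) = {p:.5f}")  # about 0.00008
```

On those assumptions, only about one alarm in 12,500 points at an actual hostile; everyone else pulled aside is innocent. Improving the headline detection rate does little to change this unless the false-positive rate falls dramatically too.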

Until proved innocent

That alarms some civil-libertarians. FAST, they say, amounts to a forced medical examination, and hostile-intent systems in general smack of the “pre-crime” technology featured in Philip K. Dick’s short story “The Minority Report” and the film based on it. An exaggeration, perhaps. But the result of using these devices, according to Barry Steinhardt, the head of technology and liberty at the American Civil Liberties Union in Washington, DC, will inevitably be that too many innocents are entangled in intrusive questioning or worse with “voodoo science” security measures.

To the historically minded it smacks of polygraphs, the so-called lie-detectors that rely on measuring physiological correlates of stress. Those have had a patchy and controversial history, fingering nervous innocents while acquitting practised liars. Supporters of hostile-intent systems argue that the computers will not be taking over completely, and human security agents will always remain the final arbiters. Try telling that, though, to an innocent traveller who was in too much of a hurry – or even a couple smooching in a stairwell.


This article was first published in The Economist in October 2008.

 

Worse than useless

An American government attempt to help Iranian dissidents backfires

FOR IRAN’S BELEAGUERED OPPOSITION, the internet is a potent weapon and a big hope. During the Green movement’s protests in 2009, activists used Twitter and Facebook, often from mobile phones, to upload videos of police brutality and spread messages of support and news of new demonstrations. The authorities responded not only by cracking heads, but cracking computers: trying to trace users, block services and close websites.

Outsiders found the struggle inspirational. Austin Heap, a 26-year-old hacker born in Ohio, decided to develop anti-censorship software to foil the authorities’ efforts. He named the product Haystack, and began in 2010 to distribute it to Iranian opposition leaders. The publicity was excellent: he was named “Innovator of the Year” by the Guardian, a British newspaper, and gained a plaudit from Hillary Clinton, America’s secretary of state. The Treasury, State Department and Commerce Department hastened to grant Mr Heap a licence to export the software to Iran – not normally a favoured destination for American sales efforts, especially cryptographic ones.

But experts rapidly raised doubts. On investigation, Haystack looked dangerously insecure. Not only did it fail to encrypt secrets properly, but it could also reveal its users’ identities and locations. Amid mounting criticism, Haystack’s backers withdrew it on September 10th 2010.

Mr Heap’s reaction heightened the worries. He admitted the project’s faults but claimed only “a couple of dozen” people had been testing the product; all bar one had been alerted in writing that it was still being developed. How many of those people were in Iran, and why they had not been informed at the outset, was unclear. A disquieting message on the Haystack website reads “We have halted ongoing testing of Haystack in Iran pending a security review. If you have a copy of the test program, please refrain from using it.” That suggests that the test was anything but controlled. Some reports suggest that up to 5,000 people had the software (though some say it did not work).

A tweet from Daniel Colascione, Haystack’s lead developer, on September 13th 2010 added to the cringeworthy picture. “A whirlwind is coming straight for me…I flee”. That option will not be available to Haystack’s users in Iran, where the authorities have sometimes tortured and raped opposition activists. Ross Anderson, a professor of security engineering at Cambridge University, calls it “exceptionally stupid” to ship such a product in this way. The effect is to signal “I’m an important target, come get me,” he says.

The news follows other rows involving American companies and totalitarian regimes, including Google’s flirtation with Chinese censorship and Yahoo!’s failure to protect the identity of dissidents there who used its e-mail accounts. In September 2010 the New York Times accused Microsoft of colluding with the Russian authorities’ attempts to harass opposition groups, by backing false charges that they used pirated software. Now the American government is open to the charge of recklessness.

While geeks unpick Haystack’s technical failings, the political storm is growing. The unthinking praise for the project may have temporarily boosted Mr Heap’s Censorship Research Center. But the wider effect was to violate a central principle of democracy-promotion: “first, do no harm”.


This article was first published in The Economist in September 2010.

 

A time to kill

The professional and presumably state-directed killing of a leading Palestinian has been exposed in embarrassing detail. Perhaps such methods have had their day

USING SUBTERFUGE TO ENTRAP and kill adversaries, in locations far from any battlefield, has been a feature of conflict for the past 3,000 years or so – at least since Jael, one of the warrior heroines of ancient Israel, lured the enemy commander Sisera into her tent, lulled him to sleep with a refreshing drink of milk, and then used a tent peg to smash out his brains.

In modern times targeted killing is a more elaborate business, and many of the finer points – how the victim is stalked, how many people are involved – usually remain under wraps. But the plot to eliminate Mahmoud al-Mabhouh, a Hamas commander who was found dead in a Dubai hotel room on January 20th 2010, has been laid bare in stark detail by the police in that country, not normally regarded as a model of open government.

Hamas instantly blamed Mossad, the Israeli intelligence service, confirming that the dead man was a founder of the movement’s military wing. Israel had fingered him in particular for the abduction and killing of two soldiers in 1989. Mr Mabhouh’s brother claimed that he had been killed by an electrical appliance that was held to his head. The local police said he had been suffocated.

The gory details of his end were not made public in Dubai, but many of the events that led up to it were starkly exposed. Indeed any amateur student of espionage and its tradecraft can now consult YouTube, the video-sharing site, to see closed-circuit television footage of some of the 11 people (all travelling on European passports) who are said by the Dubai authorities to have joined in the plot. On February 15th 2010 the country’s police chief offered a blow-by-blow account of the plotters’ doings, elucidating the images.

The key agents were “Gail” and “Kevin”, who supervised the hit, and “Peter”, who was in charge of preparatory logistics. In the films their appearances change frequently: Kevin acquires glasses and a full head of hair after going to the loo. It is clear that the plotters were expecting Mr Mabhouh’s arrival. One spotter waits at the airport; he duly tips off a couple of colleagues, stout figures in tennis gear, who wait at the hotel and take note of the victim’s room number, 230. The plotters book room 237, which they use as a base. In later footage Gail and Kevin are seen pacing the corridor nearby. Four men in baseball caps, one also wearing gloves, are seen getting into a lift to leave; they seem to be the ones who did the job.

In Israel the initial reaction to the killing was of telling smirks, plus leaks to the effect that the victim was buying arms from Iran. But this gave way to embarrassment as the Dubai authorities produced their evidence, and as protests came from countries – Britain, France, Germany and Ireland – whose passports had apparently been faked or abused; and from individuals whose identities were “borrowed”.

The Israeli security services have never voiced any moral doubts about targeted assassinations (whether in the neighbourhood or farther afield) but there was a concern that the latest killing might go down on a list of plots that have misfired in unforeseen ways. In 1997, for instance, Mossad agents tried to eliminate Khaled Meshal, a senior Hamas official, in Jordan. Two agents posing as Canadians were caught trying to poison him and Israel, under threat that its agents would be executed, agreed to send an antidote. In 1973 Israeli agents murdered a Moroccan waiter in Lillehammer in Norway, mistaking him for the leader of Black September, the group blamed for a massacre of Israeli athletes at the Munich Olympics.

These bungles contrast with operations that Israeli spooks recall with defiant pride: the killing of Imad Mughniyeh, a top member of Hizbullah, in Damascus in 2008 (a particular coup since Syria is hostile territory for Israel); and the dispatch of Abu Jihad, a senior Palestinian official and founder of the Fatah movement, by a squad that swooped into Tunis in 1988.

The not-so-cold war

Israel has no monopoly on killing its foes far from home. European countries, including Britain (since the 1950s, anyway) claim to eschew such methods. But during the cold war both superpowers conspired eagerly to eliminate people they deemed undesirable. In America there was a rethink after a committee, under Senator Frank Church, disclosed that it was probing a web of plots to kill senior figures in countries like Congo, Cuba, the Dominican Republic and Vietnam. This led to a series of presidential decisions – most famously order number 12,333, signed by Ronald Reagan in 1981 – which barred assassinations.

The real force of such orders was to squelch rogue plots hatched in the lower levels of the security services; procedures still exist for the president, in consultation with congressional leaders, to authorise the killing of a perceived adversary. In 1998, three years before the 9/11 attacks, Bill Clinton mandated the capture or killing of Osama bin Laden, after bombs at American embassies in Kenya and Tanzania.

Since the start of the “war on terror”, the boundaries in American thinking between legitimate military action and cold-blooded assassination have become fuzzier still. Among America’s foreign-policy pundits there were serious discussions, back in 2003, as to whether simply killing Saddam Hussein would be a humane alternative to waging war against Iraq. As the fronts in the battle with al-Qaeda have broadened from Afghanistan and Pakistan to Somalia and Yemen, so too has the scope of American actions to eliminate perceived foes. In September 2009, for example, American helicopters fired on a convoy of trucks in Somalia and killed Saleh Ali Saleh Nabhan, who was blamed for an attack on an Israeli hotel in Kenya in 2002, and for the embassy bombs of 1998.

On February 3rd 2010 Dennis Blair, the director of national intelligence, told Congress that American forces might sometimes seek permission to kill a citizen of the United States, if he was a terrorist. This followed a report that Barack Obama had authorised an attack on Anwar al-Awlaki, a radical American imam, in Yemen.

The operation in Somalia earned Mr Obama a rebuke in the Harvard law faculty, where he first shone as a progressive young legal scholar. Such actions were counterproductive and of dubious legitimacy, a columnist in the Harvard Law Record argued. But defenders of the right to kill selectively cite the shooting down of Japan’s Admiral Isoroku Yamamoto in the second world war, which was quite a cold-blooded business – though he was clearly an enemy combatant.

In truth, the factor that has changed the tactics of the American administration is less legal than mechanical: the advent of drones that can be directed with lethal accuracy (most of the time) from offices in Virginia. The best-known target was Baitullah Mehsud, leader of the Pakistani Taliban, who was blown up at his home in Waziristan in August 2009. A study by the New America Foundation, a think-tank, points out that CIA drone attacks have become far more frequent since Mr Obama took office, with more strikes being ordered in his first ten months than in George Bush’s last three years.

In a world where Western voters demand maximum results for minimum expenditure of blood and treasure, assassination by machine has an obvious appeal to political leaders. Although drone strikes cost more “enemy” lives (including civilian ones) than old-time stabbing or poisoning, they also arouse less controversy. But for how long? Legal watchdogs say that killing by machine dehumanises the process and so makes unlawful killing more likely; and Pakistani officials, even those committed to fighting the Taliban, say the ruthless use of drones is alienating local people.

Whether death is by computer or by more old-fashioned methods, the antecedents and details of assassination are easier to hide in rough, remote locations than in rich, westernised ones. And even in wild places, awkward facts can come out – as they obviously did in Dubai.


This article was first published in The Economist in February 2010.


Hitmen old and new

Modern technology makes killing easier – but harder to get away with

ONLY A DECADE AGO the assassins who killed Mahmoud al-Mabhouh, a Palestinian Hamas commander found dead in a Dubai hotel room on January 20th 2010, would have disappeared into oblivion. Now that would be much harder, and not merely for the obvious reason that lenses are ubiquitous. Modern cameras capture more than blurred images: they record the precise bone structure of people’s faces. Digitised and interpreted by an algorithm, this information is fed to police computers all over the world.

The net is closing around old-fashioned secret-service methods. Biometric passports are already the norm in most European countries. Their chips hold easily checkable data such as retina scans, which are both unique and unfakeable. The thought of an easily disproved false identity fills spymasters with horror. They remember the fate of western agents, in the Soviet Union after the second world war, whose painstakingly forged identity documents had a fatal flaw: they used stainless steel staples, rather than the soft iron fastenings found in authentic Soviet documents. The tell-tale absence of rust allowed Stalin’s secret police to spot them.

The age of Facebook creates another problem. Creating a false identity used to be simply a matter of forging a few documents and finding a plausible life story. Nowadays, leaving an internet trail of convincing evidence for a fake identity is increasingly difficult – and a phoney detail is worse than none at all.

Even poisoning, for a long time the best way to hide a killing, may have become more difficult. The Soviet Union developed formidable expertise in the art of assassination, and (as a by-product of its germ-war and poison-gas efforts) in making toxins. A book published in Britain in 2009 by Boris Volodarsky, described as a former Russian military-intelligence officer, provided a glimpse into “The KGB’s Poison Factory” from 1917 until the present day. Its “successes” included the killing of a Soviet defector in Frankfurt with thallium in 1957, and that of Georgy Markov, a Bulgarian dissident, in London in 1978 with a ricin-tipped umbrella.

Toxin analysis has improved but sometimes it is only luck that reveals ingeniously administered substances. Alexander Litvinenko, a renegade Russian security officer living in London, was killed by poisoning with polonium, a rare radioactive substance, in 2006. His assassins – said by British officials to have had help from Russia’s security service – nearly got away with it. Had their victim died sooner, nobody would have tried the highly unusual test for that kind of radiation poisoning.

As another sign that sending hit squads to distant lands can go wrong, consider the tale of Zelimkhan Yandarbiyev, a Chechen ex-president killed in 2004 by a car-bomb in Qatar. The Qatari authorities, using well-honed surveillance, arrested three Russian officials; one had diplomatic immunity, but the other two were sentenced to jail. Only after a messy row between Russia and Qatar, and much damage to Russia’s ties with the Muslim world, did the pair return to Moscow – and a hero’s welcome.


This article was first published in The Economist in February 2010.