11

Global Exposure

States like these and their terrorist allies constitute an axis of evil, arming to threaten the peace of the world.

President George W. Bush, State of the Union Address, 2002, referring to North Korea, Iran and Iraq

Of course, Europe’s exposure to global influences was nothing new. There had been trade with the Far East in the Middle Ages. Gold and other commodities had been shipped across the Atlantic following the conquest of the Americas in the sixteenth century. After Ottoman rule had been established in the Balkans and southern parts of Hungary, Turkish invasions, seen as an alien threat to Christian Europe, were repulsed in Malta in 1565 and near Vienna as late as 1683. The Dutch established a trading base in the seventeenth century in what would eventually become Indonesia. The East India Company began the British colonization of India the following century. Wars were fought in the Caribbean in the eighteenth century. European imperialist expansion into Africa, Asia and other parts of the world followed a century later. And what is often referred to as ‘the first globalization’ set in from the middle of the nineteenth century as the telegraph and telephone, steamships and railways helped to foster a huge expansion of trade to all parts of the world. Then came the most brutal exposure – in two world wars separated by a global economic depression during the first half of the twentieth century. The long post-war recovery after 1945 saw Europe, eventually Central and Eastern as well as Western Europe, opened to the interests of American foreign policy, and to the dominant economic and cultural influences from across the Atlantic.

Even so, there was – and was felt to be – something new about the European global exposure in the early years of the twenty-first century. Conceivably, greater numbers of ordinary people in Europe were more aware of the intrusion of the rest of the globe into their lives than ever before in peacetime. The spread of the internet in the 1990s, above all, had made the world seem smaller. The new millennium ushered in highly accentuated forms of European global exposure. And one difference was of notable significance. In earlier centuries, quite especially in the era of imperialism, Europe had exported violence to other continents. In the first decade of the new millennium Europe had its initial taste of how violence could strike back.

The attack on the World Trade Center in New York on 11 September 2001 was not just a devastating introduction to Islamist terrorism for the United States but marked a caesura for Europe as well. The impact on the continent in the coming years was profound. This in turn had major consequences for attitudes towards immigration and multiculturalism – the attempt to integrate those from other cultures who had migrated to Europe – which became acute political as well as social issues. The potential for a dangerous cultural clash between Western values and those of Islam, as forecast by the American political scientist Samuel Huntington in the 1990s, seemed to loom larger.

A second new, or at least greatly altered, form of exposure came from the globalized economy and its pervasive effect on everyday life. Although partly an intensification of trends already long under way, the ‘second globalization’ was more than just a continuation of well-established developments. The communications revolution that flowed from the inordinately rapid spread of computer technology and the vast expansion of a deregulated financial sector ensured that in the scale and depth of its impact, quantitative change amounted to a qualitative shift. Connections across the globe became not just easier, but almost instantaneous. Not only business was transformed. Europe, like the rest of the world, was interconnected and interdependent as never before. The internet, mobile telephones and e-mail permeated practically all aspects of society. What would have been unthinkable half a century earlier had become reality.

THE ‘WAR ON TERROR’

On 20 September 2001, little more than a week after the attack on New York, President George W. Bush declared a ‘war on terror’ that would only end when ‘every terrorist group of global reach’ had been defeated. States had historically declared war against other states. What war meant in such instances was usually plain. War against an abstraction, on the other hand, lacked clear definition. But the rhetorical value of a ‘war on terror’ was unquestionable. It captured the mood in America, and in most of the Western world, in the immediate aftermath of the atrocity. It was a mood thirsting for revenge.

The attack on America had been planned in Afghanistan, a fractured, lawless, ultra-violent country, wracked by civil war since the end of the Soviet occupation in 1989. In the preceding decade, the United States had provided the local warlords and tribal leaders in Afghanistan, the Mujahedin, with weapons and finance to fight against the Soviets. Once the Soviets withdrew, the USA lost much of its interest in the region. But Pakistan did not, and the Mujahedin continued to thrive, now with Pakistani sponsorship. The warlords controlled their own domains. The writ of the government in Kabul did not run, unless the warlords permitted it to do so. And the Mujahedin were not just anti-Soviet; they were anti-Western as well. This was the soil in which the seeds of terrorism directed at the West, and directly at the United States, could germinate.

Osama bin Laden, scion of an exceedingly wealthy Saudi family, had founded the loose organization of Al-Qaeda in 1988 in order to fight a ‘holy war’, and had located his base in Afghanistan when he moved there from Sudan in 1996. One of the ringleaders of the failed attempt to blow up the World Trade Center in New York in 1993 had received some of his training in an Al-Qaeda camp in Afghanistan. In 1996 the Taliban, a militant movement originally formed in Pakistan that avowed the most extreme, fundamentalist form of Islam and had been responsible for countless atrocities in its fight against Afghanistan’s corrupt and widely unpopular government, seized the capital city, Kabul. The Taliban had been supported militarily by Pakistani intelligence and had gained financial backing from wealthy sources in Saudi Arabia. They were soon able to extend their vicious rule over more than two-thirds of the country. After moving to Afghanistan, bin Laden joined forces with the Taliban leader, Mullah Mohammed Omar, and declared war on the United States and the West. He was behind the attacks on two American embassies in East Africa in 1998, and, according to information that the CIA passed on to President Clinton, was planning future attacks in the USA itself.

So both Bin Laden and Afghanistan as the base of Al-Qaeda shone brightly on Washington’s radar screen long before the atrocity of 11 September 2001. Once the terrible blow had fallen, it was obvious that armed American retaliation would swiftly follow. Within three days Congress had authorized the President to use whatever force he thought necessary to destroy both the organizations responsible for the attack and the state that had sponsored the terror. That plainly meant, once the Taliban leadership had refused to hand over Bin Laden to the Americans, an invasion of Afghanistan with the primary aims of destroying Al-Qaeda (capturing or killing Bin Laden in the process) and crushing the Taliban. ‘Operation Enduring Freedom’, the assault on Afghanistan, commenced with a bombing campaign by American and British forces on 7 October 2001.

The participation of British forces had been a certainty from the outset. The British Prime Minister, Tony Blair, had (he wrote later) immediately on hearing the news of the outrage of 11 September viewed the attack on the Twin Towers in Manhattan as ‘in a very real sense, a declaration of war’ by Al-Qaeda not just on America but on all the civilized world. That very evening he declared on television that Britain would stand ‘shoulder to shoulder with our American friends in this hour of tragedy, and we, like them, will not rest until this evil is driven from our world’.

Other European leaders were more circumspect. Germany, for instance, decided in mid-November by only a single vote in the Bundestag to send nearly 4,000 Bundeswehr soldiers to Afghanistan. There was nevertheless wide international solidarity with the American objectives and the country’s right to defend itself after the attack on New York. France, Italy and Russia were among the many countries that offered active support to the United States. ‘France,’ President Chirac declared, ‘will not stand aside’ in what had been an attack on all democracies. He added, prophetically: ‘Today it is New York that was tragically struck, but tomorrow it may be Paris, Berlin, London.’

At first the war went well for the Western coalition forces. The Afghan anti-Taliban forces (calling themselves the United Front or the Afghan Northern Alliance), which controlled about a third of the country, undertook the ground fighting and, supported by the heavy air strikes, retook Kabul in November 2001. By early December they had forced the Taliban out of Kandahar in the south, their last stronghold. A small International Security Assistance Force was set up under the aegis of the United Nations later in December – soon to be supported by over twenty countries – to defend Kabul and to help establish a transitional government to be headed by Hamid Karzai, under the sponsorship of the Americans and the British.

By December 2001 it looked as if the worst was over. In fact, it was only just beginning. The Taliban had retreated. But they were far from eradicated and were soon rebuilding their strength. And Osama bin Laden, together with many of his closest supporters and much of the Al-Qaeda terrorist network, had managed to escape into the remote mountainous tracts of western Pakistan. So the Western powers were unable to accomplish conclusively either of their primary objectives. Without any obvious plan for securing the peace once they had vanquished the Taliban, they were left supporting an unstable, corrupt regime and trying to pacify a large, intensely violent and unruly country. It would be another thirteen long years and cost the lives of many tens of thousands of people – about 4,000 of them from the coalition forces, but far more from the Afghan population – before, in 2014, the British ended their combat role, the Americans announced they were withdrawing all but a residual force, and NATO (which had become involved in Afghanistan in 2003) pulled out, transferring responsibility to the Afghan government. Few objective observers could by then claim that the long stay of the Western powers in Afghanistan had been an undiluted success. Events were to justify this pessimistic conclusion: the Afghan government was to prove incapable of halting renewed advances by the Taliban, leading Washington to reverse the earlier decision to withdraw and to announce in 2017 that thousands more American soldiers would be sent to Afghanistan.

What had begun primarily as American retaliation had drawn into the war a global coalition of forces from as many as forty-three countries, mostly NATO members. European countries were heavily involved. Of over 130,000 foreign troops based in Afghanistan at the height of the conflict in 2011, about 90,000 were Americans. Most of the remainder were European. The largest European force, some 9,500 troops, was from the United Kingdom, but sizeable contingents also came from Germany (around 5,000), France (4,000) and Italy (4,000). Poland, Romania, Turkey and Spain made further substantial contributions to NATO forces, while numerous other European countries provided smaller contingents.

As the British had learned in the nineteenth century and the Soviets in the 1980s, Afghanistan was forbidding territory for occupying forces. That proved to be the case once again in the early years of the twenty-first century. Partly this was because the overall aims of the war in Afghanistan were not clearly specified. Were they simply to destroy Al-Qaeda and eliminate the Taliban? If so, whatever the misleading initial signs of success, the war plainly failed. Or was the goal a far wider one – as Blair insisted, no less than that of reconstructing Afghanistan as a viable democracy? ‘We were in the business of nation-building,’ he later wrote. Of course, the two goals were interlinked. It was felt that, in order to remove the incubus of terror, a firm basis of modern government had to replace the failed state in Afghanistan. But in the understandable desire for the swiftest retaliation against the perpetrators of 9/11, the difficulties involved in the wider goal had been gravely underestimated. Planting Western-style liberal democracy in such infertile ground was a thankless, largely impossible task. It had mostly proved a failure in much of Europe after the First World War. In Afghanistan the prospects were even more daunting. It would, with the greatest optimism, be a task for generations, not for a few years. But the forces sent to destroy the Taliban and Al-Qaeda, once there, had no easy means of extricating themselves from a rapidly worsening situation. Terror, it rapidly became plain, was here to stay – and not just in Afghanistan.

This was a second major miscalculation in embarking on the Afghan War. The Americans and their European allies had underestimated the novelty, character, scale and sheer menace of the threat they faced in international terrorism. The phenomenon was not altogether new; it had been around in the Middle East and known to Western intelligence services for about three decades. And of course, a number of European countries were no strangers to internal terrorism. The violence of the IRA had seriously afflicted Northern Ireland (and to a lesser extent also the British mainland) since the late 1960s. Spain had an equivalent problem with the Basque separatist organization, ETA. Both West Germany and Italy had had to contend with serious home-grown terrorism during the 1970s. But, although each of these manifestations of terror had killed and maimed extensively, the Islamist terrorism of the twenty-first century was essentially different in character and posed an infinitely greater threat. Earlier terrorist organizations had had limited goals. They had targeted nation states. They had wanted to acquire national independence (like the IRA and ETA) or attack capitalism in specific states (like Baader-Meinhof in West Germany and the Red Brigades in Italy). Their terror had aimed primarily at the representatives of the states and systems they were assailing – politicians, soldiers, police, business leaders. Many innocent bystanders were certainly killed in their atrocities. But the numbers of casualties would have been far greater had coded warnings of bombings not usually been given. Moreover, the terrorists themselves had mainly sought to escape with their own lives from the horror they were inflicting on others.

With Islamist terrorism all this altered. It operated globally, not nationally. It was decentralized and international in its personnel, its targets, its acquisition of weapons, and its use of modern mass media to disseminate its propaganda. Its exponents – a major change – were willing, even anxious, to kill themselves in carrying out their terrorist acts, seeing themselves as martyrs in an apocalyptic cause. And this cause was utterly unlimited: the destruction through worldwide Islamic revolution of all Western, liberal values and their replacement by the ‘true’ values of fundamentalist Islam. The culture to be destroyed was epitomized by the United States and its allies. Israel, too, and Jews more generally, who (in a variant of age-old conspiracy notions) were taken to stand behind the power of the West, were also slated for destruction. To attain these chiliastic goals Islamist terrorism did not just accept that there would be civilian casualties. It actively sought to maximize the numbers of innocent civilians killed. The greater the shock, ran the thinking, the more the impact of terror was felt, the more the power of the West would be corroded, and the closer the aims of the terror would come.

There was widespread sympathy in Europe for the American-led war in Afghanistan – certainly in its early stages. The aim of destroying the Taliban and Al-Qaeda was hugely popular. Before 2001 many Europeans might even have struggled to name the capital of Afghanistan. But soon names such as Kandahar, Helmand Province, Hindu Kush or Lashkar Gah became widely familiar from television news bulletins. Those frequent bulletins, often giving doleful reports of Western soldiers killed or the numbers of innocent victims caught up in suicide bombings, themselves made plain that the war was dragging on interminably. They offered the clearest indication that the enemy was far from defeated. And gradually, but inexorably, the initial popularity of the war evaporated.

In any case, interest in Afghanistan was soon overtaken by a second strand of the ‘war against terror’. The invasion of Iraq in March 2003, undertaken by an American-led force again predominantly supported by the British, was a far more divisive issue than Afghanistan had been, encountering from the outset heated opposition and rapidly proving disastrous in its consequences.

For Europeans, a war against Saddam Hussein’s Iraq was an entirely different matter from the war to uproot and destroy Al-Qaeda and the Taliban in Afghanistan. Nothing connected Saddam with bin Laden’s plot to attack America. So, unlike Afghanistan, there was no cause for retaliation. If the case were to be made at all, it had to be on entirely different grounds. These would prove highly contentious. Beyond other considerations, any attack on Iraq raised acute questions of legal justification. And its wider ramifications were incalculable. It would amount to a dangerous extension of the ‘war on terror’. Domestically, the issues raised by the war split governments and families alike across Europe.

Saddam, few doubted, was a brutal dictator who, backed by his loyalist Ba’ath party and a fearsome security apparatus, ruled Iraq with a rod of iron. He terrorized his own people and – as his invasion of Kuwait in August 1990 had most plainly demonstrated – was a threat to the whole region. Torture, summary executions, and other grave violations of human rights were commonplace under his regime. He had used chemical weapons in the war against Iran and against the Kurds in northern Iraq. Political, ethnic and sectarian killings – the last of these mostly inflicted on the majority Shia population of Iraq by the Sunni-dominated government – probably accounted for over a quarter of a million victims, not counting the dead from the wars against Iran in the 1980s and the Gulf War of 1991. Partly as a consequence of the broad economic sanctions imposed upon Iraq by the United Nations following the invasion of Kuwait in 1990, which remained in place throughout the decade, but also through brutal repression, Saddam had reduced much of the population of his once wealthy country to poverty. It was an appalling record of a detestable regime and a hateful dictator. But did it give Western countries, with the United States in the lead, the right to take military action to depose Saddam from power?

Iraq had already been classed in the 1980s by policy advisors in the United States as a ‘rogue state’, a definition that included the aim of building ‘weapons of mass destruction’. A number of prominent ‘neo-conservatives’ (as they were coming to be labelled) – ideologically committed to utilizing American military hegemony to impose an international Pax Americana, and who would later occupy positions of importance in the Bush administration – were urging President Clinton as early as 1998 to take military action to topple Saddam. In July 2001, two months before the attack on the Twin Towers, the Defense Department, headed by a forceful ‘neo-con’, Donald Rumsfeld, had already prepared concrete plans for military intervention in Iraq. On the day after the fateful events in New York, the Bush cabinet deliberated the issue. Afghanistan had at this point evident priority. But that was soon to change.

In his address to Congress on 29 January 2002, speaking only weeks after the overthrow of the Taliban, when final victory in Afghanistan seemed assured, President Bush singled out Iraq as part of an ‘axis of evil’ that, through the acquisition of weapons of mass destruction, threatened world peace. It would become obvious over the following months that American attention was turning towards Iraq as the next stage of the ‘global war on terrorism’. Bush’s ‘axis of evil’ speech was hugely popular in the United States. Public opinion had been drastically affected by 9/11. It overwhelmingly favoured the President’s all-out assault on what were seen as the sources of global terror. There was cross-party support, too, for action. Already in December 2001 Republican and Democratic senators had jointly reminded the President that his policy was ‘regime-change’, and called for the removal of Saddam. Iraq was by now plainly high on the American agenda.

European government leaders – and the citizens of their countries – were mostly far more hesitant. They were worried about the growing likelihood of war in Iraq and its possible consequences. They did not see any real link between the acknowledged threat of Al-Qaeda and Iraq. Tony Blair was the outright exception in the alacrity with which he offered British support to President Bush. Blair, like Bush, was from the outset moved by a fervent emotional conviction of the urgent need to eliminate by force the grave international existential danger that he saw presented by an Iraq in possession of weapons of mass destruction. By the time he visited President Bush at his Texan ranch in April 2002 he had already reached the conclusion ‘that removing Saddam would do the world, and most particularly the Iraqi people, a service’. He had seen the effectiveness of Western intervention to protect the population in Kosovo in 1999 and had ordered British troops to intervene – which again they did successfully – in the civil war in Sierra Leone (once a British colony) the following year. Most recently he had seen the Taliban expelled (for good, as was then thought) from Afghanistan. So Blair endorsed the aim of ‘regime change’ in Iraq with almost missionary zeal. British support for the United States could not, however, as he acknowledged, rest upon the destruction of Saddam as a tyrant, however welcome that might be. It would not suffice in international law. And it would not be enough to win the backing of the British people for war. The crucial issue, he stressed, was the possession of weapons of mass destruction.

Saddam was believed by American and British intelligence to be rebuilding Iraqi stocks of biological and chemical weapons that the United Nations had forced him to destroy in 1991 after the first Gulf War. With great reluctance, and under threat of attack for non-compliance under United Nations Security Council Resolution 1441, Saddam allowed a team of United Nations weapons inspectors, headed by the Swedish diplomat Hans Blix, into his country in November 2002. When Blix’s team reported on 7 March 2003, it stated that it had found nothing. By then, however, the findings were secondary. The American administration had already made up its mind. The President had told Rumsfeld and Secretary of State Colin Powell two months earlier that he was determined on war against Saddam. Blair, too, had long since made up his mind. The previous July he had written a private and confidential note to Bush. ‘I will be with you, whatever,’ he had assured the President.

What followed the decisions, already taken in principle, to go to war was the process of convincing the American and British public that Saddam did indeed threaten the West with weapons of mass destruction, despite the negative findings of the weapons inspectors. Both the Bush administration and the Blair government continued, in the face of growing public scepticism, to insist that the weapons would eventually be located, seizing upon imperfect, unfounded and speculative reports from their intelligence agencies to advance a case in public that was, in reality, deeply flawed. Secretary of State Powell told a plenary session of the United Nations Security Council on 5 February 2003 that ‘there can be no doubt that Saddam Hussein has biological weapons and the capability to rapidly produce more, many more’. Blair had continued to stress to the British public in 2002 that the objective in Iraq was disarmament, not regime change. But in September 2002 and February 2003 his government had released dossiers to prepare the public for armed intervention in Iraq, claiming that Saddam possessed weapons of mass destruction, was building a nuclear capability, and could deploy chemical or biological weapons within forty-five minutes of an order to use them. It was an alarming scenario – but, as it proved, wrong.

Such statements from political leaders nevertheless did the trick. In October 2002 Congress gave President Bush a free hand to act against Iraq as he thought appropriate to defend the security of the United States. Only about a third of the members of the House of Representatives and just under a quarter of the Senate (overwhelmingly Democrats) withheld support. The proportion of the public opposed to military action was similarly small – only just over a quarter of Americans according to opinion polls in February 2003. The majority thought action against Iraq justified, though preferring it to be backed by a mandate of the United Nations. The public, it was plain, had been persuaded by the statements of the President and Secretary of State. Most of them had indeed, opinion surveys demonstrated, swallowed the belief that Iraq was behind 9/11.

The British House of Commons on 18 March 2003 backed an invasion of Iraq by an even larger majority than the American Congress had done (412 to 149 votes). No more than a quarter of Labour members were opposed, and only two Conservatives. Public opinion was less supportive of action than in the United States: 54 per cent favoured it, with no more than 38 per cent opposed, and the indications were that public support was fragile and on the verge of waning rapidly. As in the USA, the British public (and Parliament) had, however, been sold the case for war on a false prospectus. It was almost certainly not the case, as later often claimed, that Bush and Blair had overtly lied in arguing for war. But they had both, from different ends of the political spectrum (a Republican President and a Labour Prime Minister), misled their people. They had used knowingly unverified and flawed intelligence to construct arguments that rested, ultimately, on little more than their own unshakeable belief – whatever the findings by Blix’s team of inspectors – that Saddam did indeed possess weapons of mass destruction. Both were determined on ‘regime change’ in Iraq, though they – Blair more than Bush – hid this motive behind the need to eliminate the imminent menace Saddam posed to the world. And both – this time Bush far more than Blair – were prepared to act, if necessary, without sanction from the United Nations.

There had been massive protests against a war in Iraq in Britain and all across Europe the previous month. Around a million people, one of the largest protest rallies in British history, demonstrated in London on 15 February 2003. Huge anti-war demonstrations also occurred in Germany, France, Greece, Hungary, Ireland, the Benelux countries, Portugal and in other European countries, the largest being in Italy (around 3 million) and Spain (1.5 million). More than 10 million people worldwide were estimated to have joined in demonstrations.

The prospect of war against Iraq divided Europe more sharply than at any time since the fall of the Iron Curtain. While the British perception of its ‘special relationship’ with America encouraged Blair, as over Afghanistan, instinctively and uncritically to stand ‘shoulder to shoulder’ with President Bush, France, under President Chirac, took a diametrically opposite view. France had, of course, from Charles de Gaulle’s time, built a strong anti-American strain into its foreign policy. But Chirac’s stance towards war in Iraq had little or nothing to do with traditional French anti-Americanism. His well-founded objection was that war in Iraq would inflame Muslim anti-Western feeling. He made clear in January 2003 that France would not join in any military action. Gerhard Schröder, head of Germany’s coalition government of Social Democrats and Greens, was also resolutely opposed to the war. Germany, in fact, went even further than France, declaring that there would be no German participation even in the event of a United Nations mandate. Belgium and Luxembourg backed the French and German line. But the European Union itself was riven. The Netherlands, Italy, Spain, Portugal and Denmark, together with the states that, having once been behind the Iron Curtain, had subsequently joined NATO and were preparing to join the European Union, supported the war. Not only did the split run through the European Union; it also provoked the most serious crisis in NATO since its foundation in 1949. Some NATO countries joined the coalition; others did not. NATO itself took no part in the planned invasion, though its forces provided defensive support for Turkey (which felt threatened by neighbouring Iraq).

The ‘coalition of the willing’, as the Americans dubbed those countries prepared to support the military intervention, enjoyed nothing like the level of international backing (especially within the Middle East) that had been behind the first Gulf War of 1991 – generally seen as a legitimate intervention to block obvious Iraqi aggression against another country. When the invasion did take place, only the United Kingdom and, in small numbers, Poland and Australia provided combat troops alongside those of the United States.

The neo-conservative right in America responded to the deep divisions in Europe by stirring anti-European feeling in the United States. Defence Secretary Donald Rumsfeld denounced France and Germany as representatives of ‘Old Europe’, praising in contrast Central and Eastern European countries which, in putting themselves on the side of the USA and Britain, constituted the ‘New Europe’. The French came in for particular opprobrium. One press article in 2002 had already disgracefully dismissed them (with the capitulation in 1940 in mind) as ‘cheese-eating surrender monkeys’. ‘French fries’ were renamed ‘freedom fries’ in the cafeterias of the American Congress (though most Americans in fact thought the gesture was silly, and the French Embassy pointed out that French fries had actually originated in Belgium). Behind the absurdity lay some serious reflections on the divergence of American and European approaches to war. ‘On major strategic and international questions today, Americans are from Mars and Europeans are from Venus: they agree on little and understand one another less and less’, was the view of Robert Kagan, author of an influential book, Of Paradise and Power, published in 2003. ‘American leaders,’ Kagan concluded, ominously, ‘should realise that … Europe is not really capable of constraining the United States.’

Resolution 1441 of the Security Council in November 2002 had given Saddam Hussein ‘a final opportunity’ to comply with demands for weapons inspection. But the implied threat had not specified that failure would result in military action. Nor was it clear that Saddam was avoiding full compliance. Blix himself was ambivalent, though by February he was indicating that there was greater cooperation. Saddam hardly helped his own case. As a bluff to deter military action he had never categorically denied the charge of possessing weapons of mass destruction. It was a perverse and fatal mistake. Blix had reported no findings of such weapons. But were they still hidden somewhere?

In March 2003 it was plain that the French and the Russians would veto any further resolution by the United Nations Security Council to approve military action in Iraq. Was a new resolution, however, strictly necessary? The Americans, by this stage evidently impatient with the United Nations, were already determined to act, with or without a resolution. In London the highly dubious legal advice given to the government by the Attorney General, Lord Goldsmith, was that action was covered by Resolution 1441 (though he had initially taken a diametrically opposite view). With that the British government – apart from Robin Cook, the former Foreign Secretary and by then Leader of the House of Commons, who resigned – was on board. War would go ahead without a mandate from the United Nations. It consequently lacked international legality. The United States had effectively decided itself, without regard for international law, how and when to prosecute war.

The invasion of Iraq started on 20 March 2003. Saddam’s forces were plainly no match for the (mainly) American invaders and within three weeks the military campaign was won. Baghdad was taken by 9 April and fighting ceased. Coalition casualties were minimal. The pictures of the statue of Saddam being toppled by crowds in the city centre went round the world on television. Saddam himself had fled, though it was presumed only a matter of time until he was captured. (He would indeed be discovered in December 2003 near his hometown of Tikrit and was later tried by an Iraqi court for crimes against humanity, sentenced to death, and hanged on 30 December 2006.) With the war over and the dictatorial regime ended, there was a sense both of relief and of self-congratulation. In a scene of extraordinary hubris on board the American aircraft carrier USS Abraham Lincoln on 1 May, President Bush, attired in a flight suit, addressed sailors (and the watching world on television) under a banner reading: ‘Mission Accomplished’. In fact, the descent into long years of chaos and horrific bloodshed in Iraq, with lasting consequences for the United States and its allies, was just beginning.

President Bush had spoken before the invasion of building a democratic Iraq. But Iraq was not Germany in 1945. The occupiers showed scant awareness of the problems they were facing, or the sensitivities of Iraqi culture and politics. The administration of occupied Iraq under the American diplomat Paul Bremer proved utterly incompetent, its disbanding of the Ba’ath Party and the Iraqi army being enormously damaging own-goals. The Shia-dominated government installed by the Americans even intensified the gathering sectarian conflict by blatant discrimination against the Sunni minority – reduced from the former ruling elite to second-class citizens. Even worse was the torture and degrading treatment of Iraqi prisoners by their American captors in the Abu Ghraib jail, which was exposed to a worldwide television audience in 2004. The reputation of the United States had already been dragged through the mire by the treatment of several hundred prisoners, mainly from Afghanistan, suspected of terrorism and interned without trial in the detention camp at Guantanamo Bay in Cuba that had been created in January 2002. Now it hit rock bottom. Abu Ghraib made a mockery of the values of humanity and justice that the United States – and the rest of the Western world – claimed to represent. Whatever goodwill initially existed towards the USA and Britain for ending the tyranny of Saddam Hussein was replaced by widespread and growing hatred of the occupiers of Iraq as political anarchy and uncontrollable violence became the hallmarks of everyday life. Saddam had been terrible. But what replaced him, thanks to the ill-conceived and badly executed occupation that lacked any coherent concept of post-Saddam order, was in the eyes of many worse. The disastrous consequences ran far beyond Iraq itself.

The invasion, the treatment of Iraq by the conquerors, and the power vacuum that replaced Saddam Hussein’s dictatorship were a gift to international jihadist terrorism. After the Iraq War incidents of worldwide terrorism, already growing in the second half of the 1990s, soared stratospherically. Some 500 incidents in 1996 had grown to 1,800 in 2003; by 2006 there were around 5,000. By far the worst-affected area was the Middle East itself. There were an estimated 26,500 terrorist attacks in Iraq alone in 2004. The number of Iraqis killed in opposing the occupation and in internecine conflict within the country after the invasion has been put at half a million. The Western world suffered relatively little. Between 1998 and 2006 attacks on targets in Britain are estimated to have accounted for 4 per cent of the jihadi total, those in Spain 2 per cent, Turkey 4 per cent, Russia 11 per cent (related chiefly to the conflicts in the Caucasus, especially Chechnya) and the USA 2 per cent. Nevertheless, after the Afghanistan and Iraq wars Europe could not avoid increasing exposure to international terrorism.

Britain, as the most important ally of the United States (and as a former imperialist power), was particularly threatened. Its close connections with Pakistan offered good opportunities for the cross-fertilization of jihadist ideas and recruitment to terrorism from within the population of Pakistani origin. But jihadist networks, many linked to or inspired by Al-Qaeda, were also uncovered by intelligence services in several other European countries, including Germany, France, Italy, Spain, the Netherlands, Belgium, Poland, Bulgaria and the Czech Republic. The spread of the internet was the great facilitator of jihadist indoctrination, inside and outside Europe. Pro-terrorism websites increased in number from about twelve in 1998 to over 4,700 by 2005.

Intelligence gleaned by security services about potential attacks offered the main defence against terrorist outrages. But it did not always work. On the morning of 11 March 2004, bombs planted on busy commuter trains in Madrid killed 192 people and injured about 2,000 others. Bin Laden had threatened retaliation against America’s European allies. The government run by José María Aznar’s conservative Partido Popular in Spain had supported the war, highly unpopular though this had been among most Spaniards. The outrage in Madrid had a direct political motive. In the general election that took place three days after the bombings Aznar’s party paid the price. Many voters, according to surveys, switched support after the bombings to the Socialist Party, which had opposed the war. On winning the election the new Prime Minister, José Luis Rodríguez Zapatero, promptly withdrew Spanish troops from Iraq.

A year later terror struck Britain. On 7 July 2005 three bombs on Underground trains in London and a fourth on a bus in the city centre were detonated by Islamist terrorists, killing 52 people and injuring a further 700. The suicide bombers were all British citizens, unknown to the security services. They claimed that they were acting as soldiers of Islam and in retaliation for British oppression of Muslims in Afghanistan, Iraq and elsewhere. Britain had experienced a major terrorist attack connected with the Middle East once before, in December 1988, when a Libyan bomb placed on an American passenger plane, en route from London to New York, exploded over Lockerbie in Scotland with the loss of all 259 passengers and crew (and a further 11 victims on the ground, killed by falling wreckage). Retaliation against the United States for American airstrikes against Libya during the 1980s had been the apparent motive. But in contrast to Lockerbie, the July 2005 bombings took place in the heart of the capital city, and were targeted not at America but directly at Britain itself. The wars in Muslim countries had indeed rebounded on Europe.

Subsequent years would demonstrate that no European country would be safe from Islamist terrorism. France and Germany, despite their opposition to the Iraq War, would not be spared. Islamist fundamentalism would strike where it could do so most effectively – against easy targets with maximum loss of life and the greatest possible publicity. It was not selective in its enemies. It amounted to an attack on Western civilization as a whole.

Although the resurgence of Islam as a widening source of Muslim identity dated back to the 1970s, the wars in Afghanistan and, especially, in Iraq gave it a huge boost. The fateful intervention in Iraq and the complete mishandling of the early phase of the occupation fertilized the growth of multiple terrorist organizations that would come in the following years to haunt the West. Some of the most deadly formed along the sectarian fault lines that widened immeasurably after the Iraq War. The deepened split between Sunnis and Shias now added its own big contribution to the growing problem of Islamist terrorism. It affected the complex geopolitics of the Middle East, which were further destabilized by Iran supporting the Shias and Saudi Arabia the Sunnis. Since Iran was closely linked with Russia while Saudi Arabia, where the Salafist form of fundamentalist Islam predominated, was a major ally of the United States, Britain and other European countries, Europe was certain to remain highly exposed to the continuing traumas of the Middle East.

GLOBALIZATION’S JANUS-FACE

The end of communism gave a great boost to a globalized economy. In the early 1990s (as indicated in the previous chapter) the former eastern-bloc countries struggled under its impact. But in the second half of the decade they started to profit from global economic growth, as Western Europe was doing. Between the mid-1990s and the abrupt end of growth in 2008 Europeans – both east and west – enjoyed the benefits of a booming economy. At least, most of them did. Globalization provided material advantages that earlier generations could scarcely have imagined. There was new economic dynamism. World trade flourished as goods flowed across borders as never before. By the end of the first decade of the twenty-first century trade was six times higher in volume than it had been at the time of the fall of the Berlin Wall. Europe’s proportion of the world economy had in fact been in long-term decline since the 1920s. That is, Europe’s economy had grown in size, but other parts of the world had grown faster. In 1980 Europe had still accounted for around a third of world trade, but three decades later it was only about 20 per cent. The creation of a major trading bloc in Europe was, however, of great importance. Without the steps towards European integration the relative decline would almost certainly have been greater. For the widened European Union had by the turn of the millennium become the largest trading bloc in the world, ahead of the United States and China in volume of exports and imports.

Production and distribution of goods had over recent years become international on an unprecedented scale. Enormous multinational firms (and, increasingly, technology giants) were the great beneficiaries. Components for car manufacture were increasingly made in more than one country, and assembled in yet another. Japanese firms, among the world’s largest car manufacturers, had big components factories in several European countries; Toyotas, Hondas and Nissans were among the most popular cars on Europe’s roads. Consumers took globalization for granted. They could buy a cornucopia of products from across the world at often astoundingly cheap prices. Consumer spending soared. Electronic goods, children’s toys, clothing and a plethora of other commodities flowed from countries in East Asia that were undergoing unprecedentedly high rates of economic growth – the ‘tiger economies’ of South Korea, Singapore and Taiwan – but above all from China, whose economy was emerging as the largest after that of the United States. Europe provided expanding markets, too, for goods and expertise in computer software from India, another fast-growing economy. The shelves of European supermarkets bulged with staggering choices of foodstuffs from every part of the globe. Fruit and vegetables, once available only seasonally, were imported from distant warm countries. Myriad choices of Mediterranean and Middle Eastern dishes, innumerable kinds of pasta, oriental spices and other food products catered for practically every individual taste. Wines came not just from all over Europe but also from Australia, New Zealand, California, Argentina and Chile, available at low prices that would have been unimaginable a generation earlier.

As manufacturing continued its long-term decline in European countries, services replaced it almost everywhere as the dominant economic sector. By the end of the twentieth century, services accounted for between two-thirds and three-quarters of employment in most European countries. Only a minority now worked on farms or in big factories. Most employees were engaged in administration, organization, or the commercial arrangements of production, not in the actual manufacture of products. Logistics – the organization of the flow of goods around the world – turned into a booming sector of industry and commerce. The number of transnational firms more than doubled between 1990 and 2008. There was an even faster growth in the number of subsidiary companies. ‘Outsourcing’ – contracting out parts of a business, whether administration, manufacture or distribution of products, to a subsidiary, sometimes based abroad – had become a key element of a globalized economy. Governments outsourced public services to private firms to cut back on state expenditure. But most outsourcing was done by private firms themselves. Tax burdens could be minimized by relocating to countries with a low taxation regime. Moving production overseas to countries where labour was cheap, while retaining a European headquarters, could boost profits – a process that had been gathering pace for decades. Three-quarters of the labour force of Dutch multinational companies, for instance, had already been employed abroad in the 1970s. Outsourcing also often meant handing over elements in the production and distribution chain to self-employed people, enabling firms to avoid cumbersome and costly obligations under labour law, though frequently transferring onerous working practices onto the self-employed in small businesses.

Communications and transnational relations were by the first decade of the twenty-first century utterly transformed. The rapid spread of the internet, especially after the creation of the World Wide Web (invented in 1989 by Tim Berners-Lee and made available to the general public two years later), spearheaded a revolution that was transforming the possibilities of communication and the availability of knowledge and information at breathtaking speed and in previously unimaginable ways. Goods could be ordered from overseas at the touch of a computer key and delivered to the front door with astonishing rapidity. People could contact each other by e-mail across the world in seconds (drastically reducing the use of postal services in the process). Financial transactions and transfers of capital could be completed just as fast. Total European foreign direct investment abroad overtook that of America for the first time since the Second World War. Foreign investment in Europe itself was by the end of the century almost double that in the United States. Once euro notes and coins had been introduced in 2002, currency transactions in Europe were greatly simplified. Business benefited. So did foreign travellers.

Travellers could enjoy remarkably cheap air travel that allowed easy access, for business or pleasure, to far-flung destinations. Even after 9/11 had brought fortress-like changes to airport security, the thirst for foreign travel (and the ease of undertaking it) was scarcely affected. International tourism was big business. People moved from continent to continent as never before. Travel to international conferences and business meetings expanded. Students, under the European Union’s Erasmus programme, could study without difficulty in countries outside their own, transferring their qualifications from one university to another beyond national borders. It had also become far easier, with European citizenship, to move jobs or homes from one country to another. Millions of Europeans now willingly, not just from economic necessity, lived outside their country of birth. Culturally, Europeans across the continent had lost many – though certainly not all – of the differences that once had separated them. There were remarkably similar tastes, transcending national boundaries, in music (popular and classical), film, theatre and art. East and West Europeans were now indistinguishable in the clothes they wore. International news channels carried a significant proportion of very similar stories (with, of course, national or regional slants).

In these, and in many other ways, globalization was rapidly transforming – and improving – people’s lives. It was in countless respects an enormous boon, extending material comforts to ordinary citizens that had less than half a century earlier been the preserve only of small, relatively wealthy sectors of society. The trends in globalization, though not in themselves new, were massively accelerated, thanks mainly to the communications revolution. The huge benefits nevertheless came at very considerable cost. Globalization was plainly ambivalent – a phenomenon with a Janus-face, partly positive, partly negative. It was impossible to have one without the other.

Globalization had many losers as well as winners. One striking feature of its impact was the rapidly widening disparities of income and wealth. Inequalities had been reduced in the first two post-war decades, but had then started to rise again in a trend that accelerated as the twentieth century approached its end. The income of the top tenth of the population, even more strikingly of the top 1 per cent, rose significantly faster in most countries than that of the bottom tenth. A highly educated, technologically skilled managerial class was able to benefit disproportionately – the higher up the ladder, the greater the disproportion. There was a crassly widening gulf between the often grotesque salaries, bonuses and equity stakes of top executives of big companies and financial institutions, and the earnings of the vast majority of the employees of such concerns. Those who were most skilled at exploiting financial markets were best placed of all to earn sky-high incomes.

At the other end of the spectrum was a new proletariat, earning poor wages in precarious forms of employment, often living in sub-standard accommodation, with little or no surplus from their earnings and disproportionately prone, not surprisingly, to falling into debt. Women, often compelled to combine family commitments with meagre earnings potential from part-time or insecure employment, were particularly disadvantaged. So were the unskilled, the poorly educated, those who lacked the requisite qualifications in literacy and numeracy. Notably disadvantaged were immigrant or seasonal workers, driven to accept low-paid, insecure and unattractive jobs and poor housing conditions while also frequently having to contend with overt or more subtle forms of discrimination. As globalization provided a reservoir of immigrant and transient labour to meet the swelling demand, less scrupulous employers were able to push their labour costs down. This in turn alienated trade unions and the workers they represented, who felt migrant labour was depressing their wages.

Globalization worked heavily in favour of big business, while small concerns often struggled. Big supermarkets, for example, could control food markets through massive bulk buying. Small food stores, unable to compete, went to the wall in droves. Bookselling was another branch that favoured large-scale operations. Small bookshops, unable to match the holdings, marketing capacity or discount possibilities of the big booksellers, frequently went out of business. Even some big concerns faced major problems in competing against Amazon, which had begun as an online bookstore in the United States in 1994 before expanding throughout Europe, using computer technology to revolutionize book availability and speed of delivery (and within a number of years diversifying its output to a huge range of other products).

Deregulation of finance encouraged the transfer of capital to wherever it offered the highest returns. ‘Hot money’ could flow across borders in an instant with little or no constraint. Financial markets were global, no longer subject to restrictions imposed by national governments. Speculation in financial markets could swiftly bring extraordinary riches – or crushing losses, such as followed the collapse in 2001 of the ‘dot-com bubble’ in the wake of huge, but hazardous, investment in new firms established to exploit the rapid growth of the internet sector. Those who accumulated new-found wealth, or had inherited it, were able to increase it by lodging it securely and unobtrusively in banks outside the country of their residence, where they could take advantage of very low rates of taxation. Luxembourg, Switzerland, Andorra, the Channel Islands and the Isle of Man offered such provision within Europe.

Not surprisingly, then, in the boom years of globalization between the mid-1990s and 2008, not just income but wealth disparities widened sharply. Property-owners saw their wealth increase effortlessly as the price of property soared. Many belonged to a middle class of relatively modest income but owning houses whose value had increased exponentially. In some of Europe’s major cities – London was the prime example – rich foreign investors bought much of the most desirable property. The majority of ordinary citizens, however, found themselves priced out of the market. Young people especially, unless they inherited wealth, often had no hope of ever earning enough to buy even the most modest family home. Unsurprisingly, resentment simmered.

Crass inequality of income and wealth was less acute in the Scandinavian countries. These had traditionally favoured higher taxation and more even social distribution than Britain, which aimed to follow more closely the American neo-liberal model of a low-taxation, highly deregulated economy. Most continental Western European countries – France, Germany, Italy and the Benelux countries prominent among them – had not followed the Scandinavian route, but had nonetheless developed strong political traditions since the Second World War of tempering the market through social welfare policies. These also tended to mitigate, to differing degrees, growing income inequality, which was far more marked in the former eastern-bloc countries and also in much of Southern Europe. In the ‘good years’ of the late 1990s and early 2000s the growing inequalities were, if recognized, largely ignored or regarded simply as the price to pay for the wider benefits of globalization. Should the ‘good years’ run out, however, the potential for social unrest and political challenges to the existing system was evident.

Economic growth and rising income, while self-evidently important, did not entirely accord with how people assessed their ‘quality of life’. Differing statistical indicators attempted comparative assessments of what was plainly a complex and highly subjective concept. Criteria included economic well-being, political liberties, levels of employment, and stability of family and community. Whatever caveats might be attached to attempts to quantify ‘quality of life’, the results obtained by one of the most sophisticated evaluations, undertaken in 2005 by the London-based magazine The Economist, offered some indication of the place of Europe in world rankings. Top of the charts was the Republic of Ireland, where doubtless the recent transformation in the standard of living and the rapid growth in the country’s economy had been decisive. Western European countries generally fared well. Nine of the top ten countries surveyed across the world were in Western Europe, though France, Germany and Britain lagged some way behind – possibly an indicator that ‘quality of life’ was less easy to build and sustain in large, complex and varied economies. Most of the Central and Eastern European countries fell well behind Western Europe, some (including Bulgaria, Romania, Serbia and Bosnia) were placed still further back, while Ukraine, Belarus, Moldova and Russia entered the table lower than Syria and not too far ahead of the last countries in the list, Nigeria, Tanzania, Haiti and Zimbabwe.

Major regional as well as national disparities in wealth had always existed in Europe (not to mention between Europe and other parts of the globe, such as Africa or South America). How they were affected by globalization varied. Political stability, existing infrastructure, the quality of educational systems, and flexible social values offered preconditions in which globalization was likely to make a positive impact. Western Europe for the most part provided such preconditions. Some Mediterranean countries that had earlier lagged behind now took big strides to catch up. Spain and Portugal had higher growth rates than the core of Western Europe, while Ireland, once so backward, turned into a Western ‘tiger economy’. Finland, too, after undergoing severe recession in the early 1990s, recovered strongly, especially after joining the European Union in 1995, and developed into a leading exporter of electronic equipment, exploiting the rapidly expanding demand for mobile telephones.

But there were losers from globalization, too, even in relatively prosperous Western Europe. The considerable financial assistance provided under the European Union’s European Regional Development Fund helped to mitigate some of the worst regional disparities. Nevertheless, the long-standing structural problems of some areas were impossible to overcome. The age-old disparity between the poor regions of the Mezzogiorno and far wealthier northern Italy widened as the north proved much more attractive to foreign investors. Even in prosperous Germany, there were major divides between Bavaria and Baden-Württemberg in the south, flourishing regions that could attract the booming new technologies and were centres of car manufacture, and the old industrial region of the Ruhr in the north-west, or the relatively poor, largely agricultural region of Mecklenburg in the north-east. In Britain the old industrial regions of the north-east and north-west, Clydeside in Scotland and the Welsh valleys, could not make up for the long-term decline of their staple heavy industries, while London and the south-east, buoyed by the growing dominance of the financial sector in the City of London, thrived. Northern Ireland showed the importance of political stability to inward global investment and prospects of prosperity. The province had languished for three decades during ‘The Troubles’; the end of the violence in 1998 brought much-needed growth.

Where there was political instability or poor infrastructure – Romania, for example, had only five personal computers per thousand inhabitants in 1995, compared with 250 in Western Europe – or where there was widespread corruption and a poorly educated population (with Romania again a paradigm case), countries struggled to take advantage of globalization. By the year 2000 gross national product per head in Central and Eastern Europe remained only half that of Western Europe. Central European countries were themselves pulling away from those in the Balkans and those closely tied to Russia. Political stabilization and infrastructural reforms meant that globalization, partly through inward investment that could take advantage of low labour costs, enabled Central European countries to improve their economic position and make some headway towards catching up with Western Europe in the early years of the new century. Russia, too, started to recover. High energy prices and rich reserves of oil and natural gas helped the country emerge from the doldrums of the 1990s as the economy grew at 7 per cent a year between 2000 and 2008. The reinvigoration was helped by moves under President Putin to restore state control and regulation of important parts of the economy, to provide strong government (demonstrated by popular steps to depose, and even on occasion imprison, some of the most corrupt oligarchs), to eliminate some of the worst aspects of Russia’s kleptocracy, and to encourage inward investment. Inequality remained extremely high in the country, though it had, at any rate, stopped widening.

The massive economic stimulus that flowed from globalization was, not least, damaging to the environment, adding greatly to the dangers of pollution and global warming. But the need to meet rising expectations of higher living standards usually meant that environmental protection enjoyed a lower priority than continued economic growth. The fear of being outstripped in the race for growth played its part. The rapid expansion of globalization was unstoppable. Countries that did not move to embrace and adapt to it as swiftly as possible were left behind.

During more than a decade of dynamic economic growth from the mid-1990s the difficulties seemed manageable. But what problems would arise from globalization if the financial institutions that underpinned it were suddenly to be thrown into turmoil? No one gave the prospect much consideration. The growth that had set in during the mid-1990s seemed certain to continue indefinitely. The British Chancellor of the Exchequer, Gordon Brown, repeatedly claimed in the years after Labour had taken power in May 1997 that he had produced lasting stability in the British economy and that there would be no return to ‘boom and bust’. The words would soon haunt him. But Brown was far from alone in failing to foresee that the very engines of global growth were driving the instability which would threaten it, that the globalized economy was heading straight for the edge of the cliff.

POLITICAL CHALLENGES OF GLOBALIZATION

How European governments met the challenges of globalization depended heavily upon national circumstances. But three general problems were plainly visible even in the boom years.

The first arose from hugely intensified economic competitiveness. This led to great pressure to lower wages, sustain high rates of employment, keep inflation down (helped by the low costs of imported goods from China’s booming economy), and reduce the tax burden. It was often dubbed a ‘race to the bottom’. The ease of international capital transfers meant that high tax regimes and forms of protectionism that had worked in the past were unsustainable. Governments had to exploit globalization while finding ways at the national level of combating its harmful side effects. How to balance these problems with maintaining social cohesion, upholding the civilized values that Europe’s democracies saw as their very essence, and sustaining high levels of social welfare in the face of increased expectations and an ageing population, was a major challenge to all governments. None found easy or entirely palatable solutions.

A second problem was the impact on the home population of increasing migration, as people from poorer economies exploited the possibilities of moving to higher-wage economies where vibrant growth produced a great demand for their labour. This was a scale of mobility that had been impossible to foresee when the Single European Act of 1986 paved the way for the Single Market. Attempts, in varying ways, to integrate migrants and develop multicultural societies frequently gave rise to social tensions and promoted political fragmentation by fostering the appeal of ‘identity politics’ represented by minority parties. The problem, far from new, would become more serious in the second decade of the twenty-first century. But, often beneath the surface, population migration and multiculturalism were viewed with deepening concern even in years of relatively buoyant economies and global growth.

The third serious problem, of increasing importance following the Iraq War and the attacks in Madrid and London, was the threat of terrorism. Spain and the United Kingdom had long experience of dealing with localized terrorism. The terrorism of ETA and the IRA had been deadly and of long duration. But, silently acknowledging that their goals could not be attained through armed struggle, both organizations edged towards political rather than military operations. The ‘Good Friday Agreement’ of April 1998 marked a crucial point in the process of bringing to a close a tragic thirty-year phase in the history of Northern Ireland, which had caused the deaths of about 3,500 people. The Basque separatist struggle in Spain had cost the lives of around a thousand individuals since the 1960s. But here too terrorist violence was in decline. After a number of truces and a ‘permanent’ ceasefire in March 2006 proved to be temporary, lasting only until December, ETA would announce a ‘definitive cessation of its armed activity’ in January 2011. IRA and ETA terrorism, deadly though it was (and the number of its victims was very large, compared with that caused by Islamist terrorism in Western Europe during the first decade of the twenty-first century), had been specific in its goals and localized in its implementation. Islamist terrorism was a different matter altogether. This would become a more acute issue in the second decade of the twenty-first century. But already it placed heavy demands on the security services of individual countries and on the development of closer cooperation through the networking of intelligence across the Western world to combat an obviously growing menace.

The dramatic changes experienced in Europe at the start of the 1990s had fed into an intangible readiness in Western European countries to embrace substantive reform. The aim of undertaking major structural reform of European institutions had been behind the Maastricht Treaty in 1992. Individual states, too, had to adapt to changed circumstances. The need to ‘modernize’ in a ‘new Europe’ became a political mantra. In much of Western Europe this favoured electoral shifts towards social democracy. Britain elected the Blair government with a landslide majority in 1997. Gerhard Schröder became German Chancellor in a coalition government with the Greens the following year. The French had also moved towards the left in the parliamentary elections of 1997. Social Democrats formed the predominant force in coalition governments in the Netherlands, Sweden, Denmark, Austria, Italy, Portugal and Greece in the later 1990s. The trend was general, though there were exceptions – Spain, for instance, turning towards conservatism in 1996 after a lengthy period of socialist government. But far from establishing a long-term shift in political allegiances towards the centre-left, social democracy had entered upon what proved to be an Indian summer. It found itself generally on the retreat in the early years of the twenty-first century.

A reverse trend towards centre-right conservatism was soon characteristic of an altered mood in Western European electorates. Between 2001 and 2006 Social Democrats lost power in France, Germany, the Netherlands, Portugal, Finland, Denmark and Sweden. Again, there were exceptions to the trend. Italy, where the ineffable Berlusconi returned to office at the head of a right-wing coalition in 2001, moved back towards the centre-left in parliamentary elections in 2006, while Spain had elected a socialist government two years earlier following the terrorist attack in Madrid.

The two paradigm countries of new-style social democracy as Europe entered the twenty-first century were Britain and Germany. Both the governments of Tony Blair in Britain and of Gerhard Schröder in Germany offered what in their early years seemed a welcome advance on the tired policies of their predecessors. But the path of reform that both took, attempting to combine pro-market policies with remodelled notions of social justice, proved highly controversial – not least among the supporters of their own parties. By 2005 both the Labour Party in Britain and the Social Democrats in Germany were seeing their support evaporate.

Blair’s promise to modernize the country under ‘New Labour’ (as he was now calling his party) had in 1997 sounded attractive to millions of voters. But his great electoral victory that year owed not a little to a negative factor – voters turning their backs on the divided and ineffectual Conservative government of John Major. Blair and his advisors had recognized that it was no longer possible to win sufficient electoral support by adhering to traditional Labour policies. De-industrialization had brought fundamental changes to the working class. Trade unions – the backbone of Labour – were far weaker than before the Thatcher era. And the class rhetoric of bygone years seemed outdated as individual consumer habits and lifestyles crossed class boundaries. So Blair set out to win over ‘Middle England’ – middle-class voters from far beyond Labour heartlands.

The programme of Blair’s government attempted to blend social democracy with neo-liberal economics. Blair’s critics derided it as Thatcherism with a human face. It broke with some long-standing Labour traditions and aims, for which many in the party faithful never forgave him. Equality of opportunity replaced the elimination of material inequality as Labour’s goal. The commitment to nationalization of the economy, in the party’s programme since 1918, was discarded. Instead of ‘inefficient’ public ownership, New Labour looked to control and utilize the wealth creation of a competitive free-market economy to provide a framework for social justice.

Under New Labour there was strong economic growth – which had already begun under the Conservatives in the mid-1990s (themselves benefiting from the upward swing in the global economy). Aided by further deregulation, the City of London consolidated its position as Europe’s (and by some measures the world’s) financial capital. Gordon Brown, an astute Chancellor of the Exchequer, made funds available to enable Blair’s government to finance much-needed improvements in schools, universities and hospitals. Many among the poorer sections of society certainly benefited. Changes in taxation and welfare benefits saw the incomes of the poorest rise by 10 per cent. Child poverty was reduced. And as the economy continued to thrive, there was a pervasive sense of material well-being among much of the middle class.

But a lot of this depended on a consumer boom, financed mainly by the availability of cheap credit that in turn fuelled high levels of personal debt. Inflation in property prices, too, pleased home-owners while inexorably widening the gulf between those who possessed property and the many who could never afford to acquire any. Under New Labour the rich got richer. One of the masterminds behind the party’s rebranding, Peter Mandelson, had said in 1998 that he was ‘intensely relaxed about people getting filthy rich as long as they pay their taxes’ (which many managed to avoid doing). But the vain hope that wealth would ‘trickle down’ from top to bottom of the social ladder proved misplaced.

Blair’s legacy included the devolution in 1998 of significant powers from London to a Scottish parliament and a Welsh assembly. His most important single achievement (building upon the substantial progress already made under his predecessor, John Major) was to broker the Good Friday Agreement of April 1998, which drew a line under the violent conflict between Republicans and Unionists in Northern Ireland. Despite these lasting successes and whatever the material benefits from economic growth under New Labour, the Iraq War cast a deep shadow over Blair after 2003.

It put New Labour firmly on the defensive. Many on the centre-left, completely alienated by the Iraq War, drifted to the Liberal Democrats, while others who had earlier turned away from Conservatism returned to their traditional habitat. Nevertheless, Blair won his third election in a row in May 2005, a unique record for Labour. His personal magnetism had not been altogether eroded. More important was the continuing strength of the British economy. But the positive result for Labour could not hide the fact that the party’s popularity was waning. It had won only 35 per cent of the popular vote – the lowest proportion ever attained by a majority government in Britain – and Labour’s parliamentary majority fell by almost a hundred seats.

The attacks in London in July 2005 were a searing reminder of the dangers for Britain that the Iraq War had intensified. Blair’s response was to propose new security measures. In this, however, he encountered much popular opposition; in the eyes of many the proposed measures threatened to undermine British liberties. When his government pressed for new anti-terrorist laws that would have extended the period of detention without charge from fourteen to as long as ninety days, forty-nine Labour Members of Parliament were among the majority in the House of Commons that inflicted Blair’s first parliamentary defeat since coming to office in 1997. (It was eventually agreed to increase the period of detention without charge to twenty-eight days.) Pressure built within the leading echelons of the Labour Party, especially among supporters of Gordon Brown, for Blair to step down. And, sure enough, in June 2007 the most successful leader in Labour’s electoral history resigned as Prime Minister, leaving Parliament soon afterwards. The Iraq War had lastingly sullied his reputation. His notable achievements as Prime Minister were as a consequence widely overlooked or played down.

Gerhard Schröder did not have the luxury of the large majority that the British ‘first-past-the-post’ electoral system presented to Blair in 1997. The Social Democrats won the German election the following year, though with only slightly more of the popular vote than the Christian Union parties. The government Schröder was able to form, in coalition with the Greens (who were backed by a mere 6.7 per cent of the electorate), nonetheless embarked on an ambitious programme of social reforms. These included tax changes to prioritize clean energy, a law to end discrimination against homosexuals, and a significant alteration of citizenship law that in 2000 made residence, not ethnicity, the main criterion. But the serious economic problems inherited by the Schröder government posed a big challenge.

Remarkable as it would seem only a few years later, Germany was described by The Economist in June 1999 as ‘the sick man of Europe’. Economic growth, the magazine wrote, was lower than in the rest of the newly created Eurozone, unemployment remained stubbornly high, German exports had declined as its big markets in Asia and Russia had collapsed, and the continued costs of unification remained a burden. The morale of business leaders was poor and they feared the worst from the new left-leaning government. The analysis outlined fundamental problems that needed structural surgery to bring about economic revival. It offered neo-liberal remedies. Levels of corporate taxation were far too high, and were hampering investment. They had to be cut if German firms were not to move their operations to Central and Eastern Europe (which some were in fact doing). Germany was still ‘smothered in regulations’. Commerce had to be deregulated to stimulate consumer spending. Above all, Germany’s labour costs were too high, relative to what was produced, and welfare costs had swollen, encouraging firms to shed workers and add to the unemployment numbers. Germany, the article argued, had to ‘restructure in response to globalization’ by enacting ‘radical structural reforms’. It was necessary to slash top rates of corporate and income tax and ‘defuse Germany’s welfare time bomb’ by cutting benefits, encouraging private pension provision, deregulating services and speeding up privatization. Unless these structural reforms were undertaken, the article concluded, ‘Germany is unlikely soon to shed its title as the sick man of Europe.’

Social democracy had, in other words, to make the economy globally competitive without undermining the welfare provisions that had been built up over the years to protect citizens and improve their lives, yet were now proving costly and restrictive to economic enterprise. Blair’s government, which Schröder admired, was seeking in its own way to address the issue in Britain. But at least Blair had the advantage over Schröder that he could build upon the incisive changes to the economy (and inroads into the welfare state) that had already been made by the Thatcher government. Nothing comparable had been done in Germany. Schröder had to try to hold the left together while introducing reforms that were bound to be unpopular across wide sectors of his own party.

Almost immediately his modernizing aims fell foul of his Finance Minister and the Chairman of the Social Democrats, Oskar Lafontaine, who favoured a traditional programme that looked to Keynesian remedies for Germany’s economic ills. These seemed, however, in arguing for stimulation of demand through higher wages, increased social spending and low interest rates – all of which would have meant higher public debt – to offer solutions from an earlier era that were out of tune with current needs. In March 1999 Lafontaine resigned his government and party positions. Schröder emerged the clear victor from this internal test of his authority and his policy direction.

His real problems grew from the announcement in 2003 of what was labelled ‘Agenda 2010’ – the programme to reform labour relations and social welfare in order to reduce unemployment and promote economic growth, which had been no more than 0.1 per cent in 2002. Social security contributions by employers and employees were meanwhile taking up on average over 40 per cent of gross salaries. ‘Either we modernize,’ declared Schröder, ‘or we will be modernized, and by the unconstrained forces of the market.’ Agenda 2010 was an attempt to align changes in welfare with the needs of a globally competitive economy – and was predictably highly unpopular. It bore some resemblance to what was happening in Britain under Blair (who in turn had been inspired by the example of President Clinton in the United States). In the interests of greater economic flexibility and competitiveness, adjustments (in reality cuts) were made to unemployment benefits, sickness payments and state pensions. It was made easier for firms to make employees redundant. The changes amounted to the greatest inroads in German social security since the establishment of the ‘social market economy’ over half a century earlier.

These were welcomed by business and the liberal-conservative right. But they were detested on the left. Gradually, the reforms did indeed help to reinvigorate the German economy, partly by reducing the proportion of wages and salaries in the gross domestic product. But they were not unalloyed improvements. Unemployment was soon falling. However, as in Britain and elsewhere, this disguised a rise in part-time, temporary, and other forms of precarious work that people were effectively obliged to accept. The numbers of people living in poverty increased. Inequality of incomes grew. While wages and pensions were held down, the salaries of top business managers soared.

Schröder’s popularity never recovered. In September 2005 he paid the price for his reform programme in electoral defeat. Even so, the parties of the Union (the Christian Democrats and their Bavarian partner, the Christian Social Union) won only narrowly. Forming a coalition from left- or right-wing groupings proved impossible. All that remained was a ‘grand coalition’ of the Union and the Social Democrats. Under the new Christian Democrat Chancellor, Angela Merkel, the government followed broadly the economic direction that had been laid down by Schröder. The depiction of Germany as ‘the sick man of Europe’ soon looked bizarre. But German social democracy, in the eyes of increasing numbers of voters, had meanwhile come to seem little distinguishable from its conservative coalition partners. The main parties, not just in Germany, were starting to look alike. In the long run that was not good for democracy.

THE EUROPEAN UNION’S CHALLENGES

Blair and Schröder were wrestling with problems that, whatever their national nuances, flowed from the acceleration of globalization and affected the whole of Europe. For the European Union this meant adapting its structures to meet the challenges that arose from the decisions in the early 1990s to introduce the Euro and to widen the EU to incorporate countries from Central and Eastern Europe. Both decisions would lead to new difficulties.

With the accession of ten new countries on 1 May 2004, the European Union increased its membership from fifteen to twenty-five countries. Eight of the new entrants (the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Slovakia and Slovenia) had once stood behind the Iron Curtain. The ninth country, Cyprus, despite being split into two parts since the Turkish invasion of July 1974, gained admission after Greece had threatened to veto the accession of the former communist states. Its political problems remained unresolved. Malta was the tenth new entrant – tiny, politically split (its Labour Party opposed joining), with a modest gross domestic product, but under its Nationalist liberal-conservative government keen to benefit from the economic advantages that membership of the European Union would bring.

In geopolitical terms the widening was welcome. But the economic imbalance of the European Union was now a problem since the new countries were far poorer than existing members. The per-capita gross domestic product of the new entrants in 2004 was less than half of that of the existing members. Estonia and Slovenia were best placed among the former communist countries. But Poland, the largest country in Central Europe, had a lower than average gross domestic product even among the new entrants. Slovakia’s was lower still. The average wage in Latvia was only an eighth of the average in the pre-2004 member states.

The economic disparities within the European Union became an even bigger problem when Romania and Bulgaria were admitted in January 2007. Their gross domestic product per head was only a third of the average of existing member countries (an average already brought down by the last round of new entrants). Neither country came close to meeting the criteria for membership that had been agreed at Copenhagen in 1993. They were far from models of liberal democracy or the rule of law. Corruption and organized crime were still rampant. Former communist functionaries dominated the political scene and ran the security services. And in economic terms, both countries languished at the bottom of the prosperity league table of members of the European Union. Despite their evident deficiencies, they became the European Union’s twenty-sixth and twenty-seventh member states. Following the Yugoslavian wars it had been felt imperative to stabilize the European ‘periphery’. The hope was that, once the countries had been admitted, political and economic reforms would be accelerated.

Understandably, many from Eastern Europe sought to improve their living standards and those of their families by seeking work in wealthier Western European countries. Just as unsurprisingly, there was concern in Western European countries, notably in Germany and Austria which bordered the new entrants, about the impact on the labour market of an influx of cheap workers from Central Europe. The principle of free movement of citizens across national borders had not been a significant issue when the European Union consisted of countries that had attained a very broadly similar level of economic development. Now it started to be questioned. In 2001 the European Union had allowed member states to restrict access to the labour market for the expected migrants from Central and Eastern European countries for an interim period of up to seven years to provide time for adjustment. The United Kingdom, Ireland and Sweden were the only countries not to introduce such restrictions in 2004. Sweden alone did not do so when Romania and Bulgaria were admitted three years later.

The numbers of migrants from Central and Eastern Europe seeking work in 2004 were higher than predicted even in countries that imposed restrictions. But the restrictions did act as something of a deterrent. Those imposing no restrictions were, by contrast, particularly attractive to migrants. Britain’s thriving economy made it a magnet. The British government anticipated an influx of some 15,000 people a year from the new member countries. Between May 2004 and June 2006, however, 427,000 work applications were approved, more than half of them for Polish migrants. In 2001 Poles in Britain numbered 58,000. A decade later the figure was 676,000. In a very short period of time Poles became Britain’s largest body of foreign citizens.

The number of migrants to Britain from all eight former communist member states rose continuously between 2004 and 2007. The trend then subsided somewhat during the subsequent economic recession, when many – often young – migrants returned to their countries of origin. Because of the imposed restrictions (due to last until 2014), the numbers of Romanians and Bulgarians granted work permits in the United Kingdom averaged only around 25,000 a year after 2007, though this formed part of a continuing steep rise in overall immigration from both inside and outside Europe.

The influx of migrant labour, most analyses concurred, was broadly beneficial for the British economy. Estimates varied considerably, depending upon the basis of the calculations, but some suggested that European migrants contributed in one form or another around £20 billion to the British economy in the first decade of the twenty-first century. In crucial areas they were indispensable. The National Health Service could scarcely operate without migrant labour; almost a fifth of those it employed were from outside Britain. The predominantly young and frequently well-educated migrants, drawn to work far from home, filled labour shortages, often in low-skilled jobs, and made relatively few demands on welfare support. However, complaints soon arose – and did not subside – about downward pressure on wages and difficulties in housing and social services in areas with high concentrations of migrants. Perceptions were frequently not in accord with realities. But perceptions became a form of reality themselves. The speed and scale of the migrant influx from the European Union quickly turned this into a political issue of growing importance. Shrill opposition to seemingly unstoppable high levels of immigration – some of it thinly veiled racism, much of it fostered by the right-wing media – became more vocal, and not only on the political far right.

‘Immigration’ in Britain (unlike the terminology in most of Europe) bracketed together people coming to Britain from the European Union and immigrants from outside Europe (often from countries of long-standing immigration to Britain, especially Pakistan and India). ‘Immigration’ also included rising numbers of young people both from within and – three-quarters of them – from outside the European Union coming to the United Kingdom to study. A minority of them, mainly non-EU nationals, remained after completing their studies, generally offering much-needed skills and expertise.

There was a crucial difference between the categories of migrants from within and outside the European Union: freedom of movement meant that no limitation was possible on the number of migrants coming from the EU. Migrants from those countries comprised on average just under half of total net immigration (which over subsequent years would average close to 300,000 people a year). That made migration from the European Union, within the wider framework of increasing opposition to immigration, a particularly sensitive political issue.

This was a feature of British immigration that was not generally mirrored in other countries of the European Union. The widespread use of English globally helped to make Britain uniquely attractive. Migration was, however, in every country a fact of modern life – an inexorable by-product of globalization. European countries such as Italy or Ireland, which before the Second World War had exported people, especially to the United States, had now become countries of immigration. It was easier to move for work (or to seek refuge from war and tyranny) than it had once been. The movement of people in search of a better life was a general phenomenon throughout Europe.

By 2010 the European Union included 47 million people (9.4 per cent of the population) who had been born outside their country of residence. Germany, France, the United Kingdom, Spain, Italy and the Netherlands had in that order the largest numbers, measured in absolute terms (ranging from Germany’s 6.4 million to the 1.4 million of the Netherlands). As a proportion of the total population Austria (15.2 per cent) followed by Sweden (14.3 per cent) headed the list. These proportions were even higher than that of the United States, the traditional target for immigration. In every member state apart (marginally) from Belgium, the proportion of residents born in a non-EU state was higher than the proportion born in another country of the European Union.

As in Britain, migrants elsewhere frequently met with hostility – witness the highly negative attitude towards the rapidly increasing numbers of Romanians in Italy (ten times higher in 2008 than they had been seven years earlier). Age-old racist attitudes towards Roma played no small part in the aversion to Romanian migrants. In Austria, where immigration had continued to increase despite much restrictive legislation, antipathy was directed heavily towards those coming from former Yugoslavia and Turkey, traditionally the largest sources of migrant labour.

Most hostility was directed towards migrants from outside the European Union, particularly those from different cultures and often specifically towards Muslims – whose families had been domiciled in Europe for decades and were by now sometimes in their third or fourth generation. Tolerance towards Muslims, especially, was in sharp decline – partly spurred by the growth of Islamic fundamentalism. This was mirrored by a growth of anti-Western feeling, greatly exacerbated by the wars in Afghanistan and Iraq, among Muslims in Europe. In big cities especially, deep resentments among young Muslims were growing. A sense of discrimination and economic deprivation, of alienation and intense anger at Western intervention that had brought such suffering to Muslims in the Middle East, encouraged those at home to define their own identity in distinction to that of the majority populations among whom they lived. A small minority, particularly among disaffected young people, were drawn by the allure of Islamist causes. Amid intensifying mutual antipathy the strenuous efforts made by politicians and community leaders to promote multiculturalism and integration faced an uphill struggle. Communities, far from integrating, seemed to be growing further apart. Multiculturalism increasingly described communities divided by what were in practice near-irreconcilable cultural differences – not integrating, but merely existing uneasily side by side.

Sometimes the tension broke out into violence, as in the anti-Muslim riots in 2001 in several poor northern British industrial towns. More generally, the antagonism simmered beneath the surface. In France there was much antipathy towards Muslims of North African origin, many of whom had lived there since the Algerian War nearly half a century earlier and whose families had actually been French citizens even before that. Serious riots in 2005 in socially deprived parts of French towns and cities with a large immigrant population fostered anti-Muslim feeling. Animosity was growing, too, in other EU countries such as the Netherlands, and in the non-member state of Switzerland. Right-wing political parties with a strident anti-immigrant (and anti-Muslim) platform were gaining support in many countries. Even though they could not yet capture the political mainstream, their message sometimes became incorporated in the demands of establishment parties to restrict immigration.

In this climate, any ambitious future enlargement of the European Union was in practice suspended, even if not discarded in theory. Croatia, following earlier agreement, joined in 2013. Its gross domestic product was by then higher than that of some existing member states. It was nevertheless allowed to join despite continued extensive organized corruption and criminality. Political considerations were once more decisive. It was felt important to send encouraging signals to Balkan states. But ‘Catholic’ Croatia had long been viewed as more Western than other Balkan countries. In contrast, Albania, Macedonia, Montenegro and Serbia would have to wait more or less indefinitely, while there was little prospect of Kosovo or Bosnia-Herzegovina (where the tensions of the 1990s had subsided but far from disappeared) joining in the foreseeable future.

The largest country on the waiting list was Turkey, a member of the Council of Europe since 1949, of NATO since 1952, and a recognized candidate for EU membership since 1999. Following limited improvements in civil rights and political freedom, Turkey was actually said in 2004 to have fulfilled the entry criteria. The process of negotiating Turkey’s entry began in 2005, only to be suspended a year later over the failure to resolve the thorny issue of the division of Cyprus. Germany, France and, especially, Britain strongly supported Turkey’s membership, predominantly because of the country’s strategic significance as a bridge between Europe and the Middle East. Austria, the Netherlands and Denmark led the opposition. One objection was that Turks did not ‘culturally’ belong in Europe. Such a large Muslim country of 70 million, which still lagged far behind acceptable standards of liberal democracy and the rule of law, would inexorably, the critics of Turkey’s accession argued, alter the character and balance of power of the still overwhelmingly (if largely nominally) Christian European Union. There were also substantial fears about large numbers of Turkish migrants seeking work in far more prosperous Western European countries, significantly adding to the existing problems of absorbing migrants and retaining social and political cohesion.

Turkey remained after 2006 a country waiting to join. In practice, the prospect was receding fast, and would recede even further in coming years. And as the prospects dimmed, Turkey itself was moving gradually away from the secularism that Atatürk had prescribed as the basis of the country’s identity at its foundation in 1923, and instead in an Islamist direction in which national identity was closely bound up with religion. How far the EU’s rebuff contributed to this shift, and how far it was the inexorable consequence of Turkey’s internal development, is unclear. The result, in any case, was that Turkey came to be seen less and less as a candidate for joining the EU.

The European Union was meanwhile facing significant structural problems that were in no small measure a consequence of its extension. In 2002 a European Convention, headed by the former French President Valéry Giscard d’Estaing, had met in Brussels to draw up new constitutional arrangements for a European Union that was on the verge of major expansion. After lengthy wrangling over its precise terms, the text of a treaty establishing a Constitution for Europe was finally signed by all twenty-five member states of the by now enlarged European Union on 29 October 2004. The draft constitution modified arrangements for qualified majority voting, and provided for the President of the Commission to be elected by the European Parliament and for an elected chair of the European Council to replace the existing six-month rotating chairmanship. The Parliament had to approve the budget and would have legislative powers alongside the Council. There would henceforth be a European Minister for Foreign Affairs.

The changes were far from the radical steps towards a federal Europe favoured by the German Foreign Minister, Joschka Fischer. They nevertheless went too far for some: in spring 2005 voters first in France and then in the Netherlands rejected the proposals. With that, the Constitution was dead. Some of the more significant changes were nevertheless incorporated, in amended or watered-down form, in the Treaty of Lisbon of 2007. This was itself ratified only after Irish voters had at first rejected it and then, once a number of opt-outs for Ireland had been introduced (guaranteeing that the Treaty would not infringe Irish sovereignty regarding taxation, family policy and neutrality), accepted it in a second referendum.

Despite such shocks for pro-Europeans, there was, in fact, much positive feeling towards the European Union across the continent. According to the standard Eurobarometer survey of opinion in 2000, only 14 per cent of citizens disapproved of their country’s membership of the European Union whereas 49 per cent approved (though this number had fallen worryingly from 72 per cent in 1991). The highest satisfaction rates were in Ireland, Luxembourg and the Netherlands, the lowest in the United Kingdom. Forty-seven per cent of Europeans thought their country had benefited from EU membership, again a significant drop since the early 1990s, with the most favourable ratings in Ireland and Greece, the least in Sweden and, at the bottom of the table, again the United Kingdom.

To many Europeans the European Union seemed labyrinthine, impenetrably complex and elitist – a bureaucratic organization remote from their daily lives. National governments directly or indirectly enhanced this image. They did little to advertise, for instance, the substantial funding for poorer regions or infrastructural projects from the European Union. This funding was not enough to restore areas suffering badly from post-industrial blight to their former prosperity. But used wisely it could make a difference. National governments, however, were only too pleased to trumpet economic and political successes as their own, while conveniently blaming ‘Brussels’ and EU bureaucratic interference to divert attention from failures closer to home.

Whatever the reasons, as it widened its membership and intensified its efforts towards closer as well as more extensive integration, the European Union was losing touch with large numbers of Europeans. Every election to the European Parliament since 1979 had shown a further decline in the number of citizens bothering to vote. In 1979 participation had been 62 per cent. By 2004 it was down to 45.5 per cent. In 2004 itself, a year of economic growth, when the European Union was preparing its new Constitution and about to undergo its greatest single moment of expansion, 43 per cent of Europeans, asked how they would feel should the European Union collapse the following day, said they were indifferent. Thirteen per cent even said they would be ‘very relieved’. Only 39 per cent indicated that they would very much regret it. What opinion surveys plainly demonstrated was that people’s own nation was overwhelmingly the strongest point of identity. By contrast, emotional association with a European identity was extremely weak.

The European Union could nevertheless point to significant achievements. A framework of international cooperation, the extension of the rule of law, the upholding of human rights, the establishment of a security network, and the creation of a single currency for a majority of member states, had all helped to widen prosperity and to dilute the nationalism that had once poisoned Europe, to strengthen civil society and to build solid democratic foundations.

Beyond the borders of the European Union and the countries in Central and Eastern Europe that aspired to belong to it (and were meanwhile part of an expanded NATO) it was a different story.

THE ‘PUTIN FACTOR’

During the 1990s it looked as if Russia under President Yeltsin was moving closer to the Western democracies. In 1996 the country became a member of the Council of Europe, signing up to the European Convention on Human Rights, and the following year reached an agreement on partnership and cooperation with the European Union. Hopes were expressed in Moscow at this time of Russia in due course becoming a full member of the European Union.

Much, however, stood in the way of closer integration. The question of human rights was one obstacle. After Chechnya’s attempt to establish its independence in 1991, Russian troops had perpetrated serious violations of human rights between 1994 and 1996 and again in 1999–2000. Another obstacle was the deep unhappiness in Moscow at the expansion of NATO into parts of Eastern Europe – itself a plain sign of Russia’s weakness. The climate started to change once Putin had replaced Yeltsin as President of the Russian Federation on the last day of 1999. From then onwards there was a consistent emphasis on Russian national values and invocation of the country’s status as a great power. Putin started to put an end to the widespread feeling of humiliation at the drastic diminution in the standing of the country after the collapse of the Soviet Union, to give people back their proud Russian identity, and to make them believe in the country’s future and a return to former glory.

His forceful advocacy of Russian interests in international dealings, especially with the United States, and his readiness to uphold them by military force if necessary, enhanced Putin’s prestige at home. His popularity was boosted when, in August 2008, Russian armed forces entered Georgia (independent since 1991) to support the pro-Russian rebels seeking independence for the provinces of Abkhazia and South Ossetia.

The shift towards authoritarianism bothered Moscow intellectuals but not the masses in distant provinces. After the collapse under Gorbachev and the national weakness under Yeltsin’s unstable governance, the great majority of Russians supported Putin’s restoration of strong state authority. For some he was little less than a national saviour. That Russia’s economy was able to recover strongly by exploiting high market prices for oil and gas helped the sense of a new start, even if the underlying serious economic problems and relative poverty of large sections of the population were far from overcome. Corruption remained endemic, but most Russians put up with it as long as their standard of living was improving. The facade of a democratic system was retained. But presidential power was reasserted, former KGB associates were granted increased political influence, the judicial system was subordinated to political imperatives, the mass media brought under control, public opinion orchestrated, the potential for opposition restricted, and over-mighty oligarchs seen to pose any political threat cut down to size (while those close to Putin were co-opted by massive material inducements). Putin’s own dominance relied heavily on a modern-day version of medieval feudalism – on keeping the upper echelons of the state security service, the heads of the state bureaucracy, and top business leaders content by allowing them the trappings of power, advancement and wealth. No systematic ideological doctrine underpinned ‘Putinism’. A strong state and a forceful foreign policy aimed at restoring Russia’s status as a great power sufficed.

The growing assertiveness of Putin’s Russia and the critical stance of the European Union and the Council of Europe towards its breaches of human rights, inroads into judicial independence, and hardening of anti-democratic tendencies meant increasing mutual alienation rather than greater cooperation. The Partnership and Cooperation Agreement between Russia and the European Union, signed in 1997, was not renewed a decade later. Putin emphasized ‘the historical distinctiveness of the European civilizations’ and warned against trying to impose ‘artificial “standards” ’ on each other. He diverted discontent into resentment towards the West, increasingly portrayed as a threat rather than an ally.

Encroachment by the West on what Russia still thought of as its ‘sphere of influence’ was viewed with great unease. Following the expansion of NATO in the 1990s came the widening of the European Union in 2004. Even in what had once been parts of the Soviet Union itself the danger of penetration by the European Union could not be discounted. The pro-Western stance of the Georgian government – not least its bid to join NATO – under its new President, Mikheil Saakashvili, after the ousting in 2003 of President Eduard Shevardnadze (a close ally of Gorbachev and the last Foreign Minister of the Soviet Union), formed part of the background to the eventual Russian military intervention in Georgia in 2008. Of great concern, from Russia’s perspective, was the possible extension of Western influence into Ukraine. With the ‘Orange Revolution’ (named after the orange scarves worn by protesters) in Ukraine in 2004 this prospect loomed large.

Opposition, especially among young Ukrainians, to President Leonid Kuchma’s grossly corrupt, incompetent and highly brutal regime in Ukraine finally had a chance to express itself in elections at the end of October 2004. Kuchma (first elected in 1994) had served two terms in office. Constitutionally, he could not stand again. So he backed his Prime Minister, Viktor Yanukovych, who was declared the victor on 21 November (and warmly congratulated by Putin). The result was so obviously falsified – the true winner was plainly the popular Viktor Yushchenko, who had survived being poisoned (almost certainly by Kuchma’s security service) shortly before the election – that hundreds of thousands of people travelled to Kiev and defied the bitter cold to protest peacefully for a fair election. Their continued vigil, before the eyes of the world’s media, eventually forced a re-run of the election on 26 December, this time resulting in undisputed victory for Yushchenko, who was installed as President the following month.

Putin watched with concern. ‘Russia cannot afford to allow defeat in the battle for Ukraine’ was the view expressed in one journal with good connections to the Kremlin as the Orange Revolution unfolded. The fear was that Western-style democracy would spread to Russia itself. The Kremlin spent hundreds of millions of dollars in trying to ensure the election of Yanukovych. The United States poured funds into support for Yushchenko, who had openly declared his intention to apply for Ukraine’s membership of the European Union. Putin had little choice but to grind his teeth and accept the outcome of the Orange Revolution. But the lines of potential future conflict had been drawn. Would Ukraine look for its future to Western Europe, or to Russia?

Kuchma’s election in 1994 had firmly pointed Ukraine towards unity with Russia. But most of his support had come from the eastern parts of the country, those with especially close ties to Russia. Nowhere was his support higher than in Crimea, whose population was overwhelmingly Russian. Crimea had been transferred to Ukraine by Nikita Khrushchev in 1954, though the Russian parliament, forty years later, had actually voted to cancel the cession – not that the vote had practical consequences. From the Ukrainian side, Crimea had been leant upon in 1992 to rescind a parliamentary vote to declare independence from Ukraine. Crimea was the sharpest point of the division that ran through Ukraine, between a western half that in earlier times had been culturally aligned with Poland, Lithuania and Austria, and now looked to Western Europe for its future, and an eastern half that had culturally always fallen within the Russian orbit. The fissure was not healed by the outcome of the Orange Revolution of 2004. It would continue to fester.

* * *

By 2008 the traumas produced by the wars in Afghanistan and Iraq, prompted by the devastating terrorist attack on New York seven years earlier, had receded in Europe. And over the almost two decades since the fall of the Berlin Wall had symbolized the end of the division of the continent, Europe, east and west, had come closer together. Globalization had brought new levels of convergence, economic and political. The huge economic problems of Central and Eastern European countries during the transition to capitalist economies had significantly diminished. That substantial difficulties remained, and that the standard of living lagged behind that of prosperous Western Europe, did not contradict the great improvements in living conditions that had been made since the end of communism. Few, given a choice, would have voted to return to those times. And, meanwhile, the Euro had since its introduction in 1999 (and in actual monetary circulation since 2002) replaced national currencies in twelve Western European countries. The early years of the new currency had been encouraging. It was an important sign of an ever more closely interconnected Europe.

Politically, too, there were grounds for optimism. Millions in the former eastern bloc were now enjoying personal freedoms that had been denied them for over four decades. Whatever the obvious problems of adjustment, the European Union and the values that underpinned it had been greatly extended through the incorporation of the new member states in 2004 and 2007. The relative success of the spread of Western European values of liberal democracy and the rule of law contrasted starkly with the conditions in the zone dominated by Russia. The future looked bright.

Any veneer of self-satisfaction was, however, about to crack. Few Europeans were greatly alarmed in 2007 when news crossed the Atlantic that a number of big American investment banks were in trouble because they had over-extended high-risk credit through ‘sub-prime’ mortgages – loans to property buyers who might well be unable to repay them. An early sign of worry in Europe was the panic in Britain in September 2007 when depositors queued outside branches of the Northern Rock bank to withdraw their savings, forcing the British government to nationalize the faltering bank in February 2008. The panic quickly subsided. But so globally interdependent had the networks of investment and credit become that a crisis in the United States was bound to have consequences for the banking and finances of other countries, and for the world economy.

The moment when the crisis struck globally can be precisely determined. It was the filing for bankruptcy of the giant American investment bank Lehman Brothers on 15 September 2008. Within a month the European banking system faced imminent collapse. The optimism was over. A crisis-ridden era of austerity was about to dawn. The financial crash left Europe a changed continent.