Some of the attributes desirable in a modern leader (suggested on the first page of the Introduction) have proved valuable in political leadership throughout the ages – intelligence, good memory, courage, flexibility and stamina, among them. But leadership must be placed in context if it is to be better understood. In this chapter I’ll look at four different, but interconnected, frames of reference for thinking about leadership – the historical, cultural, psychological and institutional. Leadership is highly contextual and what is appropriate or possible in one situation may be inappropriate or unattainable in another. Leadership styles differ in war and peace and in a crisis as compared with calmer times. Within a democracy the opportunities open to a head of government are very different when the leader’s political party has a large majority in the legislature, a knife-edge majority, or no majority at all. What is conventionally hailed as strong leadership is not identical with good leadership, and the latter is not an abstract attribute but an appropriate response in a distinctive setting – in a particular time and place.
The times, moreover, are different in different places. This truth was well understood by a number of eighteenth-century scholars when they began to reflect seriously on the development of human society. Enlightenment thinkers in Scotland and in France first elaborated in the 1750s a four-stage theory of development which they believed went a long way to explaining the laws and institutions at each phase.1 Although excessively schematic in their approach – human development has been much less unilinear than their analyses suggested2 – these thinkers offered many pertinent insights. It was a theory of development which summed up existing knowledge and allowed for exceptions at each of the stages.3 Its most original exponent, Adam Smith, was a far from dogmatic thinker – one, indeed, who took a delight in finding exceptions to his every rule.4*
THE EVOLUTION OF GOVERNMENT AND OF THINKING ABOUT LEADERSHIP
Concerned to study ‘the progress of government’, Enlightenment thinkers attempted, among other things, to account both for the emergence of chieftains and monarchs and for the subsequent nature of leadership and followership. While intent on imposing a pattern on history, they drew on a wide variety of sources, ranging from the Old Testament to the literature of ancient Greece and Rome (especially the Roman historian Tacitus), and moving on to the accounts of travellers who had acquired familiarity with hunter-gatherer societies of their own time. Native American tribes were accorded particular attention. Some eighteenth-century writers suggested that leadership in the earliest stage of development of primitive societies went to the strongest or tallest man in the tribe. And other things being equal (a crucial qualification), higher than average height has continued to be a helpful attribute for the would-be leader.†
During the first phase of social development – that of subsistence based on hunting animals and living on ‘the spontaneous fruits of the earth’ – there was, Adam Smith observed, little that deserved the name of government.5 ‘In the age of hunters,’ he said, ‘there can be very little government of any sort, but what there is will be of the democratical kind.’ Smith recognized that leadership was not the same as power. Thus, in such very different settings as groups of hunter-gatherers and members of a club or assembly in eighteenth-century Britain, there would be some people of greater weight than others, but their influence would be due to ‘their superior wisdom, valour, or such like qualifications’ and it would be up to the other members of the group to choose whether or not to be guided by them. Thus, leadership, as distinct from power, was to be observed where all the members were ‘on an equal footing’, yet where there was ‘generally some person whose counsel is more followed’ than that of others.6 This is leadership in its purest form: being someone whom other people wish to be guided by and to follow.
It was the acquisition of property that led to a need for government,7 and in the second stage of development, that of shepherds, people began to acquire property in the form of animals. In the third stage they became husbandmen, cultivating the soil and gradually becoming owners of property in the form of land.8 The fourth phase of development for Adam Smith was the commercial stage, at which people began to engage in mercantile activity. (He never used the term ‘capitalism’. That was a mid-nineteenth-century coinage.) Smith’s somewhat younger near-contemporary, the French nobleman and government administrator Anne-Robert-Jacques Turgot, who developed a rather similar theory of stages of development, surmised that when ‘quarrels first took place in nations, a man who was superior in strength, in valour, or in prudence persuaded and then forced the very people whom he was defending to obey him’.9
For David Hume, nothing was ‘more surprising to those who consider human affairs with a philosophical eye, than the easiness with which the many are governed by the few’.10 He believed it probable that the ascendancy of one man over a great many began during ‘a state of war; where the superiority of courage and of genius discovers itself most visibly, where unanimity and concert are most requisite, and where the pernicious effects of disorder are most sensibly felt’.11 Moreover, Hume surmised that ‘if the chieftain possessed as much equity as prudence and valour’, he would become ‘even during peace, the arbiter of all differences, and could gradually, by a mixture of force and consent, establish his authority’.12
Adam Smith devoted still more attention to the problem of how some people gained ascendancy over others and of how both leadership and power developed alongside the growth of differentiation in social rank. In The Wealth of Nations, he noted four ways in which authority and subordination came about. Initially, personal qualifications, including strength and agility, were important. However, ‘the qualities of the body, unless supported by those of the mind, can give little authority in any period of society’.13 The second source of authority was age. ‘Among nations of hunters, such as the native tribes of North America,’ Smith wrote, ‘age is the sole foundation of rank and precedency.’14 But age also counts for much in ‘the most opulent and civilized nations’, regulating rank among people who are in other respects equal, so that a title, for example, descends to the eldest member (or eldest male) of the family. The third source of authority was ‘superiority of fortune’. Riches were an advantage for a leader at every stage of society, but perhaps especially in the second phase of development – the earliest which permitted great inequality.15 ‘A Tartar chief’, Smith observed, possessing herds and flocks ‘sufficient to maintain a thousand men’ will, in fact, rule over them:
The thousand men whom he thus maintains, depending entirely upon him for their subsistence, must both obey his orders in war, and submit to his jurisdiction in peace. He is necessarily both their general and their judge, and his chieftainship is the necessary effect of the superiority of his fortune.16
In the commercial stage of development, a man could have a much greater fortune and yet not be able to command more than a dozen people, since apart from family servants, no one would depend on him for material support. Yet, Smith observes, the ‘authority of fortune’ is ‘very great even in an opulent and civilized society’.17 In every stage of development in which inequality of wealth existed, it had counted for still more than either personal qualities or age.18 The fourth source of authority, which followed logically from the wide differentiation of wealth, was ‘superiority of birth’.19 By this Smith did not mean ‘old families’, a concept he ridicules, observing:
All families are equally ancient; and the ancestors of the prince, though they may be better known, cannot well be more numerous than those of the beggar. Antiquity of family means everywhere the antiquity either of wealth, or of that greatness which is commonly either founded upon wealth, or accompanied by it.20
Smith is highly sceptical of vast power being placed in the hands of an individual, noting that the apparent stability created by absolute monarchs is an illusion. Perverse and unreasonable behaviour by rulers establishes the right of the people to oust them, and an individual ruler is more likely to be guilty of this than a more collective government. As Smith puts it: ‘single persons are much more liable to these absurdities than large assemblies, so we find that revolutions on this head are much more frequent in absolute monarchies than anywhere else’.21 The Turks, Smith contends, ‘seldom have the same sultan (though they have still the same absolute government) above 6 or 8 years’.22 Addressing his student audience at the University of Glasgow in March 1763, Smith adds: ‘There have been more revolutions in Russia than in all Europe besides for some years past. The folly of single men often incenses the people and makes it proper and right to rebel.’23
The person who becomes a ruler in a primitive society – or ‘the chief of a rude tribe’, in the language of John Millar, one of Smith’s pupils and later a professorial colleague – earns such a position in the first instance by becoming commander of the tribe’s forces. This leads, though, to an attachment to his person and a desire to promote his interest.24 Millar, who adopted and elaborated the four-stages framework of analysis, followed Smith in arguing that differentiation of wealth became significant already in the second stage ‘after mankind had fallen upon the expedient of taming pasturing cattle’, and this had implications for social and political hierarchy:
The authority derived from wealth, is not only greater than that which arises from mere personal accomplishments, but also more stable and permanent. Extraordinary endowments, either of mind or body, can operate only during the life of the possessor, and are seldom continued for any length of time in the same family. But a man usually transmits his fortune to his posterity, and along with it all the means of creating dependence which he enjoyed. Thus the son, who inherits the estate of his father, is enabled to maintain the same rank, at the same time that he preserves all the influence of the former proprietor, which is daily augmented by the power of habit, and becomes more considerable from one generation to another.25
This applied very forcefully in the case of chiefs. As a man became more opulent, he was the better able to support his leadership and in many cases make it hereditary. Being richer than others, he had ‘more power to reward and protect his friends, and to punish or depress those who have become the objects of his resentment or displeasure’.26 Thus, other people had reason to court his favour, leading to an increase in the immediate followers of the ‘great chief, or king’.27
Monarchy, usually hereditary, and under a variety of names – kings, tsars, emperors, khans, chiefs, sultans, pharaohs, sheikhs, among others – became, indeed, the archetypal mode of political leadership across millennia and continents.28 There was huge variation among them in terms of despotism, arbitrariness, respect for law, and willingness to share some power.29 Before Napoleon Bonaparte came to power in France, monarchs in Europe as a whole (although no longer in Great Britain) claimed that their rule was based on ‘divine right’. However, as S.E. Finer observed: ‘Once Napoleon acceded, this hoary old political formula was on the defensive. It now appeared that any Tom, Dick, or Harry might come forward and seize the state, provided he had taken sufficient pains to make it appear that he had done so as the result of a call from the People.’30
British ‘Exceptionalism’
Limited monarchy and widespread civil rights and freedoms were relatively rare prior to the nineteenth century. The most striking exception was England – and subsequently Britain – which provided the classic case of very gradual transformation of hereditary rule from absolute power to limited monarchy and, by the twentieth century, to symbolic authority. It has been called ‘democracy on the installment plan’, although those who made concessions at each stage rarely had in mind a goal of full democracy. More often than not – as in the passing of the acts of parliament which widened the suffrage in nineteenth-century Britain – they believed that this latest step of reform was as far as one could go while still preserving liberty and the rule of law.31 Britain, nonetheless, saw over several centuries a gradual reduction in the power of monarchs and a leisurely rise in the power of parliament and of the accountability of politicians to an ever wider public.
Yet gradualism was not a smooth and uninterrupted process. It was most spectacularly interrupted in the middle of the seventeenth century. Civil war between 1642 and 1649 ended with the victory of parliamentary forces over those of the king and in the beheading of Charles I. Between 1649 and 1660, the British state was a republic. From 1653 until 1658 Oliver Cromwell ruled the country as Lord Protector, relying on his command of the New Model Army. The bickering that followed Cromwell’s death, however, led to the dominant grouping within the army favouring recall of the monarchy (in the shape of Charles II) – and a restoration of gradualism. But the short-lived ‘English Revolution’ left an imprint on the monarchy. When James Boswell’s father, Lord Auchinleck, was challenged by Samuel Johnson to say what good Cromwell had ever done, he responded (in Scots vernacular): ‘He gart kings ken that they had a lith in their neck’ (He made kings aware they had a joint in their neck).32
Parliamentary power was given a substantial fillip by the ‘Glorious Revolution’ of 1688. Charles II and especially his successor James II, having attempted to bypass and downgrade parliament, succeeded instead in putting an end to the Stuart dynasty. The belief that James, a Roman Catholic, was biased in favour of Catholics – and possibly attempting to reimpose Roman Catholicism as the country’s religion – was just one of a number of reasons for growing opposition to him. When influential opponents of James decided to offer the throne to James’s Protestant daughter Mary, her Dutch husband William of Orange insisted that if she were Queen, he would be King, not merely the Queen’s consort. The ‘revolution’, although it was hardly that, was termed ‘glorious’ largely because it was bloodless in England (although it was far from bloodless in Ireland and Scotland). James II fled the country, and William III and Mary became his successors. The trend towards greater parliamentary power and in the direction of enhanced governmental independence from the monarchy continued during the short reign of Queen Anne – which saw the creation of Great Britain with the union of English and Scottish parliaments in 1707 – and under her Hanoverian successors from 1714. By the twentieth century the gradual development of constitutional monarchy had come close to turning Britain into a ‘crowned republic’.
The American Constitution and its Legacy
The two most momentous breaks with monarchy in the history of government were the American Revolution and the French Revolution. The Founding Fathers of the United States who signed the Declaration of Independence of 1776 and the framers of the American Constitution in Philadelphia in 1787 disagreed on many matters, but were virtually united on one crucial issue – that the government of the United States must be republican, not monarchical or aristocratic.33 They took pains to enshrine a rule of law and protection for the freedoms of those who enjoyed the rights of citizenship. The American Constitution, however, was neither democratic nor intended to be by most of its framers. It did not outlaw slavery and it implicitly denied the vote to more than half the population – women, African-Americans and Native Americans.* It also deliberately tried to insulate the presidency from both ‘popular majorities and congressional rule’.34 It was the growth of support for greater democracy on the part of the American people, not the Constitution, which gradually turned the electoral college, set up to choose a president indirectly, into a de facto popular election, albeit one that was imperfectly democratic. As Robert A. Dahl has observed:
. . . the electoral college still preserved features that openly violated basic democratic principles: citizens of different states would be unequally represented, and a candidate with the largest number of popular votes might lose the presidency because of a failure to win a majority in the electoral college. That this outcome was more than a theoretical possibility had already been demonstrated three times before it was displayed for all the world to see in the election of 2000.35
The designers of the Constitution, in creating a presidency, made its holder the embodiment of executive power – which the president remains, in a way in which a prime minister within a parliamentary system is not, even though some holders of that office aspire to such a status and their placemen may encourage it. The American Constitution, however, is unambiguous. Article II, Section 1, begins with the sentence: ‘The executive Power shall be vested in a President of the United States of America’, and the first sentence of Section 2 of the same article makes the president the commander-in-chief of the armed forces. Yet, to reiterate: the framers of the Constitution never intended that the president should be chosen by popular election. Their aim was to put the choice of president into the hands of men of exceptional wisdom, rather than let the great mass of the people make such a momentous decision. They also took pains to ensure that the president would not be able to turn himself into a monarch in citizen’s clothing. By enshrining a separation of powers within the Constitution, and by placing serious constraints on the president’s ability to determine policy, they guaranteed that the president (in contrast with England’s first and last republican ruler, Oliver Cromwell) would not acquire the equivalent of kingly powers.
The participants in the Convention which met in Philadelphia in 1787 came up with two innovations in the practice of government – a written constitution and a federal division of powers. Thus, the president’s power was limited by a codification of law on the political system which set out the powers of various institutions. This document, the Constitution, became, in the words of de Tocqueville, ‘the fount of all authority’ within the republic.36 Presidential power was limited also by the way the Constitution divided authority between the federal government and the states, with each entitled to autonomy in their own separate spheres. This was qualitatively different from mere decentralization – that could be found in some other countries – since it meant that, in principle, neither could encroach on the jurisdiction of the other. As the first country consciously and deliberately to embrace both constitutionalism and federalism, the USA significantly influenced the adoption of those broad principles elsewhere, although the actual institutional arrangements outlined in the American Constitution remained unique to the United States.
The constitution and the federal division of powers in the USA put novel limits on the power of chief executives, as did the special place accorded to law in the practice of American politics, with the rule of law coming close at times to the rule of lawyers. The ‘most legalistic constitution in the entire world’, as Finer described it,37 has meant that decisions that could quite properly be taken by a popularly elected government anywhere else in the world have aroused legal challenge in the United States. Thus, when President Barack Obama succeeded in 2010 in getting a comprehensive healthcare bill passed, albeit one that still did not bring medical provision for the whole population up to the level taken for granted in other advanced democracies, the Supreme Court took it upon itself to consider the constitutionality of the Patient Protection and Affordable Care Act.38 Since the votes of most of the members of the Court could be predicted on the basis of their political and social predilections, it was only the surprising decision of the conservative Chief Justice John Roberts which enabled the healthcare legislation to be deemed constitutional by five votes to four.39 Many of the Supreme Court’s decisions appear to be a continuation of politics by other means. The distinguished legal theorist Ronald Dworkin even suggested that Roberts wished to uphold the act ‘for public relations reasons’ rather than on genuine legal grounds.40 Nevertheless, it was the Supreme Court which took the ultimate decision. More than a century and a half ago, de Tocqueville wrote: ‘There is hardly a political question in the United States which does not sooner or later turn into a judicial one.’41
The French Revolution
Large though the international impact of the American Revolution was, that of the French Revolution was still greater.42 Whereas the Americans had asserted the right to govern themselves, the French revolutionaries made larger claims. They believed that they were creating a model for the rest of the world – for Europe in the first instance. Even twentieth-century revolutionaries, such as the Russian Bolsheviks, often themselves invoked comparisons with the French Revolution and its aftermath – from identification with the Jacobins to fear of Bonapartism.43 The French Revolution was, in principle, democratic and egalitarian in a way in which the American Revolution was not. There was, however, an important contrast between the American Constitution and its Bill of Rights, on the one hand, and the French Revolution and its Declaration of the Rights of Man and the Citizen, on the other, that was in the longer term to the advantage of the former. The American rights were specific and legally enforceable; the French rights were general declarations of intent.44
French monarchical rule had been inefficient and oppressive, but not more so than in many another European country, and there was already more freedom in France than in most of Europe. An essential added ingredient which inspired many of the revolutionaries was the ideology of popular sovereignty and equality, the ideas of the ‘radical Enlightenment’, which are part of the explanation of why the revolution took the form it did. Among the changes the French Revolution inaugurated were a transformation of the legal system, the removal of feudal privileges, the ending of ecclesiastical authority, proclamation of the universal suppression of black slavery, changing the laws of marriage and introducing the possibility of divorce, and emancipation of Jews.45 There is still lively argument not only about the causes of the French Revolution but also about when it began and ended, although the storming of the Bastille on 14 July 1789 has come to symbolize the destruction of the authority of the old regime and the forcible assertion of popular sovereignty.
Some of the political innovations which came with the French Revolution have had a lasting impact – including the notions of ‘left’ and ‘right’ in politics, based on the seating arrangements in France’s National Assembly, and the concept (or slogan) of ‘liberty, equality and fraternity’. Of continuing influence also has been the French revolutionary assertion of secular and anti-clerical values, going beyond an attempt to replace one religion, or branch of a religion, by another. Whether religious or secular authority should be politically supreme is still a live leadership issue in many parts of the world today, but nowhere in contemporary Europe are religious leaders able to dictate the policy of governments. Notwithstanding a general hostility to religion, the French Revolution was soon creating its own rituals and myths, and it subsequently employed terror on a scale that dampened an initial enthusiasm elsewhere in Europe for the French example and went some way to discredit the ideas it had embodied. That process of disillusionment continued when the early chaotic egalitarianism gave way to revivified hierarchy, military adventurism and a new autocracy. This was especially so after the collective executive, the Directory, which had come to power in 1795, was overthrown in 1799 by Napoleon Bonaparte, who went on to establish his dictatorial power. In a reversal of many of the ideals of the revolution, Napoleon crowned himself emperor, in the presence of the Pope, in 1804. The French Revolution was the first serious attempt to refound a state on the basis of radical ideas of equality and democracy. It was not to be the last time that a revolution galvanized by similar beliefs would end in autocratic rule by a strongman.
The Evolution of Democracy and of Democratic Leadership
In the course of the nineteenth century ever more social groups acquired a foothold in the political system in much of Europe and in America as economic status ceased to be a determinant of the right to vote. Even in America, however, property requirements long restricted the right to vote, and universal white male suffrage was achieved at different times in different states. By the 1860s it was largely complete. Non-white males were debarred from voting until 1870 when the passing of the Fifteenth Amendment to the Constitution enfranchised them – in principle. It came just five years after the Thirteenth Amendment had abolished slavery. The Fifteenth Amendment was not, however, sufficient to prevent southern states putting obstacles in the way of black Americans’ exercise of their voting rights. Even in the later years of the twentieth century, a number of states still found ways of restricting the voting opportunities of their fellow citizens of African descent. The best response to the bigots was the election of the son of a white American woman and black African father as president in 2008 and the re-election of Barack Obama in 2012. In the first of these elections Obama won a higher percentage of white voters (43 per cent) than did John Kerry (41 per cent) in 2004.46
In many countries of Europe the last third of the nineteenth century saw important extensions of the right to vote, as it was delinked from property ownership. France had universal male suffrage from 1871 and Switzerland followed suit in 1874. In Britain the extension of the suffrage was so gradual that almost a quarter of adult men were still voteless at a time when they were being conscripted for service in the First World War. It was the lack of votes for women, however, which ensured that an absolute majority of the adult population throughout Europe and America were disenfranchised prior to the twentieth century. It is, therefore, hardly appropriate to call any European country or the United States of America democratic earlier than the last hundred years or so. That is notwithstanding the fact that some countries, not least the United States and Britain, were notable in the nineteenth century (and, indeed, well before then) for the extent of their freedoms and political pluralism and the existence (however flawed) of a rule of law. More generally, there was in Europe and America a gradual but uneven growth of government by persuasion.47 At a time, however, when both women and African-Americans were denied the vote in the nineteenth-century United States, Alexis de Tocqueville was premature, albeit in many other ways prescient, in calling the remarkable book he wrote in the 1830s Democracy in America.
The development of democracy in the twentieth century, with the advent of female suffrage, had important implications for political leadership. Not the least of these was the entirely new possibility of a woman being chosen to head an elected government. It was as late as 1893 that the right of full adult suffrage was extended to women, and even then in one country only – New Zealand. Within Europe, it was from Scandinavia (characteristically) that the lead came in extending women’s rights, Finland and Norway being pioneers of women’s suffrage in 1906 and 1907 respectively. In most countries, the United States and Britain among them, women got the vote only after the First World War. Enfranchisement of women in the US came in 1920 with the passing of the Nineteenth Amendment. Unlike the constitutional amendment of half a century earlier, which had abolished the colour bar to voting, this new provision was not one that states sought to circumvent. In the UK votes for women came in two stages – for those over the age of thirty in 1918 and for women aged twenty-one or older in 1928. At long last, that brought them into electoral equality with men.
The political advance of women has been an essential component of democracy, but it took some time for votes for women to pave the way for their elevation to positions of political leadership. Sirimavo Bandaranaike in 1960 became the world’s first woman prime minister. She acquired this position in Ceylon (now Sri Lanka), having been persuaded by the Sri Lanka Freedom Party to become their leader, following the assassination of her husband who had been the party’s founder. Centuries earlier women had, of course, at times held the highest political office, but as hereditary monarchs, with none more illustrious than Elizabeth I in sixteenth-century England and Catherine II in eighteenth-century Russia. Until the second half of the twentieth century, however, women had not headed governments as leaders of political parties which had won popular elections. Yet by 2013 more than eighty women had held the highest elected governmental office in a wide variety of countries, spanning every continent of the world. These included Golda Meir, Israeli Prime Minister from 1969 until 1974; followed by (to take only some of the more notable European examples) Margaret Thatcher in Britain in 1979; Gro Harlem Brundtland in Norway in 1981; Angela Merkel as Chancellor of Germany in 2005; Helle Thorning-Schmidt in Denmark (2011); and Norway’s second woman prime minister, Erna Solberg, in 2013.
Contrary to most people’s expectations, women leaders emerged earlier and more often in patriarchal Asian societies than in Europe or North America (where, although Canada has had a woman premier, the United States awaits its first woman president). Indira Gandhi became Indian prime minister as early as 1966. However, in all the Asian cases, there has been a family connection to an important male politician – father or husband. Thus, significant breakthrough though this was, the emergence of women leaders on the Asian continent can also be seen as a new variation on the theme of hereditary rule and dynastic politics. Bandaranaike took the place of her slain husband. Mrs Gandhi was the only child of the first prime minister of independent India, Jawaharlal ‘Pandit’ Nehru. Corazon Aquino, President of the Philippines from 1986 to 1992, was the widow of Benigno ‘Ninoy’ Aquino, the most respected political opponent of the authoritarian and corrupt Ferdinand Marcos – an opposition for which he paid with his life. Benazir Bhutto, prime minister of Pakistan from 1988 to 1990 and again from 1993 to 1996, was the country’s first woman head of government. Her father, Zulfikar, had been successively president and prime minister of Pakistan in the 1970s. Their deaths were emblematic of the violence and volatility of Pakistani politics, with Zulfikar hanged in 1979 for the alleged political murder of an opponent, and Benazir killed by a bomb while she was election campaigning in December 2007. The first woman president of South Korea, Park Geun-hye, was democratically elected in December 2012 and took office in February 2013. She is the daughter of Park Chung-hee, the authoritarian president of South Korea in the 1960s and 1970s who was killed by his intelligence chief in 1979. Even the remarkable Burmese opposition leader, Aung San Suu Kyi, whose leadership of the democratic resistance to the military dictatorship led to long years of house arrest, owed her initial prestige to being the daughter of Aung San, the assassinated leader of the Burmese independence struggle.
The family connection was important also in the emergence of the earliest women leaders in Latin America. Without ever holding the highest political office, Evita Perón, the second wife of Argentina’s first post-World War Two president, Juan Perón, became influential both during her life and after her death. In particular, she was a significant influence on the achievement of female suffrage in Argentina in 1947. And it was Perón’s third wife, Isabel, who became the first woman President of Argentina, on her husband’s death in 1975. More recently, however, women leaders have been elected in Latin America without needing any dynastic connection. Although Cristina Fernández in Argentina conforms to the earlier pattern, having succeeded her late husband, Néstor Kirchner, neither Dilma Rousseff in Brazil nor Michelle Bachelet in Chile needed any such family connection. They came to prominence entirely on the basis of their own efforts and abilities and to power as a result of their high standing within their parties and countries. Bachelet, who belonged to the essentially social democratic Chilean Socialist Party, was President of Chile from 2006 to 2010, and Rousseff, a member of the Brazilian Workers’ Party, was elected President in succession to Lula da Silva in the latter year. One thing the two women did have in common is that they had been active opponents of military dictatorship and that both were subjected to persecution, including torture, when they were militants resisting authoritarian rule in their countries.
CULTURAL CONTEXT
Recent anthropological research has expanded our understanding of the development of leadership over time and in different societies. It has fleshed out with new evidence, and simultaneously modified, some of the ideas of Enlightenment theorists outlined earlier in this chapter. It is clearer than ever that there has been a wide variety of ways of reaching decisions in pre-modern communities. There are many egalitarian hunter-gatherer societies in which no one person has been designated as leader and others which have chiefs.48 Moreover, since hunter-gathering has been the mode of subsistence of human beings during 99 per cent of their existence on earth, it is unsurprising that there should have been variation at different times and in different places in the ways these groups reached agreement and resolved disagreements.49 The American scholar Jared Diamond has noted that the size of the group is important. If it consists of no more than several hundred people, all of whom know one another and form a kinship group, the group can get by without a chief. Diamond writes:
Tribes still have an informal, ‘egalitarian’ system of government. Information and decision making are both communal . . . Many [New Guinean] highland villages do have someone known as the ‘big-man’, the most influential man in the village. But that position is not a formal office to be filled and carries only limited power. The big-man has no independent decision-making authority . . . and can do no more than attempt to sway communal decisions. Big-men achieve that status by their own attributes; the position is not inherited.50
In some instances, however, big-men could over time transform themselves into chiefs and when they did so, the anthropologist Marshall Sahlins argued, they used their leadership to subvert the egalitarian norms of the tribe, demanding economic dues and forcing people to produce more than was needed for subsistence. Initially such chiefs were constrained by the belief that all the members of the tribe were part of an extended family, but some of their number went on to repudiate the ties of kinship and to engage in more ruthless exploitation.51 Thus, what began as leadership and persuasion turned into power and coercion. Chiefdoms, as distinct from bands or tribes with no one granted supreme authority, appear to have first arisen some 7,500 years ago.52 Tribal associations of people tended to develop into societies headed by chiefs when ‘the local population was sufficiently large and dense’ and there was ‘potential for surplus food production’. The larger the group, the more difficult it was to avoid the emergence of a leader who was in some, but not all, cases authoritarian. Different pre-modern societies have had their own distinctive features.53
Political life in African states, which have generally come under indigenous rule only from the later decades of the twentieth century, frequently bears the imprint of earlier forms of social organization. When British colonies were accorded independent statehood (usually following political struggle) and presented with a constitution based on the ‘Westminster model’, deeper cultural traits often trumped formal institutions, and any similarity to Westminster became increasingly difficult to discern. Thus, African leaders have tended to operate ‘through highly personalized patron-client networks’ that are usually, but not always, based on ethnic and regional groupings. Within these networks there are generally ‘Big Men’ who wield disproportionate influence and ‘circumvent the formal rules of the game’.54 A persistent problem of African states has been the fact that boundaries that are a legacy of colonial conquest forcibly brought together peoples of different ethnic identities and religion who had little in common. One of the most challenging tasks of political leadership was to create a sense of national identity. Presidents Julius Nyerere in Tanzania and Nelson Mandela in South Africa were unusually successful in doing so.55 Good institutions clearly are important, but much depends on the quality and integrity of leadership. If leaders themselves circumvent the institutions and thus undermine their legitimacy, then sound structures will not be enough.
Thus, leadership matters, but it is visionary and inclusive leadership which the poorest and most divided societies need, not a strongman. Many of the most impoverished countries of the world are among the most ethnically diverse. This compounds the problem of making electoral competition work, for there is a strong tendency for voting (to the extent that the election is reasonably free) to be along lines of ethnic loyalty. The temptation is to conclude that what is needed by the kind of ethnically diverse society in which most of the bottom billion of the world’s poor live is ‘a strongman’.56 On the basis of long observation of African states and of statistical analysis of factors conducive to inter-communal violence, Paul Collier begs to differ. Noting the damage that violence does to the prospects for economic growth, in addition to its devastating immediate effects on people’s lives, Collier concludes that ‘bad as democracy is’ in ethnically diverse failing states inhabited by the world’s poorest people, ‘dictators are even worse’.57
Political Culture
My main concern in the present context is, however, with locating political leadership within the political cultures of modern societies. A focus on political culture means attending to those aspects of culture which are of relevance to politics. It also provides a link between history and politics, for deep-seated cultures, as distinct from ephemeral attitudes, are a product of the historical experience of nations and groups (although less history as distilled by professional historians than history as popularly perceived). The concept of political culture and, still more, its parent concept of culture have been defined in a great many different ways.58 In essence, however, a political culture embodies what people take for granted as appropriate or inappropriate behaviour on the part of governments and citizens; people’s understandings of the means by which political change may be brought about; their perceptions of the history of their group or nation; and their values and fundamental political beliefs.59 Students of values accept that they can alter over time, but contend that, as a rule, they change only gradually.60 Fundamental political beliefs refer not to whether people support one or another political party, but to something more basic – whether, for example, they believe that all citizens have the right to influence their leaders and help determine political outcomes or, on the contrary, they hold that what happens in government must be left in the hands of their rulers who, like the winds and the waves, are not (and should not be) subject to the sway of ordinary mortals.
Political cultures in complex, modern societies are not homogeneous. Most countries are, in fact, ethnically diverse and contain also people of different religious faiths and of none. In the more successful of them, value is attached to what they, nevertheless, have in common. They are characterized also by broad agreement on the ways in which political change may be brought about, even though, in a democracy, the content and direction of the change will remain objects of contention. It is always an oversimplification to speak about the political culture of a particular nation. Nations and states contain a number of sub-cultures. In some cases, even allegiance to a political party can be a signifier of this. Members of the Communist Party or of a conservative Catholic party in Fourth or Fifth Republic France belonged to very different sub-cultures. Yet, there are often some beliefs broadly accepted in one society which are by no means taken for granted in another.61 In one country there may, for example, be a widespread willingness to accord a leader uninhibited power for the sake of ‘order’ (seen as the supreme value), whereas in another the emphasis is on constraining the power of the top leader and making him or her legally and politically accountable. Historically, Russia has been an example of the first and the United States of America of the second.
Leaders, then, operate within political cultures which are not immutable but which tend to change slowly. Suppression of freedom of the press by an American president, Canadian prime minister or French president would meet cultural as well as institutional resistance. Indeed, during his single term of office as President of France, Nicolas Sarkozy came under strong domestic attack for an alleged willingness to use the security services to investigate critical journalists.62 Italy has been a flawed democracy in the post-Second World War period, but a democracy nevertheless.63 Thus, there was substantial opposition within the society to Prime Minister Silvio Berlusconi’s use of his media empire to curtail criticism and debate. In Russia, there has never been a fully fledged democracy, although a vigorous political pluralism emerged in the second half of the 1980s. Over the past two decades that has become progressively attenuated. There was, though, a break with the passivity and conformism of the previous decade in 2011 and 2012 when rigged parliamentary elections brought tens of thousands of protesters on to the streets of Moscow and (in much smaller numbers) in other cities. The twenty-first-century harassment of opposition leaders, accompanied by state-enforced conformism of the mass media, has, however, evoked protests from only a small minority of the population. A democratic political culture grows out of lengthy democratic experience, and such experience in Russia has been both incomplete and short-lived.
Yet political cultures change over time in an interaction between institutions and values. It is a two-way relationship. Long experience with democratic institutions helps to mould and consolidate democratic values. But there are instances where the predominant influence is from the other direction. They may arise when an authoritarian regime has been imposed on a country and the new rulers promote an ideology which is at odds with well-established and widespread beliefs within the society. A good example of this was Czechoslovakia, which existed from 1918 until the end of 1992 (following which the Czech Republic and Slovak Republic became separate states). It was the most democratic state within central Europe between the two world wars, and was led for most of that time by its main founder, Thomas Masaryk. In the years immediately after the Second World War that First Republic was denigrated by Communists and linked in many people’s minds with the unemployment of the 1930s and, above all, with the collapse of the republic in the face of Nazi aggression. Yet Czechs (more than Slovaks) came to perceive their inter-war democracy much more positively after two decades of Communist rule than they had done in the early post-war period. A 1946 survey asked Czech citizens to say which period of Czech history they considered to be the most glorious. The First Republic (1918–1938) was named by only 8 per cent of respondents and came only fifth in the ranking of ‘glorious’ periods. When the question was repeated in 1968, the First Republic topped the list with the support of 39 per cent of Czechs.64 By the 1960s many Czech and Slovak Communists were themselves re-evaluating the advantages of political pluralism, and also the moral and political stature of Masaryk, after their experience of Soviet-style oppressive rule.
In the early post-war years there had been genuine enthusiasm in Czechoslovakia for ‘building socialism’. Yet bureaucratic authoritarian rule, accompanied by political police surveillance and repression, was not what the more idealistic of young Czech Communists had sought or expected. The contrast between the depressing reality and their ideals led over time to some serious rethinking. Reform was also stimulated by Nikita Khrushchev’s attack on Stalin in a closed session of the Twentieth Congress of the Communist Party of the Soviet Union in Moscow in 1956 and then again, openly, at the Twenty-Second Congress in 1961. What became known as the Prague Spring was the culmination of a reform movement inside the Communist Party of Czechoslovakia itself. However, in the more tolerant and rapidly changing atmosphere of 1968 the broader society was revitalized. Civic groups representing the non-Communist majority of the population sprang up. The process – especially the political reforms endorsed by the Communist Party leadership – so alarmed the Soviet Politburo that they sent half a million troops in August of that year to put a stop to it.
The top party leader, Alexander Dubček (a Slovak by nationality), was not himself a radical reformer, but he was a good listener who preferred persuasion to coercion and tolerated critical discussion and a partial pluralization of the system. In the eyes of senior Soviet leaders, he became ‘the Number One Scoundrel’.65 Although Dubček’s role was that of facilitator rather than driving force, his succeeding the hardline Antonín Novotný as party leader at the beginning of 1968 was of great importance. In a highly authoritarian, strictly hierarchical political system, a change at the top of the hierarchy to a leader possessing not only a different style but also more humane values could make a huge difference. In general, the more power is concentrated in the office, the greater the potential significance of the change of leader occupying it.
Cultural influence, an important fact of political life, should never be taken to mean cultural determinism. Transnational influences, cutting across national cultures, have been important for centuries and seldom more so than in the last decades of the twentieth century and in the twenty-first when the means of instant communication between countries and continents are more numerous than ever before. Within any modern state, moreover, there is a variety of cultural traditions that can be drawn upon. Czechs were fortunate in having a past leader who embodied democratic values and who could become a potent symbol for those seeking change. Photographs of Masaryk were being sold on the streets of Prague in 1968 (I bought one there myself in that year), then banned for the next twenty years, only to re-emerge in late 1989. And this time, what became known as the ‘Velvet Revolution’ met no resistance from Moscow.
Some countries under authoritarian or totalitarian rule have a less usable past than that which Czechs could draw upon. It helps to have had past experience of democracy and to have symbols of democracy and freedom to quarry. A less propitious political cultural inheritance, from a democratic standpoint, does not, however, mean that nations are destined to spend the rest of time under dictatorial rule. Far from it. Every country in the world today which is regarded as democratic was at one time governed by authoritarian warlords or by an absolute monarch.
Leaders can be especially important at times of transition from authoritarianism to democracy. The depth of their commitment to democratic values is liable, in periods of political turmoil, to be decisively important both in securing such a breakthrough and in sustaining it. Mikhail Gorbachev, as I shall argue in Chapter 4, was a transformational leader, but he and his allies in the Soviet Union had an uphill struggle. There were not only powerful vested interests opposed to the radical changes which the last leader of the Soviet Union initiated, but also important strands in Russian political culture that could be drawn upon by his opponents. They have been among the underpinnings of the rule of post-Soviet Russian leaders as they whittled away the checks on the power of the top leadership that had emerged in the last years of the Soviet Union and retained democratic forms while depriving them of most of their democratic substance. There has been a relapse into modes of conformist thinking whereby it becomes natural as well as prudent not to challenge the authority of the powers that be. In Russia a leader’s supposed ‘popularity’ is often an effect of ‘his perceived grip on power’. An interview with a woman voter in the run-up to the 1996 presidential election provided an apt illustration of this. Asked whom she supported, she named the Communist Party candidate, Gennady Zyuganov, but said she would be voting for Boris Yeltsin. To the question why, she replied: ‘When Zyuganov is president, I will vote for him.’ Power is deemed to confer authority and, in turn, commands respect and allegiance. As Ivan Krastev and Stephen Holmes have observed, if Putin ever becomes ‘just one of several genuinely plausible candidates for the post of president, he would no longer be the Putin for whom an opportunistically deferential electorate was eager to vote’.66
Survey research has provided much evidence of attachment to a tradition which links legitimate government to the rule of a strongman. In the year 2000 the institute headed by Yuriy Levada (until his death in 2006 the respected doyen of Russian public opinion researchers) polled fellow citizens on which of their leaders in the twentieth century they considered the most outstanding. The top five who emerged were different personalities in many ways, but the one thing they had in common was hostility to democracy. They were, at best, authoritarian and, at worst, totalitarian leaders. Josif Stalin came top with Vladimir Lenin in second place. Third was Yuriy Andropov who headed the KGB for fifteen years and was leader of the Communist Party of the Soviet Union from 1982 until his death in early 1984. Leonid Brezhnev, Soviet leader from 1964 until 1982, occupied the fourth slot, and in fifth place came the last tsar, Nicholas II, who was overthrown in 1917.67
There have been other surveys, it is important to add, which suggest that there is more support for democratic principles among the population of Russia than is evinced by the political elite. Only a minority of Russians believe that they are living under democracy, but a majority regard it as an appropriate way to govern their country. Yet, in reporting these results, Timothy Colton and Michael McFaul note also the less encouraging findings that when Russians were forced to choose between democracy, on the one hand, and a strong state, on the other, only 6 per cent preferred democracy.68 Consonant with such a preference, three surveys conducted in the Russian city of Yaroslavl in 1993, 1996 and 2004 found over 80 per cent of respondents agreeing with the statement that ‘talented, strong-willed leaders always achieve success in any undertaking’, while some three-quarters agreed that ‘a few strong leaders could do more for their country than all laws and discussion’.69
Not only, however, are there different sub-cultures within Russia, as within any modern state; there are also particularly striking generational differences. In the Levada survey already cited, respondents were allowed to name only one person as the greatest leader of their country in the twentieth century. Those who chose Stalin and those who named Gorbachev clearly belonged to very different sub-cultures, given the chasm between the values and policies of these two men. Gorbachev occupied sixth place in that survey, named by 7 per cent of respondents. There were, however, very significant differences linked to age and education. Stalin’s support was highest among those aged fifty-five and over and lowest among the eighteen- to twenty-four-year-olds. Of the three levels of educational attainment – higher, middle or ‘less than middle’ – Stalin’s support was lowest among those with higher education. With Gorbachev it was the other way round in terms of both educational level and age groups. He was seen as the greatest leader of the century by 14 per cent of respondents with higher education, the same percentage from that highly educated section of the population as chose Stalin as the greatest.70 In a survey conducted in 2005 there were similar age-related differences in Russian attitudes to the unreformed Soviet system. Asked whether it would have been ‘better if everything in the country had remained as it was before 1985’ (the year Gorbachev became leader), 48 per cent agreed with that statement. Whereas, however, 66 per cent of the over-fifty-fives agreed, only 24 per cent of the eighteen to twenty-four age group accepted that proposition.71
Political cultures are historically conditioned, but we should never underestimate the impact of the history that people themselves actually live through. Yet, how they interpret that experience is likely to be heavily influenced by the values and beliefs they have imbibed in childhood and youth. Studies of the acquisition of political outlooks in established democracies have shown that parental political partisanship ‘has a major effect on the flow of political information to offspring’.72 The same is doubtless true within societies under authoritarian rule. Especially in states where Communist regimes were imposed from without, socialization within the family could be a decisively important counterweight to the state educational system and the official mass media. In the case of Poland, the influence of parents – and, linked to this, the influence of the Catholic Church – was greater than that of a party-state which never overcame the obstacle to its legitimacy of having been imposed, essentially, by Soviet force of arms. A mighty secular leader was far less likely for Poles than for Russians to be seen as the answer to their problems, still less their prayers.73
PSYCHOLOGICAL DIMENSIONS
The pursuit of power and wealth is often seen – especially by many contemporary economists and their fellow-travellers among political scientists – as a game played by rational actors in defence of their self-interest. Paradoxically, though, even the motivation for money-making – except for those so poor that it is closely related to survival – is often not primarily economic. In the words of Daniel Kahneman (a psychologist who was awarded the Nobel Prize in Economics): ‘For the billionaire looking for the extra billion, and indeed for the participant in an experimental economics project looking for the extra dollar, money is a proxy for points on a scale of self-regard and achievement.’74 As usual, Adam Smith was wiser than those who interpret his theories as an unalloyed defence of economic self-interest and who view that as the governing principle of society. Smith was well aware of the non-rational element in life generally, including the way people react to major political events. He noted, for example, that ‘all the innocent blood that was shed in the civil wars, provoked less indignation than the death of Charles I’.75 ‘A stranger to human nature,’ Smith observed, ‘would be apt to imagine, that pain must be more agonizing, and the convulsions of death more terrible to persons of higher rank, than to those of meaner stations.’ He turns this reflection into a psychological explanation for social and political hierarchy, one which complements his ideas about the relationship of forms of government to the means of economic subsistence. Writing in The Theory of Moral Sentiments, Smith contends:
Upon this disposition of mankind, to go along with all the passions of the rich and powerful, is founded the distinction of ranks, and the order of society. Our obsequiousness to our superiors more frequently arises from our admiration for the advantages of their situation, than from any private expectations of benefit from their good-will. Their benefits can extend but to a few; but their fortunes interest almost every body.76
It is mainly the wise and virtuous – who form, Smith observes, ‘but a small party’ – who are ‘the real and steady admirers of wisdom and virtue’. In contrast: ‘The great mob of mankind are the admirers and worshippers, and, what may seem more extraordinary, most frequently the disinterested admirers and worshippers, of wealth and greatness’ (italics added).77
To that disposition to admire ‘wealth and greatness’ may be added a tendency of many observers to take individual rulers – whether monarchs, presidents or prime ministers – at their own high valuation of themselves, sustained, as it is, by flattery and the hopes of preferment of some of those around them. A number of books on leadership do now pay more attention than in the past to followers and their complex relationship with leaders.78 Timid and gullible followers, it is postulated, get the bad leaders they deserve. Leaders rely on ‘true-believer’ followers who will recruit other followers to promote their heroic image and to spread their message. Therefore, ‘to the extent that leaders’ reliance on followers is ignored, so the autonomy of the leader is exaggerated’.79
Obeisance to authority figures can allow ‘toxic leaders’ in many professions – not only politics – to survive in office when they should be driven from it. Jean Lipman-Blumen has noted a widespread tendency to ‘prefer toxic leaders to those disillusioning leaders, who would press our noses to the dark window of life’.80 Many leaders, of course, are neither ‘toxic’ nor gloom-laden. Indeed, a leader needs to be able to instil hope and provide reasons for optimism, even while being honest about the scale of problems to be overcome. Winston Churchill performed that task par excellence as British wartime prime minister. As American president, Jimmy Carter identified many of the problems facing the United States, but was much less successful at boosting morale. An intelligent and upright leader, Carter has, nevertheless, been characterized as ‘slightly too pious and nearly joyless’.81 He tried to do too much himself, and his reliance on rationality, uncluttered by emotional appeal or political sentiment, was too great for him to be fully effective in achieving his policy goals. While Carter was still in the White House, one of his former aides identified as a problem of his leadership a failure ‘to project a vision larger than the problem he is tackling at the moment’.82 Carter had a far more detailed grasp of the issues than his successor, Ronald Reagan, but the latter’s sunny optimism went a long way towards helping him win the 1980 presidential election. There is much evidence from studies of American politics that ‘people vote for the candidate who elicits the right feelings, not the candidate who presents the best arguments’.83
Leaders often give themselves credit for a particular success, even when there is no evidence that they have done anything in particular, or even anything at all, to bring it about.84 As social psychologists Alexander Haslam, Stephen Reicher and Michael Platow put it: ‘There is no mystery as to why leaders themselves are attracted to the idea of heroic leadership. First, it legitimates their position by providing a rationale for claims that they, rather than anyone else, should hold the reins of power . . . Second, it frees them from the constraints of group traditions, from any obligations to group members . . . Third, it allows leaders to reap all the benefits of success while often avoiding the pitfalls of failure.’85 Pronouns can be revealing. Thus, the more self-regarding of leaders’ accounts of their exploits can be summarized as ‘I lead, you blunder, we fail.’86 More generally, as Kahneman has observed: ‘We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers.’87
The recent attention paid to followers as well as leaders is welcome. A focus, however, only on the one person at the top of the hierarchy and on people who may reasonably be described as his or her followers leaves out an important category of leaders. Within a democratic government – and even in some authoritarian regimes – there are people of substance within the leadership group who should not be regarded as ‘followers’ of the top leader. They may, indeed, have played as important a part as the official leader – sometimes an even more important one – in such successes as the government enjoys. That would not be news to serious biographers of some of the major figures in governments who became neither president nor prime minister. Yet it is much less discernible in books that seek to generalize about political leadership.
It is an axiom of institutional analysis that within bureaucracies where you stand depends on where you sit.88 And it is true up to a point. To take the most obvious example, officials within a government’s Department of Health or Department of Education (still more the politician in charge of the department) will generally seek substantially increased budgets for their respective spheres of health or education. The primary preoccupation of a Treasury official, in contrast, will be to keep government spending within the bounds of financial prudence. Winston Churchill is not generally regarded as a politician who favoured reducing military expenditure, but when he was Chancellor of the Exchequer, he demanded (in 1925) deep cuts from the Admiralty and called for a smaller navy, although as First Lord of the Admiralty before the First World War, he had successfully pressed for a huge increase in naval expenditure.89 More generally, what is a major concern for one department may be a matter of little interest or low priority for another.
One of the many suggestive findings, however, of social and political psychology, which complements what we know about institutional roles, is that where you stand depends also on what you see.90 Misperception of facts feeds back into values and helps to shape particular views.91 Thus, in the 1990s a fifth of Americans thought that what the government spent most money on was foreign aid – at a time when it took about 2 per cent of the budget.92 This strengthened hostility to spending money for that purpose. It is well known that in their perceptions people tend to screen out information that is at odds with their pre-existing beliefs and will find a variety of imaginative means to view decisions they have made as reasonable and justified, including those which display inconsistency between their actions and professed principles.93 People selectively process and interpret information so that it does not challenge their previous assumptions in uncomfortable ways. Perceptions of political reality are ‘inextricably intertwined with citizens’ political preferences and identity’. Thus, studies of televised American presidential and vice-presidential debates found that ‘people’s perceptions of who “won”’ were ‘strongly colored by their prior opinions about the candidates’.94*
A large body of evidence testifies to the fact that emotions matter greatly in politics95 – to such an extent that we need to add to the other determinants of political stance: where you stand depends on what you feel. Rationality and people’s perceptions of their interests are far from being irrelevant to the choices they make in elections; but material self-interest plays a smaller role for a significant number of voters than might be expected. There is a particularly rich body of research on this in the context of American politics. The paradox whereby many people will cast their vote for a representative or leader on grounds far removed from their immediate economic interests is well summed up by Drew Westen, a clinical psychologist and political strategist: ‘How gay people express their commitment to one another doesn’t affect the marriages of 95 per cent of Americans, who aren’t likely to start dashing off with their fishing buddies in droves if given the opportunity to tie the gay knot. Whether a few dozen murderers a year get a life sentence or the chair doesn’t make much difference to the day-to-day experience of most of us.’96 What is remarkable, Westen suggests, is the extent to which emotional reaction on such social issues influences many American votes. That is in spite of the fact that what affects people’s everyday lives much more is ‘who gets tax breaks and who doesn’t; whether they can leave one job and begin another without fear of losing their health insurance because of a pre-existing condition; whether they can take maternity leave without getting fired’.97
INSTITUTIONS OF LEADERSHIP
I have already made the point that leaders in the purest sense of the term are those who attract followers and make an impact on society and politics while not holding any vestige of state power. Mahatma Gandhi during India’s quest for independence from Britain, Nelson Mandela in the South African anti-apartheid struggle for majority rule, and Aung San Suu Kyi as the acknowledged leader of the campaign for democracy in Burma are outstanding examples from the twentieth and twenty-first centuries.98 And leaders such as these are surely no less deserving of the adjective ‘great’ than monarchs in earlier centuries who were given that accolade on account of military victories, however inadequate ‘great man’ (or great woman) narratives may be as exclusive, or general, explanations of historical change.
Even for these three leaders, however, institutions – albeit nongovernmental – have mattered in the furtherance of their cause. Gandhi became head of the Indian National Congress, the main institution of opposition to British rule long before it became a governing party in independent India. Mandela was the most renowned figure in the leadership of the African National Congress, the organization that led the struggle against institutionalized white supremacy in South Africa over many decades until, eventually, it had the opportunity of forming a government. Aung San Suu Kyi has been the longstanding leader of the National League for Democracy, an organization that had to resort to an underground existence for years on end under Burma’s oppressive military dictatorship. Yet these leaders needed neither patronage nor governmental power to bolster their moral authority and political appeal.
Most political leaders who become renowned at a national level in their own countries are not like that. Their leadership is very dependent on the office they hold, most obviously as head of the government, whether President, Prime Minister or (in the case of Germany) Chancellor. Even talented politicians with a strong personality may achieve notable success in one office and find themselves powerless to influence events in another. The institutional setting, and its scope or limitations, more often than not determines what they can do. Some leaders, however, find ways of expanding their influence, even from relatively unpromising offices. Lyndon B. Johnson, as majority leader of the US Senate from 1955 (and before that minority leader), overcame the constraints of the seniority system (less flatteringly known as the ‘senility system’), whereby promotion to committee chairs depended on how long someone had been in the Senate. Johnson, by mixing persuasion, inducements and sometimes intimidation, was able to fill slots on key committees and win votes as a ruthlessly effective Senate leader. Indeed, he virtually reinvented legislative leadership. In the words of his outstanding biographer, Robert A. Caro, he bent to his will an institution that had been ‘stubbornly unbendable’ and was ‘the greatest Senate leader in America’s history’. He was ‘master of the Senate – master of an institution that had never before had a master, and that . . . has not had one since’.99 Later, as US president, he became that rare thing – a redefining leader (discussed in Chapter 3). He left a much greater legislative legacy than his predecessor, John F. Kennedy. In particular, Johnson was able to get civil rights legislation approved that went far beyond what Kennedy was capable of persuading Congress to pass. Johnson’s achievement in the White House depended not only on his tactical acumen and virtuoso cajolery but also on a combination of his consummate Senate know-how and presidential power.
Yet in between holding the Senate leadership, which he had turned into a major power base, and (as a result of Kennedy’s assassination) acceding to the presidency, Johnson had been Kennedy’s vice-president. The charisma which Johnson appeared to radiate as Senate Majority Leader, and which was to reappear in the earliest months of his presidency, was obscured to the point of non-existence in the early 1960s when he was vice-president. In that role he was frozen out of the inner circle of most significant decision-makers. That circle included the president’s brother, Robert Kennedy, whose hatred of Johnson was heartily reciprocated. Johnson’s leadership talents had no chance to emerge, so severely were they limited by the office he held. An earlier Texan vice-president, John Nance Garner, had described the job as not worth ‘a bucket of warm piss’.100 Johnson himself added:
The vice-presidency is filled with trips around the world, chauffeurs, men saluting, people clapping, chairmanships of councils, but in the end it is nothing. I detested every minute of it.101
An American vice-president can become a hugely influential figure – another leader, in fact – but only if the president chooses to repose great trust in him, as George W. Bush did with Dick Cheney.102 For Johnson, in harness with Kennedy, it was a very different story. While Johnson had been wrong in imagining that much of the authority he had acquired in the Senate would be transferable to the vice-presidency, he had also made another calculation which turned out to be more realistic. Convinced that no candidate from a southern state would be elected President during his lifetime (the last one, Zachary Taylor, had been elected in 1848), he noted that one in five presidents had acceded to that office on the death of the elected incumbent. When Kennedy, aiming to strengthen his electoral chances in the south, invited the Texan to be his running-mate, Johnson (who had aspired to the presidency from an early age) reckoned those odds were as good as he was now likely to get.103
Institutions are both enabling and constraining. They help leaders to get policy implemented. Their rules, procedures and collective ethos, however, limit a leader’s freedom of action. An American president has more power within the executive than is normally the case for a prime minister in a parliamentary system. Johnson, like Franklin Delano Roosevelt, was among those who used it to the full. Yet, in comparison with a prime minister whose party has an overall majority in parliament (as is usual in Britain, the coalition government formed in 2010 being the UK’s first since the Second World War), the president is much weaker vis-à-vis the other branches of government – the legislature and the judiciary. Johnson’s vast Senate experience, allied to the vice-presidency, availed him nothing. But when as president he called every senator in turn, it counted for a great deal. Moreover, the US president is head of state as well as head of government and, as a result, has traditionally been treated with more deference in interviews and press conferences than a British prime minister, not to speak of the way the latter may be scorned at question-time in the House of Commons. The especially strict separation of powers in the United States has an effect on the way that presidential leadership is exercised. Hence the use of the presidency as a ‘bully pulpit’, appealing to the public over the heads of other branches of the political system in the hope of persuading voters to put pressure on Congress. Franklin D. Roosevelt and Ronald Reagan, in their different ways, were effective practitioners of what, as noted in the previous chapter, Truman regarded as the president’s main power – the power to persuade.
Leaders and Political Parties
In a democracy, a head of the executive who leads a political party has the backing of its organization and the advantage of its campaigning support. He or she had better, however, take account of opinion within the party – and in the parliamentary party in the first instance – if the relationship is to remain a happy one. It is because being a party leader in a democracy means persuading senior party colleagues and the broader membership that a policy is desirable, rather than simply decreeing it, that the party role is constraining as well as enabling. A party leader who espouses policies at odds with the core values of the party or with overwhelming party opinion on any particular issue is courting trouble. For the President of the United States, the constraints imposed by his own party are generally less than in parliamentary democracies, although they are not absent. Thus, President George H.W. Bush deemed it necessary to impose a lengthy pause in the constructive and increasingly friendly relationship with Gorbachev’s Soviet Union which had been developing under his predecessor, Ronald Reagan. Brent Scowcroft and his National Security Council staff set up a series of policy reviews with the aim of showing that Bush’s foreign policy would not simply be a continuation of Reagan’s. Condoleezza Rice, who managed two of the reviews, said that the purpose was ‘in the case of European and Soviet policy, to slow down what was widely seen as Ronald Reagan’s too-close embrace of Mikhail Gorbachev in 1988’. Only the subsequent ‘rapid collapse of communism got our attention in time to overcome our inherent caution’.104
In the view of the American ambassador to Moscow, Jack Matlock, it was not simply a matter of the wrong experts giving the wrong advice in Washington, but Bush’s need to shore up his political support where it was weakest. Whereas Reagan’s good standing with right-wing Republicans had left him more (though not entirely) immune from criticism from within his own party, Bush, as Matlock put it, felt a need ‘to reassure the right wing of the Republican Party’ and ‘put on a show of toughness to insulate himself from right-wing criticism’.105 While foreign policy issues do, in some cases, still divide the parties, they are less prominent than during the Cold War. The growing salience of social issues in American politics – abortion, school prayer, gay marriage – has contributed to a weakening of party structures.106 Even before those trends became pronounced, the American comedian Will Rogers remarked: ‘I do not belong to any organized political group – I’m a Democrat.’107
Other than through impeachment, American presidents cannot be removed between elections. Prime ministers in parliamentary democracies have no such guarantee. If they lose the confidence of their party, especially the parliamentary party, they can be replaced. Mobilizing a large enough group to challenge a leader is a simpler task if only the parliamentary party has a vote on the leadership, as distinct from an electoral college comprising a wider electorate, including rank-and-file party members. Australia is a striking example of a country where these decisions have been exclusively in the hands of members of parliament and where there has been no shortage in modern times of party leaders being forced out by their own party, even when the leader in question was prime minister.108
The most recent instance was the replacement of Julia Gillard by Kevin Rudd as Labor leader, and hence as prime minister, in June 2013, thus reversing Rudd’s ouster by Gillard, who was deputy leader at the time, just three years earlier.109 Following his removal as party leader and prime minister in 2010, Rudd went on to serve as foreign minister, but resigned that post in February 2012 and provoked a leadership contest in an attempt to regain the premiership. He was comprehensively defeated by Gillard, even though Rudd was by this time more popular in the country than was Australia’s first woman prime minister. Senior ministers attacked Rudd’s record and style as prime minister ‘with a candour and vehemence’ which suggested that ‘the majority of his cabinet did not want him as prime minister under any circumstances’.110 Still not accepting Julia Gillard’s leadership, Rudd and his supporters mounted another challenge just over a year later. Minutes before the vote was to be taken in March 2013, however, Rudd ‘announced he would not run, saying he did not have the numbers’.111 He also said that this would be his last attempt to regain the party leadership. Yet, a mere three months later, convinced that he now did have the numbers, Rudd renewed his challenge and won the party vote. A Chinese-speaking former diplomat, Rudd is regarded as ‘ferociously bright’, but his ‘autocratic leadership style’ when he was prime minister earlier led to his being ‘despised by large sections of his own party’.112
As was entirely predictable, Labor’s change of leadership did not affect the overall outcome of the general election when it took place in September 2013. Immediately after his return to the prime ministership in late June, Rudd was ahead not only of Gillard in the opinion polls but also of the Leader of the Opposition, Tony Abbott, although as a party Labor still trailed, albeit with the gap temporarily narrowed. By the time of the election in early September, Abbott’s ratings were higher than Rudd’s, but in neither case was the popularity or unpopularity of the leader decisive. The vote was against the Labor government at a time when it had been weakened by the extent of its public infighting and when Australia’s sustained economic success had begun to show signs of fragility. The opposition Liberal Party was able to make the most of these issues, striking a chord also with its harder line on immigration. Rudd’s return to the premiership had proved to be singularly pointless, dividing his party once again and failing to impress the country. In the wake of the electoral defeat, he announced his resignation as party leader.
Rudd’s problems during his first stint as premier were foreshadowed by his announcement that when in government he, and not members of the parliamentary party, would choose members of the Cabinet.113* The change in Australia was criticized on the grounds that it turned both Cabinet members and those who aspired to governmental office into ‘sycophants’. An Australian senator observed that ‘under the old system, everybody owned the front bench. At the moment, the front bench is wholly and solely the property of the leader.’ A Cabinet minister who served during Kevin Rudd’s first premiership said: ‘In his [Rudd’s] perfect world, he would have decided everything himself.’114 More complex voting systems for choosing a new party leader provide somewhat greater protection for heads of government in many other parliamentary democracies, but even those leaders put their future in jeopardy if they lose the support of their parliamentary party. It is, therefore, unwise as well as undemocratic for a prime minister to wish to decide everything himself or herself.
It is because they do not wish to be hemmed in by their senior colleagues and, still less, by rank-and-file party members that some leaders, whose commitment to democratic norms is less than wholehearted, make a virtue of not joining a political party. This is extremely rare in an established democracy. General Charles de Gaulle is the exception who proved the rule – not only by being ‘above party’ but by ultimately enhancing, rather than undermining, French democracy. Leaders professing to be above party are more liable to be found in countries emerging from authoritarian rule and their distancing themselves from party helps to ensure that the transition from authoritarianism is, to say the least, incomplete. Boris Yeltsin and Vladimir Putin in Russia each made much of the boast that they were president of the whole people and not shackled or tainted by party membership. In so doing, they unknowingly, or knowingly, did a disservice to the development of democracy in post-Soviet Russia. (Putin was, for a time, the designated leader of the pro-Kremlin political party, United Russia, but without actually joining it.) A president or prime minister in a democracy is no less the national leader, acting in the interests of the people as a whole as he or she perceives them, for belonging to a political party. It is not a chief executive’s party membership that is a threat to an emerging democracy, but weak or ineffective political parties. And for the head of the government not to be a party leader, or even a party member, devalues political parties and hence democratic institution-building.
Leaders and Forms of Government
Institutions clearly make a difference to what leaders can do and leaders’ choices have an impact on institutions. What form of government – whether presidential, parliamentary or semi-presidential – a country in transition from highly authoritarian rule chooses is of some consequence. There is a large literature on the relative merits of presidential and parliamentary systems for the development of democracy. The bulk of the evidence suggests that parliamentarism is more conducive to the flourishing of democracy than either a presidential or a semi-presidential system, the latter being one in which the highest executive power is divided between a president and a prime minister.115 Semi-presidential systems occupy an increasingly important place in the constellation of governments. More than fifty countries have such dual executives.116
Moreover, within these dual executive systems, there is an important distinction between the countries in which the prime minister and cabinet are responsible only to the legislature and those in which the prime minister and cabinet are responsible both to the president and to parliament. It is the latter type, in which the president is much the stronger partner, that is mainly responsible for the statistics that show semi-presidential regimes to be less democratic than parliamentary systems.117 In a semi-presidential system that is, nevertheless, democratic there is the possibility of awkward ‘cohabitation’ – a president who was elected at a different time from the legislature having to find a way of working with a prime minister and a parliamentary majority of a different political persuasion. That can lead to tension that is potentially destabilizing for the system, although the French Fifth Republic has survived such electoral outcomes remarkably smoothly.
In Russia, in contrast, parliament was gradually reduced to a condition of docile deference and dependency during the presidency of Vladimir Putin. Earlier, the system generated serious conflict between the legislature and the executive, with Boris Yeltsin employing tanks and shells to quell the most intransigent of his parliamentary opponents in 1993 – an extreme version of ‘strong leadership’ that elicited hardly a murmur of criticism from most Western governments. This was, in fact, a fateful step towards restoration of ‘strongman’ government, taking Russia in a more authoritarian direction. The choice of Putin as Yeltsin’s successor consolidated a trend that was already underway.118 This raises, however, a chicken-and-egg question: does a strongly presidentialized semi-presidentialism itself produce an excessive concentration of power in the hands of the chief executive, or do leaders and political elites in countries with a tradition of authoritarian rule opt for such a system in the first place? We have to be careful not to make institutional design explain too much. Indeed, the Russian tradition of personalized power meant that when Putin ceded the presidency to his protégé Dmitriy Medvedev for a period of four years, because the constitution did not allow him more than two consecutive terms, he remained in political reality the stronger partner while holding what had hitherto been (and has again become) the less powerful post of prime minister within the dual executive.119 Putin was the patron, Medvedev the client, and everyone knew it.
*
Leaders everywhere operate within historically conditioned political cultures. In the way they lead, they cannot rely on reason and argument alone, but must be able to appeal to emotion, sharing in the sense of identity of their party or group. In government, the minority of leaders who come to be revered and who retain the admiration of posterity are those who have also fostered a sense of purpose within their country as a whole, who have provided grounds for trust and have offered a vision that transcends day-to-day decision-making. There are, though, many different styles of leadership within democracies and even within authoritarian regimes. The personality and beliefs of the leader matter – and some leaders matter much more than others. That does not mean that the more power the leader accumulates in his or her own hands, as distinct from those of governmental colleagues, the more outstanding is the person and the more effective the leadership. It does not imply, in other words (as I argue in greater detail in other chapters), that the optimal model for a head of government is that of the leader as boss.
* Similarly, Smith’s defence of political and economic freedom had nothing in common with an indiscriminate defence of business interests. On the contrary, it was Smith who wrote: ‘People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.’ (Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, edited by R.H. Campbell and A.S. Skinner, Clarendon Press, Oxford, 1976, Vol. 1, p. 145.) For twenty-first-century examples of the kind of phenomenon Smith had in mind, we need look no further than the world of high finance and the cosy relationships involved in determining remuneration of those at the top of their hierarchies.
† The last American president to be elected who was below the height of the average American man was William McKinley at the end of the nineteenth century. (See Tim Harford, ‘Since you asked’, Financial Times, 11 May 2013.) In American presidential elections since then, victory has gone to the taller of the two main candidates approximately 60 per cent of the time. The point about the successful candidate being taller than the average male in the United States over the past 110 years probably relates at least as much to the relatively privileged social background of a majority of presidents (although with some notable exceptions) as compared with most Americans. Insofar as the generalization that height matters has any merit at all, it refers to leaders who are chosen by a wider group – a tribe, a political party or an electorate. The most frequently cited counter-examples of leaders who are small in stature are of authoritarian rulers, and thus of no relevance to the issue of the electoral advantage of greater height. Famous small leaders include, for example, Napoleon Bonaparte, Josif Stalin and Deng Xiaoping as well as hereditary monarchs such as Queen Elizabeth I and Queen Victoria.
* John Millar, one of the more radical representatives of the Scottish Enlightenment and a fierce opponent of slavery wherever it was to be found, did not feel the need to alter any word of a paragraph he first committed to print in 1771 when he published the third edition of his Origin of the Distinction of Ranks in 1779, three years after the American Declaration of Independence. Nor would the American Constitution subsequently diminish the force of his argument about the gulf between rhetoric and reality. Millar wrote: ‘It affords a curious spectacle to observe that the same people who talk in a high strain of political liberty, and who consider the privilege of imposing their own taxes as one of the inalienable rights of mankind, should make no scruple of reducing a great proportion of their fellow-creatures into circumstances by which they are not only deprived of property, but almost of every species of right. Fortune perhaps never produced a situation more calculated to ridicule a liberal hypothesis, or to show how little the conduct of men is at the bottom directed by any philosophical principles.’
* There are, though, limits to this when a candidate’s performance falls well short of expectations. In early October 2012, in the first of the three televised presidential debates of the campaign that year, Barack Obama’s performance was unusually lacklustre. By a substantial majority, viewers thought that Mitt Romney had done the better of the two. Romney also got a significant bounce in the polls tracking voting intentions. (Financial Times, 6–7 October and 8 October.) In the remaining two debates, with Obama more than holding his own against Romney, perceptions of who prevailed once again tended strongly to reflect the viewer’s political predilections.
* In Britain both Labour and Conservative prime ministers select their Cabinet colleagues, but in opposition the Labour Shadow Cabinet was until 2011 elected by the parliamentary party. A year after Ed Miliband succeeded Gordon Brown, this choice was placed in the hands of the leader.