2

The federal government: “Separated institutions sharing powers”

When Edward Kennedy died in August 2009 he was widely eulogized as the “lion of the Senate.” He had become the patriarch of one of America’s foremost political clans after two of his elder brothers had been assassinated. Jack and Robert Kennedy were also Senators. One went on to become President, the other died while campaigning for the White House. After his own failed bid for the Democrats’ presidential nomination in 1980, Ted Kennedy had remained the Senator from Massachusetts, a position he held for forty-seven years. In many ways, as his obituaries pointed out, he had been able to achieve more than either of his siblings, not simply because their lives had been cut tragically short, but because, unlike them, he had remained in Congress: the place where America’s laws are made.

If the Executive is about political leadership, the Legislature is about political outcomes: it is where the business of American government can continue under the watchful eye of the Judiciary, acting as the guardian of the Constitution. Politics and government in the United States intersect in the framing of laws, in the constantly shifting balance of power between institutions, in the perennial problem of presidential power and in the important area of judicial review. American government takes shape in an arena of constant negotiation and compromise between these “separated institutions sharing powers.” The political process may at times seem messy, frustrating and unpredictable, but therein lies its fascination.

JUDICIAL REVIEW

The doctrine of judicial review is an American invention that has had a profound impact in defining the role of the Judiciary within the United States. The courts have the final say as to whether actions of the Legislature and the Executive conform with the legal framework set out in the Constitution and its subsequent amendments. The doctrine cements the Supreme Court at the core of the American political system through its power to interpret the Constitution and its commitment to upholding the principles of the rule of law and limited government.

Congress is unreliable. It doesn’t always do what the President wants. Indeed, it is always possible and frequently the case that a Chief Executive is confronted by the opposition party controlling either the House of Representatives, or the Senate, or both. In the last fifty years, divided government has been the rule rather than the exception in Washington DC. Moreover, even if the President’s party does predominate in both the House of Representatives and the Senate, there is no guarantee that the Legislature will enact the Executive’s proposals. Party fortunes fluctuate. The entire House and one-third of the Senate face the electorate every two years and in Congress political perspectives and loyalties are shaped accordingly. Congress is a consistent complication each President must face because it is independently powerful, separately elected and fiercely protective of its constitutional position.

It is in the changing political dynamic between President and Congress that the constitutional principles of the separation of powers and checks and balances thus find their constant expression. James Madison and his contemporaries envisaged that Congress would have the central role in the nation’s government, but they also ensured that it was not left entirely to its own devices. The Constitution makes it clear that “All legislative Powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives” at the same time as it requires the President to “give to the Congress Information of the State of the Union, and recommend to their Consideration such Measures as he shall judge necessary and expedient.” Nowadays, having become the focal point of the federal government, the President’s task is to work with Congress to try to persuade it to support the Executive’s political agenda. At the same time, the White House can also refuse to accept the laws which are passed on Capitol Hill.

The veto power

Every act Congress passes must be presented to the President before it can become law. The Constitution gives the Executive ten days (Sundays excepted) in which to respond. If the Executive disapproves of any measure, it is sent back to the Legislature. That is not the end of the matter. Congress can then reconsider and if a two-thirds majority approves, the law is enacted despite the President’s veto. If Congress adjourns within the ten-day approval period, however, the White House has a political trick up its sleeve. A measure can fail to be enacted simply because it has been neither signed nor returned. Since much legislation tends finally to emerge from Congress under the pressure of an imminent vacation, the presidential “pocket veto” is another way of preventing a law reaching the statute books since it gives the Legislature no opportunity to overturn the Executive’s action.

“Pocket vetoes” can cause political controversy. In November 1983, President Reagan used the power to strike down a law that prevented military aid being given to El Salvador unless his administration could show that it had improved its record on human rights. However, Congress disputed his action, arguing that since law-makers had been merely between sessions, technically Reagan could not use the “pocket veto.” Legal wrangling continued until the case finally reached the Supreme Court in 1987. The Court refused to make a definitive ruling. By that time, the original bill had expired and so it did not make much difference whether or not it had been passed into law. The Court neatly stepped through a political mine-field which might potentially have set it at odds with the other institutions of the federal government.

One way of assessing the health of the relationship between the Executive and the Legislature and who is in the political ascendancy is by seeing how much congressional legislation a President vetoes. In the early years of the Republic, the power was used sparingly: George Washington vetoed two measures and his five immediate successors exercised the option only a further eight times between them. Andrew Jackson, who was more aggressive in his assertion of presidential prerogatives, used his veto power on twelve occasions. Typically, however, presidential objections were infrequent. Moreover, the Legislature usually deferred to the Executive when laws were returned unsigned. Before the Civil War, Congress overturned just six of the fifty-two presidential vetoes.

After 1865, as Congress moved to re-assert itself following the period of national emergency when it had largely acquiesced to President Lincoln’s leadership, his two immediate successors reacted by vetoing an unprecedented number of measures. Between them Andrew Johnson and Ulysses S. Grant refused to approve legislation on 122 occasions. Johnson, who so antagonized Congress that he was eventually impeached, still holds the record for the number of times – fifteen – that Congress over-rode his objections. The potential for the White House and Capitol Hill to be at loggerheads became an ever-present political reality. In the late nineteenth century, in his two separate terms in office, Grover Cleveland objected to 584 measures sent to him by Congress and was, at the time, by far the most prolific Chief Executive in his use of the presidential veto.

In the twentieth century, Presidents were typically more prepared to defy Congress than were their nineteenth-century predecessors (Cleveland apart): an indication of their increasing involvement in shaping the nation’s legislative agenda. Indeed, while he was in the White House, Franklin Roosevelt used his veto power as one way of reminding Congress as to who was really in control of the national government. His message was effective: only on nine out of the 635 times Roosevelt objected to its legislation did Congress persist in enacting a law.

The veto power is potentially the final hurdle that has to be negotiated before a law can be placed upon the statute books. It represents the President’s weapon of choice when faced with a legislative initiative that the White House finds unacceptable. But Congress too has a formidable arsenal at its disposal that it can deploy to disrupt the Executive’s efforts to enact its political agenda. For a start, it has the power of the purse. The House of Representatives approves the raising of taxes and between them both Houses of Congress agree how the money is to be spent. The President’s budget for the activities of the federal government has to be approved by the Legislature, which can shape it, change it or reject it entirely. Moreover, law-making in America is a tortuous process: procedural rules in Congress mean that initiatives, whether they come from the Executive or from within the Legislature itself, have to take their chances in the obstacle race that has to be run before they can be enacted.

Congressional committees

If Congress is the most powerful independent Legislature in the world, its committees are the guardians of its prerogatives. They determine what is done, and more often what is not done. The committee system developed in the House of Representatives as a pragmatic response to the chamber’s growing size as America’s population increased and more states joined the Union. By the beginning of the twentieth century, Woodrow Wilson, then president of Princeton University rather than of the United States, summed up the problem and its solution:

A numerous body like the House of Representatives is naturally and of course unfit for organic, creative action through debate … It organizes itself, therefore, into … standing committees permanently charged with its business and given every prerogative of suggestion and explanation, in order that each piece of legislative business may be systematically attended to by a body small enough to digest and perfect it.

Although the Senate is far smaller than the House, it too has developed its committee system. In both Houses of Congress committees are a natural organizational response to the issue of legislative efficiency. But they also shape the political landscape on Capitol Hill. Every aspect of their conduct is politicized: from who chairs them and who is assigned to them to the legislative tasks they are given and how they perform their functions. They can be criticized on the one hand as products of patronage or powerful political fiefdoms and on the other as diverse outposts of political authority that undermine the capacity of the Legislature to act as a unified whole. Yet Congress cannot function without them. Wilson indeed observed that “the business of the House is what the committees choose to make it” and the same is true of the Senate. Of the thousands of bills and resolutions proposed each year, it is the committees which ultimately determine the legislation that Congress debates, passes and sends to the President for approval.

Once a bill has been proposed in either the House of Representatives or the Senate, it is allocated to a congressional committee for detailed consideration. This is typically a straightforward matter; for example, bills that involve taxation will automatically be referred to the Ways and Means Committee of the House or the Finance Committee of the Senate. However, when it is not obvious where a proposal should go, the Speaker of the House and the Senate Majority Leader have the power to decide which committee will consider it. That decision can dramatically increase or decrease a bill’s chances of becoming law. Indeed, it will often be known in advance which committee will look favorably on a particular measure, and so the choice of destination is itself a political decision. Nine out of ten bills stall at the committee stage, never emerging to be voted on by either the House or the Senate as a whole. The committee system creates an obstacle course for legislation. President Obama’s healthcare proposals had to be negotiated through a number of congressional committees in both the House and the Senate, each of which could delay its progress, before the landmark legislation was finally agreed and passed into law.

Congress is not only a law-making institution. In its investigative role, it can turn the spotlight of scrutiny on areas of public concern. It also holds the Executive to account for its actions. Sometimes these two functions overlap. Although it was two indefatigable journalists – Bob Woodward and Carl Bernstein – writing for the Washington Post who unearthed Richard Nixon’s conduct during the Watergate scandal, the President’s political support in Congress was crucially undermined by the revelations brought to light in the hearings of the Senate Select Committee on Presidential Campaign Activities chaired by Sam Ervin. Ironically, Nixon’s political career had been given an initial boost as a newly elected member of the House of Representatives when he had been an investigator rather than the investigated. Assigned to the House Committee on Un-American Activities, he had pursued the case of Alger Hiss, who had served in Franklin Roosevelt’s administration, and had eventually exposed him as a former member of the American Communist Party. It established him as a rising star in Republican politics. When he resigned rather than face the inevitability of a successful impeachment, Nixon may have reflected that congressional oversight can break as well as make political reputations.

Congressional investigations can also mold American political culture and public opinion. Joseph McCarthy used the Senate committee he chaired to foment the Cold War anticommunism of which the Hiss case had been a part. Before McCarthy’s allegations and accusations were shown to be largely absurd, he had briefly convinced many Americans that communist infiltration of the government, the entertainment industry and other aspects of national life was a clear and present danger to the survival of the republic.

McCarthy stifled dissent through exploiting communal fear. A decade later, the Senate Foreign Relations Committee, chaired by William Fulbright, played an important role in allowing public misgivings to be expressed about America’s war in Vietnam. In 2001, committees in both Houses of Congress launched investigations into President Clinton’s use of his power of pardon in the last days of his administration, particularly the case of Marc Rich, a fugitive from US justice whose wife had donated millions of dollars to the President and the Democratic Party. More recently still, in November 2008, the Senate Armed Services Committee chaired by Senator Carl Levin published its inquiry into the treatment of detainees in US custody as a result of President George W. Bush’s “war on terror.” It concluded that in the aftermath of the attacks of September 11, 2001, “senior officials in the United States government solicited information on how to use aggressive techniques, redefined the law to create the appearance of their legality, and authorized their use against detainees.” As the Ervin, Fulbright and Levin Committees show, responsibly used, the power of congressional investigation is vital in maintaining public confidence in the nation’s democratic institutions, monitoring the conduct of American government and holding its officials to account.

In addition to its legislative and investigative roles, the Senate also has an important part to play in ratifying treaties. Indeed, Woodrow Wilson should have known what he would be up against. In his series of lectures at Columbia University, subsequently published in 1908 (and still in print) as Constitutional Government in the United States, he argued that the Founders had intended:

that the Senate would give the President its advice and consent in respect of appointments and treaties in the spirit of an executive council associated with him … rather than in the spirit of an independent branch of the government, jealous lest he should in the least particular attempt to govern its judgment or infringe upon its prerogatives.

Yet by the time Wilson entered the White House, jealously independent is precisely what the Senate had become. So it was that in his losing battle to persuade it to ratify the Treaty of Versailles, Wilson would not only destroy his health. He also failed to secure what might otherwise have become the crowning achievement of his administration. He suffered a massive stroke during a nationwide speaking tour intended to drum up public support for the treaty. The United States also refused membership of the League of Nations, Wilson’s vision for a forum to resolve international conflict.

TREATY OF VERSAILLES AND THE LEAGUE OF NATIONS

The Treaty of Versailles concluded the First World War. President Wilson’s major contribution to it was the proposal for a League of Nations to act as an international forum for conflict resolution. Isolationists in the Senate rejected the Treaty out of concern that American membership of the League would drag the United States into future wars. The Versailles Treaty imposed a punitive settlement on the defeated belligerents and without American participation the League proved ineffective. Within twenty-five years, a resurgent Germany precipitated another world war and American soldiers were once more fighting on the battlefields of Europe.

The Senate’s rejection of the Treaty of Versailles remains an outstanding example of its power to influence United States foreign policy. It can also shape the contours of American domestic politics through its power to confirm or deny the President’s nominees to the Supreme Court, to the Cabinet and to other senior positions within the federal government. Given the significance attached to appointments to the Supreme Court, the political stakes surrounding nomination and confirmation are often at their highest. During the last thirty years, candidates chosen to fill vacancies have caused controversy on three occasions. In 1987 the Senate rejected Robert Bork, a Reagan nominee. Two other candidates have subsequently withdrawn themselves from consideration rather than be subjected to congressional scrutiny. After Bork, Reagan’s second choice, Douglas Ginsburg, requested not to be considered after facing allegations about his use of illicit drugs. In 2005 Harriet Miers, who had been nominated by George W. Bush, also withdrew after questions had been raised concerning her legal competence. Even the prospect of hostile confirmation hearings in the Senate may thus be sufficient to discourage potential nominees. The same is true for cabinet officials. Tom Daschle, President Obama’s first choice as Health and Human Services Secretary, withdrew his nomination following concerns about possible ethical violations and unpaid taxes, ironically stemming from his service in the Senate.

The Supreme Court is shaped by presidential nomination and Senate approval. It is an important part of the constitutional framework that requires the institutions of the federal government to come together in the common cause of framing, passing and adjudicating the laws which structure American public life. By establishing the institutions’ independence from one another while at the same time requiring them to interact, the Constitution creates a constantly changing political dynamic. Nowhere is this better seen than by tracing the shifting boundaries of presidential power.

The problem of presidential power

Only one President has ever entered office with both eyes fixed on the future rather than glancing occasionally over one shoulder at the past. George Washington had only to live up to his own reputation, rather than be judged against a standard of political leadership that many of his successors have aspired to reach but which only a handful have been deemed to achieve. Indeed, the fact that few Presidents are counted in his company is in some respects George Washington’s fault. On taking office in April 1789, his task was to map out the parameters of his powers. These had been only briefly outlined in the Constitution, partly because it had been anticipated that Washington could be trusted to take on this challenge. The new President observed correctly that “I walk on untrodden ground.”

After specifying the process by which the President is elected, the Constitution therefore defines merely one role, only one responsibility and just three formal powers, before outlining the circumstances under which the Executive can be removed from office. The role is as Commander-in-Chief of the nation’s armed forces; the responsibility is to report to the Legislature from time to time on the state of the union; and the powers are of pardon, of appointment to offices in the federal government and, in certain circumstances, to convene or adjourn Congress. It is not an exhaustive job description.

Many of Washington’s actions created a precedent. His acceptance speech on being sworn into office established the tradition of the inaugural address that launches a new administration. His decision to serve for two terms was respected by his successors (not all of whom achieved that ambition) until Franklin Roosevelt ignored it. Subsequently Washington’s view that it was enough to serve eight years in the presidency prevailed in a constitutional amendment ratified some 154 years after he left office. He established the Departments of State, the Treasury and War (absorbed after 1947 into what became the Department of Defense, but no less belligerent) as well as the Attorney General’s Office. These remain the key appointments in a President’s Cabinet. Washington’s “Farewell Address,” nowadays still read annually into the Senate record, endures as an influence in shaping American views of the nation’s rightful place in international relations. He is rightly regarded as a political colossus among his peers.

Washington, however, is unique. Famous for being first, significant for the precedents and traditions he established and created, his view of presidential power was nevertheless very much of its time: the Executive should provide leadership but try to remain “above politics.” Yet even he could not escape the fact that his position was political. As party divisions emerged in the new republic, the President had to take sides. When forced to decide whether to veto the establishment of a National Bank, Washington invited the opinions of Alexander Hamilton, his Secretary of the Treasury, and Thomas Jefferson, his Secretary of State. Political enemies, the two disagreed: Hamilton supported the bank, Jefferson did not. In eventually agreeing with Hamilton, the President became a partisan. It was not a role that Washington relished: it was with reluctance that he agreed to run for re-election and with relief that he relinquished the office at the end of his second term.

Moreover, the office that he gave up was not exactly large. Washington employed more workers at his private estate at Mount Vernon than he had to help carry out his official duties as President of the United States. Rightly celebrated for guiding the infant federal republic through its critical early years, Washington’s presidency affords an initial glimpse into the problem of presidential power. Constitutionally limited, it has subsequently expanded as America has demanded that its Chief Executive takes on an increasingly political as well as symbolic role as leader of the nation.

Among Washington’s peers, it was his immediate successor, John Adams, who confronted an increasingly partisan and hostile Congress without the reservoir of goodwill and prestige that had attached to his illustrious predecessor. Adams created another precedent: the first President to run for a second term and lose. It was left to Thomas Jefferson, the nation’s third Chief Executive, to explore the contemporary limits of presidential power, ignoring constitutional niceties in seizing the opportunity to add to America’s territory. The Louisiana Purchase in 1803 was not simply a shrewd piece of bargaining that enabled the United States to acquire from France the land that would fuel its westward expansion. In acting decisively without consulting Congress, Jefferson demonstrated the presidency could take the political initiative and provide leadership at a critical moment in the nation’s development. Congress subsequently ratified his action, but equally important was the fact that Jefferson, the apostle of limited government, had felt able to act at all.

LOUISIANA PURCHASE

The Louisiana Purchase dramatically increased the territory of the United States. The land bought from France for $15 million secured American trade through the port of New Orleans and far more besides. Although President Jefferson by his own admission acted beyond the limits of the powers granted to the President by the Constitution, he did so both in the interests of national security and to open up the potential for expansion to the west. Congress and the American people endorsed his action after it was announced, symbolically, on July 4, 1803.

Among those who occupied the White House after Jefferson and before Franklin Roosevelt, there were relatively few who, either through force of personality or necessity of circumstance, proved able to impose themselves on the office. Andrew Jackson exploited the fact that the Executive is the sole federal office that can claim to represent a national constituency and also a national mandate. He was able to dominate Washington politics to an unprecedented degree, winning important constitutional battles over the nullification crisis which threatened the survival of the union, and over the establishment of a national bank, which he implacably opposed. Historians of the period were right to call it “the Age of Jackson.”

Abraham Lincoln expanded presidential power in reaction to the secession crisis and a Civil War that threatened to sweep away the national government altogether. In the twentieth century, Theodore Roosevelt and Woodrow Wilson set the tone for presidential activism in domestic and foreign policy alike. Together they presided over the so-called “progressive era” when the federal government became more actively involved in promoting social and economic reforms to moderate the excesses of unregulated capitalist enterprise. It was Roosevelt who began to flex America’s increasing power on the world stage and it was Wilson who led the country into the First World War.

First elected in 1932, Franklin Roosevelt presided over the creation of the so-called “modern presidency” in response to the political challenges he faced as the United States confronted a series of national and international crises that started with the Wall Street Crash in 1929 and continued with the outbreak of the Second World War a decade later. Since FDR’s time as Chief Executive, moreover, it has been the President’s actions as Commander-in-Chief in times of war that have focused attention on the expansion of presidential power. As America’s intervention in the Vietnam War drew to its divisive and controversial close, the historian Arthur Schlesinger Jr., seeking a scapegoat, coined the term “Imperial Presidency” to describe an Executive that had slipped the chains of the Constitution to act in a manner that checks and balances had proven insufficient to control.

The Constitution reserves to Congress the power to declare war. Yet since Franklin Roosevelt’s request for it to do so following what he called the “date which will live in infamy” that marked the Japanese attack on Pearl Harbor, no President has asked the Legislature to decide formally whether to commit the nation to military action overseas. Instead, in Korea, Vietnam, the Persian Gulf, Afghanistan and Iraq – to name the most prominent examples of America’s involvement in international conflicts since the Second World War – Congress has provided supportive resolutions rather than fulfilling its constitutional role of debating and deciding on the merits of a presidential request to commit the nation to war.

Schlesinger argued that after a world war that had seen the Executive “resurgent” in accumulating the powers essential to achieve victory against Germany and Japan, followed by the conflict in Korea in which it had been “ascendant,” it was during America’s involvement in Vietnam that the presidency had become “rampant.” Lyndon Johnson and Richard Nixon claimed to act in the interests of national security, pursuing actions that were ultimately catastrophic not only for themselves but also for the nation. Johnson’s presidency was destroyed by Vietnam and Nixon’s political career ended in the ignominy of resignation to avoid impeachment. The “Imperial Presidency” crashed and burned in the flames of an unwinnable war in Southeast Asia and the all-consuming domestic scandal of Watergate.

Those who had initially exploited the potential inherent in the constitutional role of the President as Commander-in-Chief were Democrats. Franklin Roosevelt set the stage for this expansion of executive power that his last Vice-President and successor, Harry Truman, exercised in Korea in the early years of the Cold War. And it was Lyndon Johnson whose actions in Vietnam provoked Schlesinger’s critique. Since Richard Nixon quit the presidency, however, it has been the Republicans who have tried to re-invigorate presidential power. Gerald Ford, Nixon’s unelected successor, reacted against congressional attempts to increase its oversight of the Executive’s actions following the Watergate scandal and left office warning that the Presidency was “imperiled” because of them. During Ronald Reagan’s years in office, the Iran-Contra scandal – an effective privatization of American foreign policy that enabled the White House to fund a proxy war in Nicaragua through trading arms for hostages – demonstrated the extent to which the Executive sought to avoid the increased scrutiny of a Congress reluctant to support military intervention overseas. In his inaugural address, George H.W. Bush called for an end to the political divisiveness that had marked the relations between the White House and Capitol Hill, but that hostility only increased during Bill Clinton’s time in office, fueled by partisans from Bush’s own party who resented his defeat in the 1992 election. After the Republicans recaptured the White House in 2000, elements of the new administration were ideologically committed to the re-assertion of presidential power. Over the subsequent eight years, they renewed the fears of their political opponents that George W. Bush was a President who once again over-stepped the boundaries of constitutional restraint.

For many Republicans the primacy of presidential power rests on their conviction that the Constitution allows the development of the “Unitary Executive”: a presidency that in effect should have control of all the activities undertaken by the federal government. Dick Cheney was among those who advocated this perspective aggressively to promote the unchecked use of presidential power. He had been Gerald Ford’s White House chief of staff in the immediate aftermath of Watergate and, after serving as George H.W. Bush’s defense secretary, he re-emerged in 2000 as the Republican Vice-President – a position in which he exercised unprecedented influence. After the terrorist attacks of September 11, 2001, he became a driving force behind the assertion of executive power. The stage was set for another performance of the drama of the “Imperial Presidency” with a familiar final act: George W. Bush left office with an approval rating hovering around twenty-eight percent, far below LBJ’s forty-nine percent and almost as low as Richard Nixon’s twenty-four percent.

George W. Bush’s decisions in the aftermath of 9/11 – to declare a global “war on terror,” to invade Afghanistan and topple the Taliban regime and to fight a war in Iraq, the outcome of which slid rapidly beyond his control – shaped the subsequent course of his administration. It is unsuccessful military action that fatally corrodes the nation’s faith in a President’s capacity to lead. The contemporary problem of presidential power may thus be reduced to a straightforward observation. Presidential authority rests on political credibility. Once lost, it is difficult to regain. Moreover, taking the nation to war is a high-stakes political gamble.

When the nation faces a crisis it looks to its President to provide leadership. In such circumstances, Congress may defer to the Executive, as it did to Abraham Lincoln during the Civil War and to Franklin Roosevelt during the economic depression of the 1930s and then after the Japanese attack on Pearl Harbor had brought America into the Second World War. At the same time, however, if the President is judged to have overstepped his constitutional authority, Congress reacts against the institution as much as against its occupant. In this way, Barack Obama entered the Oval Office as the successor to a President whose conduct in office had provoked an outcry from his critics, wary that once more the Executive had assumed too much power. With the nation embroiled in unpopular wars, Obama could not expect the same degree of deference that had been allowed Lincoln as Commander-in-Chief, nor, with the global economic crisis unfolding, did he have the same dominant authority over Congress as had been enjoyed by Franklin Roosevelt. If his inheritance was the most toxic of modern times, his task was also the most complex: exercising presidential power within its constitutional limitations when the times called for decisive leadership.

Judicial power and judicial politics

The American Constitution details the powers of Congress, outlines the responsibilities of the President, but is undeniably vague when it comes to the role that the Supreme Court should play in the federal government. It merely announces that “the judicial power of the United States shall be vested in one Supreme Court and in such inferior courts as the Congress may from time to time ordain and establish.” It took time, therefore, for the Court to realize its potential in exercising its judicial power in ways that have profoundly impacted upon the nation’s politics and which continue to shape American society.

In 1789, the Senate’s first ever item of legislative business was to consider the Judiciary Act. The United States was divided into thirteen judicial districts organized into three “circuits.” The six members of the Supreme Court, one of whom became Chief Justice, were obliged not only to hold sessions in the nation’s capital but also to “ride circuit,” meeting twice each year in each judicial district. This practice continued for just over a hundred years. The peripatetic life required of the Justices, particularly during the period when traveling around the country was far from easy, was sometimes a disincentive to joining the Court.

Moreover, in the early years of the federal government, the Supreme Court did not have much to do. John Jay, the first Chief Justice, who served from 1789 to 1795, took time out in 1792 to campaign unsuccessfully to become Governor of New York. Two years later he was sent by George Washington to Britain to negotiate the treaty that bears his name, resolving some of the issues that had soured relations between the two countries following the end of the War of Independence. Indeed, many argued that Jay’s Treaty averted another war between the two nations. While he was abroad, his political aspirations were finally fulfilled: he was elected as New York’s Governor and was able to resign his judicial appointment.

When Jay left the Supreme Court, Congress was not in session and so an interim successor was appointed. In an early example of the recurrent phenomenon that Justices, once appointed, may behave in ways that Presidents might not predict when they nominate them, Washington’s choice to become the second Chief Justice, John Rutledge, made a controversial speech denouncing his predecessor’s treaty negotiations. Amid allegations of mental instability and alcohol abuse, he was not confirmed when the Senate eventually considered his appointment. He resigned after six months. Oliver Ellsworth, who succeeded Rutledge, emulated Jay in leading a delegation across the Atlantic in 1799, this time to France to undertake diplomatic negotiations with Napoleon. His health suffered as a result and in 1801 he too resigned.

Ellsworth quit the Court at a critical time. The President, John Adams, had been defeated in his bid for re-election and Thomas Jefferson, who had won the controversial election of 1800 and who was then branded by the defeated Federalist party as a dangerous radical, was about to enter the White House. Adams, however, could still nominate the new Chief Justice of the Supreme Court before he left office. He turned to his Secretary of State, John Marshall, whose appointment was ratified by the Senate on February 4, 1801, a month before Jefferson’s inauguration. So it was that Marshall, a distant cousin but a lifelong political opponent of the new President, administered the oath of office to Jefferson and began the process of molding the Court into a powerful counterweight to the other institutions of the federal government.

A landmark decision in the history of the Court is the one which was handed down in 1803 in the case of Marbury v. Madison. In one of his first opinions as Chief Justice, Marshall asserted the power of the Court to decide the constitutionality of the nation’s laws. Ironically it was his own lack of time-management skills while Secretary of State that precipitated the events leading to Marshall’s most significant judicial decision. After the election of 1800 and as the presidency of John Adams entered its final days, the Federalist party, having lost its grip on the White House, attempted to tighten its control over the Judiciary by appointing a number of judges to the federal bench. The process to create these so-called “midnight judges” was carried out hastily and haphazardly. In the Department of State, it was ultimately Marshall, continuing in this role while assuming his new position as Chief Justice, who was responsible for the necessary paperwork. As time ran out for the Federalists, one of their nominees, William Marbury, failed to receive the documents necessary for him to become a judge before the new Republican administration took office. As President, Jefferson withheld Marbury’s commission, arguing that he was not bound to honor the previous administration’s judicial appointments if they had not been properly executed in time. The new Secretary of State, James Madison, refused to deliver Marbury’s official papers, without which he could not take up his position as Justice of the Peace in the District of Columbia.

The Supreme Court was called upon to decide who was in the right and who was in the wrong. Marshall was confronted with a tricky political problem. If he agreed with Marbury that Madison was legally bound to deliver the papers, he risked an early and potentially explosive confrontation with the new President. On the other hand, to agree with the administration’s actions would diminish the status of the Court within the constitutional system. Marshall’s decision neatly finessed his dilemma. He argued that Marbury had a right to his commission but that the Supreme Court could not force Madison to deliver the necessary papers because if it did so it would itself be acting unconstitutionally.

This neat solution rested on Marshall’s interpretation of what the Court could and could not do. Marbury had taken his case against Madison directly to the Supreme Court. Marshall argued that was a mistake. The Constitution envisaged that the Supreme Court would not have original jurisdiction – deciding cases for the first time – except in a limited number of circumstances. Marbury had directly petitioned the Court to force Madison to carry out his responsibility, assuming that the Judiciary Act of 1789 had given it that power. That allowed Marshall to see a way out of his political problem. He argued an important legal and constitutional point. There was a conflict between what the Constitution allowed of the Court, and what the Judiciary Act required it to do.

The Supreme Court was primarily a court of appeal. So it should not assume the power of original jurisdiction in Marbury’s case in order to issue a writ against Madison. Although Marshall supported Marbury’s claim in principle, in practice, by declaring that the relevant section of the Judiciary Act under which his case was brought was in fact unconstitutional, he effectively avoided forcing Madison’s hand. More significant still was the legal precedent set by this decision. Marshall asserted that the Supreme Court should have the final say on the constitutionality of laws.

The decision in Marbury v. Madison therefore established two important principles that continue to define the authority and the role of the Supreme Court within the American political system: judicial review and the supremacy of the Constitution. Marshall annexed the power of judicial review to the Court simply by claiming that “It is emphatically the province and duty of the Judicial Department to say what the law is.” Moreover, “if two laws conflict with each other, the Courts must decide on the operation of each.” In Marshall’s opinion, “the Constitution is superior to any ordinary act of the Legislature” and so “the Constitution, and not such ordinary act, must govern the case to which they both apply.” The Chief Justice was able to define a role for the Court that cemented its status within the hierarchy of checks and balances necessary to the operation of the separation of powers. Now the Court would decide cases on the basis of its interpretation of the Constitution and would, if necessary, declare the actions of the Legislature and the Executive unconstitutional.

It was a power that to be effective was best used sparingly. In the sixty years after Marshall’s landmark decision, the Court used it on only one further occasion. In 1857, in the case of Dred Scott v. Sandford, Marshall’s successor, the southerner Roger Taney, who had been appointed by Andrew Jackson, delivered the Court’s opinion that the 1820 Missouri Compromise was unconstitutional. Intense political negotiations had gone on in Congress to draw the line as to where slavery might be extended in the western territories acquired through the Louisiana Purchase. Now, thirty-seven years after the event, the Supreme Court decided that Congress could not assume the constitutional power to legislate in order to prevent the spread of slavery. The decision inflamed the sectional tensions within the United States that eventually led to the outbreak of the Civil War.

The ramifications of the Dred Scott decision confirm, if such confirmation is needed, that the Supreme Court’s rulings can have important political consequences. That has been the case in particular when it tackles fundamental issues that impact on the liberty of the individual within American society. Consider civil rights. The Court’s decision in Plessy v. Ferguson (1896) legitimized the practices of racial apartheid in the American South that persisted, under the euphemism of “separate but equal,” until it overturned its own ruling in Brown v. Board of Education (1954).

In Roe v. Wade (1973), which found in favor of a woman’s right to abortion, the Court sparked the “right to life” versus “right to choose” debate that has been an often bitter and sometimes violent fault-line in American political discourse ever since that decision. During the 1980s, when President Reagan, a firm supporter of the “right to life,” appointed judges to the Supreme Court, he faced criticism that a “litmus test” for his nominees was their attitude on the issue of abortion. In 1983, while still President, Reagan took the time to send an article to the Human Life Review in which he argued that:

as an act of ‘raw judicial power’ … the decision by the seven-man majority in Roe v. Wade has so far been made to stick. But the Court’s decision has by no means settled the debate. Instead, Roe v. Wade has become a continuing prod to the conscience of the nation.

A few months after Reagan had left office in 1989, the Supreme Court delivered its decision in the case of Webster v. Reproductive Health Services. Sandra Day O’Connor, whom Reagan had appointed to the Court, was initially among the five justices who were in favor of a decision that would have overturned Roe v. Wade, but then changed her mind. By a five to four vote, the Court declined to overturn its Roe decision. In 1992, in its decision in Planned Parenthood v. Casey, the Supreme Court once again endorsed the central holding of Roe: that a woman’s right to abortion was constitutionally protected. O’Connor joined two other conservatives on the Court, David Souter and Anthony Kennedy, in arguing that to overturn the 1973 decision would cause “damage to the Court’s legitimacy and to the rule of law.” Yet this is a controversy that refuses to go away and one in which the Supreme Court has been and will be continually involved.

More recently, in 2008, in the case of Boumediene v. Bush, the Court ruled that the Bush administration could not deny those detained at Guantanamo Bay in Cuba the right to have their cases heard in federal courts: prisoners had constitutionally protected habeas corpus rights. A year later, however, as the Obama administration was becoming embroiled in controversy over how it might close the camp in Cuba, the Court refused to review a lower court’s dismissal of Rasul v. Rumsfeld, a case brought by several former detainees who alleged they had been tortured while in Guantanamo. On March 1, 2010, moreover, the Court announced that it would not hear a case scheduled for later that month concerning Chinese Muslims who had been imprisoned for eight years in Guantanamo. Kiyemba v. Obama raised the difficult issue of whether a detainee freed through exercising the right of habeas corpus could be released into the United States. Instead of debating that issue, the Court sent the case back to the lower courts. Such decisions show that what the Supreme Court will not consider may be as politically charged as its choice of the cases through which it tries to mesh the constitutional framework with the prevailing sentiments of American society.

Individual Supreme Court justices have often proven to be thorns in the side of Presidents – sometimes even of those who appointed them. In 1902, Theodore Roosevelt nominated Oliver Wendell Holmes Jr. to the Court. Holmes, who served with distinction for thirty years, nevertheless defied the President in his opinion on an anti-trust case, prompting Roosevelt to let it be known that “I could carve out of a banana a judge with more backbone than that.” In the 1930s, his cousin Franklin would find the federal Judiciary increasingly stubborn in opposing his New Deal legislation. However, his proposal to “pack the Court” with his supporters represented such a transparent attempt to short-circuit the Constitutional separation of powers that Congress refused to countenance it.

Who governs?

In 1878, the British Liberal politician William Gladstone, who with his conservative counterpart Benjamin Disraeli dominated the nation’s political life during the late nineteenth century, described the American Constitution as “the most wonderful work ever struck off by the brain and purpose of man. It has had a century of trial … and … has certainly proved the sagacity of the constructors and the stubborn strength of the fabric.” Yet would he have wanted to have been President of the United States of America rather than Prime Minister of Britain? For Gladstone also confessed himself puzzled as to “why the American people should permit their entire existence to be continually disturbed by the business of the presidential elections.” During his lifetime only six Presidents won re-election. James Madison was President when Gladstone was born in December 1809. On his death in May 1898, William McKinley was in the White House and Gladstone had lived through the administrations of twenty-one different Presidents. Contrast the Liberal leader’s own political career. He was first elected to the House of Commons in 1832, the same year that Andrew Jackson became President. He made his last speech there in March 1894 when Grover Cleveland was about to enter the second year of his second term in office. Moreover, for almost fourteen of his sixty-two years in Parliament, Gladstone was Prime Minister. In terms of his legislative longevity as well as the time he spent holding the highest executive office in the country, his experience, admittedly unique in British terms, is nevertheless unparalleled when compared with contemporaries across the Atlantic.

Time horizons are important. Since the twenty-second amendment was passed in the aftermath of Franklin Roosevelt’s precedent-breaking twelve years in office, Presidents entering the White House for the first time have known that, barring unforeseen tragedy, they have four years there. That may be followed by a fight for re-election. However long they are in the presidency, they must seek to impose their authority in a constitutional system expressly designed to frustrate the use of power in order to prevent its abuse. That impacts on their political careers. Among Franklin Roosevelt’s successors, from Harry Truman to George W. Bush, five out of eleven Presidents have been re-elected but only four have successfully completed two terms in office. If the contemporary President’s tenure in office is unpredictable beyond four years and impermissible beyond eight, so too are the outcomes when he presents his legislative agenda to Congress. Rarely has a President been able to cajole Congress – even one in which his party is in control of both the House of Representatives and the Senate – into unqualified support for his campaign commitments. It takes special circumstances (Roosevelt during the Depression) or a particular skill (Lyndon Johnson’s persuasive ability) for the President to consistently dominate the legislative process.

It follows that the key political relationship for any President is the one he forges with Congress. In this respect Barack Obama, the first President since John F. Kennedy to move directly from the Senate to the White House, demonstrated considerable political skill and sensitivity during his first months in office. The circumstances of his election and the political coalition that he mobilized behind his agenda for “change we can believe in” seemed to symbolize the end of the so-called “Reagan Revolution.” In 1980 the election of the former Hollywood actor to the White House ushered in an era of limited government and tax-cuts, finding favor with those former Democrats who had changed allegiance to the Republicans. Ironically, however, Obama’s and Reagan’s approaches to Congress proved remarkably similar. Rather than present the House of Representatives and the Senate with detailed legislation for them to approve, like his Republican predecessor, Obama set the broad parameters of policy and invited Congress to fill in the details. His initial overtures toward bi-partisanship in the face of the economic crisis that he had inherited on entering office proved unsuccessful. Nevertheless, the Obama administration was able to put together a congressional coalition that supported a “stimulus package” for the economy that the President was happy to adopt as his own and which he signed into law within a month of taking office. A similar strategy enabled Obama to claim success in achieving healthcare reform, finally approved by Congress in March 2010.

Perspectives are inevitably influenced by knowledge of the recent past and the remorseless electoral timetable. The President enters office with the concern that even if his party has a majority in the Legislature on the day of his inauguration, less than two years later, the November mid-term elections can change the political landscape dramatically. Bill Clinton in 1994 and Barack Obama in 2010 saw their party’s majorities in one or both houses of Congress evaporate during their first term in office, while Ronald Reagan in 1986 and George W. Bush in 2006 experienced similar reversals of political fortune during their second administrations.

With the entire House of Representatives elected every two years, time horizons there remain naturally short. Although incumbency brings advantages, individual members know that if they lose support within their districts, there are limited opportunities to regain it. Collectively, however, since 1900 both Republicans and Democrats have managed to retain control of the House for extended periods of time. Republican majorities held sway there for twenty-four of the first thirty years of the century. Thereafter came a period of Democratic dominance which lasted, apart from two brief interruptions, for sixty years. It was followed by the twelve years of Republican majorities in the House that occurred between the mid-term elections of 1994 and 2006. That history was shaped in no small measure by the electoral earthquake which took place in 1932 at a time of economic collapse and the onset of the Great Depression. Following their mauling at the polls, it took the Republicans fourteen years before they once again were able to command a majority in the House. Having won just 117 seats in the year that Franklin Roosevelt first captured the White House, and having been reduced to eighty-eight members in the House in 1936, the party took time to rebuild its electoral fortunes in the Legislature. The Republican mid-term victory in 1946, when the party gained fifty-five seats in the House of Representatives and twelve in the Senate, made them the majority party in Congress. Yet President Truman turned this political defeat to his advantage. Two years later he won an upset victory in his bid for re-election in a contest which Thomas Dewey, his Republican opponent, was widely expected to win. Truman accused the “do-nothing 80th Congress” of preventing him from enacting his legislative program. The electorate backed the incumbent of the White House in his battle with Capitol Hill. The Democrats took back control of Congress.

After sixteen of the twenty-five congressional elections held between 1950 and the end of the century, the President woke up to a House of Representatives controlled by the opposition party. In recent times, therefore, the dynamics of the relationship between the President and the House have been shaped by that prospect. Between the end of the Second World War and Obama’s election in 2008, only three Presidents – John F. Kennedy, Lyndon Johnson and Jimmy Carter; Democrats all – have had their party in the majority in the House of Representatives for the entire duration of their administrations. Kennedy’s presidency was tragically curtailed and Carter’s four years in the White House were marked by his failure to gain the support of Democrats in Congress. Only Lyndon Johnson, whose political skills had been honed by his time as majority leader in the Senate and who knew how to win support in Congress, was able to take full legislative advantage of this comparatively rare political occurrence in pursuing his ambition to create his “Great Society.”

Time horizons in the Senate are longer, as, once elected, Senators have six years in office. While one party’s ability to control both the Executive and the Legislature simultaneously has proven increasingly difficult to sustain, within Congress, since 1900, divided government has been rare. Following the mid-term elections of 1910 and 1930 and then only for the life-time of a single Congress, the Republicans retained their majority in the Senate while losing it in the House. The two Houses of Congress were controlled by different parties after the 1980 elections when the Republicans, riding the coat-tails of Ronald Reagan, captured the Senate and remained in the majority there until the mid-term elections of 1986. In the 2010 mid-terms, the Democrats lost the House but retained control of the Senate.

Party majorities in the Senate are brittle, particularly in recent times. After the 2000 election, the Republicans and the Democrats had fifty seats each and tied votes were decided by the vote of the Senate’s presiding officer, Vice-President Dick Cheney. The Republicans then managed to improve their position after both the 2002 and 2004 elections. Two years later, the Democrats were able to regain almost all their lost ground and with the support of two independents reclaimed control of the Senate. The Republican electoral debacle of 2008 allowed their rivals to consolidate their position, although they still fell just short of achieving the elusive target of a filibuster-proof sixty-seat majority there: the number of votes needed to end debate and so prevent the minority party from exploiting the Senate tradition of unlimited debate to stop measures from ever coming to a vote. The Republicans fought back in the 2010 mid-term elections and gained an additional six Senate seats.

For the Judiciary, time horizons are different. In April 2010 the longest serving member of the Court, John Paul Stevens – appointed to it by President Gerald Ford and having served for over three decades – announced his retirement. At that time, five of the other current Supreme Court justices had been nominated by Republican Presidents and three by Democrats. John Roberts, appointed by George W. Bush to the Court at the age of fifty, is the third youngest Chief Justice in American history, after John Jay (forty-four) and John Marshall (forty-five when appointed). Like Marshall, he may seek to influence the Court’s direction for many years. Nevertheless, at present a delicate political balance has been maintained. Currently, Roberts allies himself with three fellow conservative justices, ranged against four liberal-leaning colleagues who now include Sonia Sotomayor and Elena Kagan, both appointed by President Obama. The “swing vote” may remain with Justice Anthony Kennedy, appointed by Ronald Reagan and generally thought of as a conservative, who on occasions may nevertheless agree with the liberals.

Deciding upon the composition of the Court thus involves the President reaching a political accommodation with the Senate in the knowledge that Supreme Court Justices may be the tortoises of the American constitutional system compared to the political hares in the White House. For Senators, the time horizons are different again. In 2005, one of the five members of the Senate Judiciary Committee who refused to support John Roberts’ nomination was none other than the late Edward Kennedy, who by then could himself look back at a political career not quite as long as Gladstone’s, but one which nevertheless approached half a century of public service. It proved to be one of Kennedy’s last congressional votes. Moreover, six months after he died, in January 2010, the Democratic party lost control of his Senate seat in Massachusetts. It was a sober political reminder that within the system of “separated institutions sharing powers,” when an election takes place, it is public opinion that matters most of all.