You know that I have little faith in political arithmetic and this story does not contribute to mend my opinion of it.
—Adam Smith, 1785, after Alexander Webster revised his estimate of the population of Scotland by one-sixth.
Much of the Stupid we discuss in this book is easily spotted by anyone with a functioning brain, even if we’ve sometimes become so inured to it that it takes some effort to separate it from its surroundings. But some forms camouflage themselves successfully as the very opposite of Stupid, masquerading as scientific and mathematical evidence, hiding behind numbers, lurking within data, requiring more than just close attention to identify. These forms take real work to unmask and expose, making them all the more dangerous. One of them is the systematic misuse of numbers to mislead and misinform public debate.
Let’s start with a contemporary example, one torn from the headlines of Sydney newspapers. It deals, after all, with an alarming problem. Violence on the nocturnal streets of Sydney was growing worse, caused by an epidemic of alcohol abuse—binge-drinking teenagers and aggressive, alcohol-fuelled males travelling into the city’s entertainment areas and looking for trouble, lashing out, assaulting others, often at random. Young men died, struck down with a single punch. The city’s doctors, sick of treating the victims of this rising tide of damage caused by alcohol, demanded action. The media joined the campaign, calling for tougher laws, heavier sentences and tighter restrictions to curb the epidemic of violence. Eventually, after an intense debate in the city’s newspapers in early 2014, the state government agreed and brought in a set of hard-line laws and new restrictions on alcohol consumption in Sydney. Informed public debate had led to the successful resolution of a vexing public policy issue.
There was only one problem: it wasn’t true. There was no rising tide of violence in inner Sydney—quite the opposite: the actual number of assaults (not the incident rate, which accounts for population changes) in inner Sydney had last increased in 2010 and had fallen more than 20 per cent since 2008. In particular, alcohol-related violence, crime statistics showed, had fallen precipitously in the city. The deaths of young men as a result of ‘king hit’ attacks, tragic and painful as they were, as agonising for their families and friends as they must have been, were increasingly atypical in a city and a state where violence had become significantly less common over a number of years—except for domestic violence, which remained a too-common feature of crime statistics and a too-rare subject of media interest. Moreover, as we saw in an earlier chapter, Australians’ alcohol consumption has been generally declining for nearly thirty years, and data showed more young people were not drinking at all and fewer were binge drinking. Data from the Australian Bureau of Statistics showed alcohol consumption in Australia in 2013 was at its lowest since the mid-1990s.
Journalists engaged in the media campaign avoided mentioning the contrary data about falling violence and less drinking, or declared it ‘confusing’ in a ‘lies, damned lies, and statistics’ way. One campaigning doctor admitted that the level of violence might not be getting worse, but its intensity was, a claim difficult to verify or even assess accurately. But nonetheless, public health lobbyists welcomed the tighter restrictions on drinking that resulted, and called for a national summit to discuss increasing the level of tax on alcohol, which in Australia is already high by world standards. Alcohol was ‘cheaper than water’, journalists and commentators repeatedly said, a claim that, like most of the others made in the course of the debate, could be easily proven false within thirty seconds online.
The whole campaign, in fact, had been invented by Sydney’s major newspapers—both of which, while we’re on the subject of statistics, had lost 15 per cent of their circulation in just one year prior to the campaign—in league with paternalist public health groups and anti-alcohol campaigners. The result of this media-induced wave of Stupid was some very ordinary policies: the government proposed the demonstrably flawed policy of mandatory prison sentences, introduced arbitrary restrictions on inner-urban drinking venues and planned to make the penalties around steroid selling the equivalent of hard narcotics offences.
Now, it would not be a major revelation in this history of Stupid to note that the media beat up stories around the topic of crime. And, it should be noted, as new forms of crime emerge, they receive the same Stupid treatment by the media as more traditional law-breaking. Take ‘cybercrime’, which is frequently labelled ‘the fastest growing form of crime’ by the media and politicians. In the same way as public health lobbyists earn a living offering solutions to ‘problems’ like alcohol, the media hype cybercrime with the aid of companies that make money from selling protection against it. In fact, there have been so many wildly overstated reports about the cost of cybercrime that in 2013 one cybersecurity firm, McAfee, apologised for and retracted a claim that cybercrime cost US$1 trillion globally, after issuing a new report suggesting the cost was less than a third of that. Not long after, Microsoft claimed the cost of cybercrime was actually $100 billion a year. And genuinely independent reports (that is, those not produced by firms that make money from selling cybersecurity products) showing the cost of cybercrime is falling, and falling significantly, are increasingly common.
As for the ‘fastest growing crime’ tag, the FBI used that description of cybercrime in the US a decade ago, and they haven’t used it since (because it’s not true), but it has taken on a life of its own since then, an unkillable factoid still to be found in politicians’ speeches, law enforcement statements and media reports.
And the Stupid gets worse—much, much worse—when it comes to one specific type of ‘cybercrime’: software and content ‘piracy’. File sharing has by itself generated a mini-industry dedicated to proving its profoundly damaging economic effects, despite content industries continuing to perform strongly. The US broadcasting industry’s revenue has risen 22 per cent since 2009 to hit a record US$121 billion in 2012. Worldwide movie ticket sales have soared in recent years—in 2013 they set a new global record of US$36 billion and ticket sales have risen 22 per cent since 2009—and while the music industry has seen steady decline in sales of recorded music in recent years, global live music sales have smashed previous highs, with an all-time record $4.8 billion spent in concert ticket sales in 2013, a 30 per cent increase on 2012. But according to a litany of industry-funded studies, file sharing is on the verge of destroying those industries. According to a study prepared by an economics firm for an Australian film studio body in 2010,* file sharing would result in the loss of more than 8000 Australian jobs in content industries that year and cost those industries $900 million. ‘Nation of unrepentant pirates costs $900m’ was the scolding headline in an ensuing media report.
But some basic checking would have raised significant questions about the report. Rather than falling, employment in both the creative and performing arts and motion picture and sound recording industry subsectors had, according to data from the Australian Bureau of Statistics, steadily risen over the decade to 2010, despite the alleged impact of movie piracy (first via pirated DVDs—a now almost-forgotten form of crime, like wreckers luring ships onto rocks—then online). In 1999, employment in Australia’s creative industries averaged 27,000 people over the year and motion picture production 22,000. In 2013, they averaged 41,000 and 27,000 jobs respectively. More Australians are employed making films and in the creative industries now than when, courtesy of a dirt-cheap Australian dollar, George Lucas made those wretched Star Wars prequels here.† Further, the numbers were different to those in another report by another economics firm for another content industry group mere months earlier, which concluded movie piracy cost Australia $551 million and caused the loss of 6000 jobs across the whole economy.
Okay, so you say 6000, I say 8000—there’s not such a big difference . . . but there had been another report four years earlier showing piracy cost the Australian movie industry just $230 million annually in Australia. However, the US authors of that report later admitted they’d got some basic calculations wrong and inflated their estimates. And when that first report, the 8000 jobs one, was eventually released for public scrutiny—the journalist who wrote the story at the time hadn’t seen the report, just been told what was in it—it was revealed the report was simply the application of data from a European report to Australia, without any attempt to use local data or take local circumstances into account. Worse, the source report—the European report, yes, I know it’s hard to keep track, but as I said, there’s an entire mini-industry producing these—had been discredited, particularly its prediction that piracy would cause the loss of 1.2 million jobs in creative industries in Europe. In 2013, the EU was lauding its creative industries as an important and, for the Europeans, all too rare, source of economic growth.
All right, all right, the blur of numbers is getting too much. And a generous soul, seeking to explain such inconsistencies, might think that there’s just something innately difficult to calculate about the internet, what with it being cyber and virtual and borderless and everything having to be calculated, presumably, in binary numbers. But increasingly we have a broader environment of all kinds of public debate in which statistics and economic data are routinely invented or manipulated, and contrary data ignored, in the service of a preferred narrative that suits specific interests, including the media’s. Across dozens of diverse public policy issues, we’re awash in all kinds of rubbish numbers about jobs and costs gleaned from misused data, absurd economic modelling and nonsensical reports produced by vested interests and handed to a gullible or collusive media for reporting.
Like claims of ever-rising crime, some numerical claims draw their strength from incessant repetition despite being contradicted by reality, and these routinely feature in the media and in political polemic. Like crime, suicide is regularly claimed by the media to be rising to ‘epidemic’ levels, when, as we have seen, it has fallen dramatically in Australia over the last two decades in the non-indigenous community, and remained at about the same (far too high) level among indigenous people. Since the Howard government’s hard-line industrial relations regime was replaced by a more employee-friendly system by a Labor government in 2008, employer groups and conservative politicians have regularly warned that it would lead to fewer jobs, lower productivity, unsustainable wage rises and union militancy. In fact, 2010 and 2013 both saw the second-lowest level of industrial disputes since records began in the 1980s; labour productivity grew significantly faster under the new system than under the Howard government, when it stagnated or fell; 2013 saw the lowest wages growth in Australia on record; and over 700,000 jobs had been created under the new system. Nonetheless, the warnings continue regardless of their inaccuracy, with talk of a ‘productivity crisis’ and a ‘wages explosion’.
But the primary abuse of numbers in public debate relates to their manipulation and invention. And so persistent and widespread is this form of Stupid that we can list the dodgy techniques that are used over and over again in public debate by those who try to use numbers in their own interests. These are just a few:
Norman Lindsay’s beloved Magic Pudding would always re-form into a pudding no matter how often he was eaten, providing a handy metaphor for generations of Australian economists and politicians, who would often accuse their opponents of ‘Magic Pudding economics’. But a staple of bogus economic numbers is a reversal of this—a peculiar form of commerce in which money lost from one industry (invariably, the industry that has paid for the report concerned) never goes anywhere else, but instead entirely vanishes from the economy, a pudding that magically consumes itself if no one else does.
Thus, the money lost from content industries because of piracy, for example, is assumed to simply disappear, unspent. A consumer who had downloaded something for free rather than paying the content industry’s inflated prices forever retains the money they might have spent buying the CD or DVD or going to the cinema. They don’t put it in the bank, where it might add to savings that can be used for investment; they don’t spend it watching other movies or buying other music (in fact, there’s evidence file sharers spend considerably more money buying music than those who don’t also download it for free), nor do they spend it elsewhere, creating jobs and growth in other industries. It just sits there in their pockets, never touched again, or perhaps they bury it every time they download something, so it never flows to any other part of the industry or the economy.
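The accounting trick is easy to reproduce in a toy sketch. Every figure below is a hypothetical round number for illustration, not an estimate drawn from any actual report:

```python
# A toy sketch of the 'reverse magic pudding': industry-funded reports count
# money not spent on their product as having vanished from the economy,
# rather than having been spent (or saved) elsewhere.
foregone_content_spending = 900.0  # $m the industry says it 'lost' to piracy

# Reverse-magic-pudding accounting: the foregone spending simply disappears.
economy_loss_claimed = foregone_content_spending

# Ordinary accounting: consumers reallocate that money to other consumption
# and savings. The shares below are illustrative, not measured.
reallocated = {
    "other entertainment": 0.4,
    "other consumption": 0.4,
    "savings (available for investment)": 0.2,
}
unaccounted_share = 1 - sum(reallocated.values())
economy_loss_actual = foregone_content_spending * unaccounted_share

print(f"Headline 'cost to the economy': ${economy_loss_claimed:.0f}m")
print(f"Money that actually left the economy: ${economy_loss_actual:.0f}m")
```

The point is not that the industry loses nothing—the content industry genuinely forgoes the revenue—but that the economy-wide ‘cost’ in the headline treats a transfer between industries as a pure loss.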
A variant on the reverse magic pudding is the type of report that argues government spending in a particular area would create thousands of new jobs both in that industry and, because of multipliers, in other industries as well—without mentioning that similar spending in other areas or industries might via other multipliers create more jobs, or that there may be greater benefits for the economy in governments reducing support for industries, curbing expenditure or cutting taxes.
Economists debate the value of multipliers to this day, but some say they shade into a special branch of applied mathematics, studied closely not in the halls of academe but in consultancies working for governments and sporting bodies. This branch deals with the remarkable properties of major event numbers, which do not comply with the ordinary laws of maths but instead have, rather like the additional dimensions of string theory, further layers of multiplication and an innate capacity to erase negative signs. This is because, unlike conventional maths, major event maths works backwards from the answer you want to give you the numbers you need.
Only in the arcane world of major event mathematics, for example, can the Melbourne Formula One grand prix, which costs the state of Victoria $50 million a year, somehow generate a net $30 million in economic benefits. Likewise, by employing major event mathematics, bidding cities always ignore the fact that nearly every host city ends up spending at least twice as much as budgeted to host the Olympic Games and loses money doing so. It’s major event maths when builders of big road projects start from the traffic levels they need to make the project viable, then work backwards to their traffic forecasts. And when FIFA announces that 26 billion people have watched a soccer tournament on a planet with only 7 billion people, them’s major event numbers, presumably delighting advertisers who are reaching not merely every human on earth, but tens of billions of aliens as well.
This trick’s a clever one, because spotting it requires quite specialist knowledge or a willingness to go digging for information: use a comparison to make your case for you while not explaining all the ways in which the comparison is meaningless. Say you want to argue that costs in Australia’s construction industry are too high. But too high compared to what? Comparing them to Chinese or African building costs will alert even casual readers that you’re making a dodgy comparison. Why not the United States? It’s a developed country, just like Australia, and will make your point that Australia should reduce wages and introduce greater ‘workplace flexibility’. The only problem will be if someone bothers to check and spots that you’re comparing costs in something entirely inappropriate. You might compare Australian construction costs to costs in the Texas building industry, which is half the size of the entire Australian industry and thus benefits from huge economies of scale, and which relies on immigrant labour, much of it illegal, for more than a third of its workforce, and which has a workplace death rate many times higher than Australia’s. The comparison in fact is a proposal that Australia massively scale up its construction sector using illegal immigrants whom we don’t mind seeing killed.
On the other hand, there are times when industries insist that it’s everyone else who is making the inapt comparisons. In response to a parliamentary inquiry into why software, content and IT products cost more in Australia—often 50 per cent more—than in the United States and other markets, companies like Microsoft, Apple and Adobe and the copyright industry argued that international comparisons, even for software and content that could be delivered online, were meaningless. Such comparisons were, in the words of Microsoft, ‘of limited use, as prices differ from country to country and across channels due to a range of factors’. There’s no point comparing prices internationally, you see, because they might be different.*
You’re an industry body eager to influence policy, but unfortunately you just can’t get the numbers to work for you, even with a reverse magic pudding or major event maths or using an inapt comparison. What to do? There’s always the opinions of your own members, which can be sampled at minimal expense if you’ve got their contact details: surveys of business executives on their expectations inevitably yield concerns about the rising cost of regulation, taxes and wages. That’s even the case when it is businesses themselves that have been bidding up the cost of labour, as happened during Australia’s resources boom, when resources firms competed with each other for a limited pool of skilled labour to build new mining and extraction projects.
Or you’re a PR firm looking for a cheap’n’cheerful way to generate some coverage for a client. How about a quick survey of their customers, preferably on something inanely ‘fun’ or related to sex so that the media will pick it up? Who knows, it might be a slow news day or get picked up on social media and ‘go viral’.
Not that such exercises have to be the preserve of hard-up industry bodies or inspirationless PR firms. The prestigious World Economic Forum Competitiveness Report is regularly cited by chin-stroking business figures and commentators from around the world as an important insight into how different economies compare when it comes to how governments can make life easier for business. What’s never mentioned is that much of the report is based on the responses of a handful of business executives in each country, rather than any objective assessment of regulatory practices. This meant that in the 2012 report, a small number (fewer than seventy) of Australian executives rated Australia, then under a Labor government, worse for government nepotism than Saudi Arabia, which is run by a single family; they rated Australia lower on ‘trust in government’ than Bahrain, where anti-government protesters are butchered and jailed; and they rated Australia’s judiciary as less independent than that of Qatar. Of course, given that senior Gulf state business executives are usually related to, or are themselves, key government figures, such results aren’t surprising.
Want to discredit a policy that has no significant impacts in the short term? Two of the most widely used tricks in Australia involve time-travel economics. Don’t be intimidated, it’s dead easy: compare the industry employment outcomes from a modelled scenario with a business-as-usual scenario (employing the reverse magic pudding of course) and declare that a particular policy will ‘cost thousands of jobs’, as though thousands of workers in an industry will be sacked tomorrow, even when the relevant industry will in fact grow, just at a slightly lower rate than under the business-as-usual scenario and thus produce slightly fewer additional future jobs than otherwise.
Alternatively, compare the GDP outcomes from the modelled scenario with the GDP outcome of the business-as-usual scenario, do so over a period long enough to generate a substantial difference—say, fifty years—and then use the difference in GDP fifty years hence to claim that a policy will ‘shrink the economy by X per cent’, as though we’ll immediately enter a prolonged recession rather than have a minutely slower growth path over decades.
Better yet, the really clever will calculate the per capita or per household GDP ‘loss’ and use that figure to warn that a policy will cost each household—cost you!—thousands of dollars, giving the impression families will somehow be stripped of income or have assets seized in a midnight raid.
Both of these tricks have been repeatedly used to attack climate change policies in Australia, and have made a comeback under Australia’s new climate change denialist conservative government, which since its election in 2013 has claimed a carbon price will reduce GDP by $1 trillion ‘over the next few decades’—without mentioning that’s actually a difference of 1–2 per cent of total GDP between now and 2050.
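The arithmetic behind the trick is simple to reproduce. The sketch below uses hypothetical round numbers—a starting GDP of $1.5 trillion and a growth differential of a few hundredths of a percentage point, not the actual modelling behind any of the claims—to show how a tiny annual differential, summed over decades, compounds into a trillion-dollar headline that is nonetheless only around 1 per cent of cumulative output:

```python
# Illustrative only: how 'time-travel economics' turns a minutely slower
# growth path into a headline "trillion-dollar cost". All inputs are
# hypothetical round numbers, not figures from any actual model.
GDP_START = 1.5           # starting GDP in $ trillion (rough, assumed)
BASELINE_GROWTH = 0.0250  # business-as-usual annual growth (assumed)
POLICY_GROWTH = 0.0244    # the same growth minus a tiny differential (assumed)
YEARS = 37                # roughly 2014 through 2050

def cumulative_gdp(start: float, growth: float, years: int) -> float:
    """Sum of annual GDP over the period, growing at a constant rate."""
    total, gdp = 0.0, start
    for _ in range(years):
        gdp *= 1 + growth
        total += gdp
    return total

baseline = cumulative_gdp(GDP_START, BASELINE_GROWTH, YEARS)
policy = cumulative_gdp(GDP_START, POLICY_GROWTH, YEARS)
gap = baseline - policy   # the headline "cost", summed over four decades
share = gap / baseline    # the same number as a share of cumulative output

print(f"Headline 'cost': ${gap:.1f} trillion over {YEARS} years")
print(f"...which is only {share:.1%} of cumulative GDP")
```

Run it and the ‘cost’ comes out at roughly a trillion dollars, while the economy in the slower scenario still grows every single year—no recession, no seized assets, just a marginally flatter curve.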
Jealous that business groups and large companies were having all the fun with dodgy modelling, in recent years NGOs, research institutes and non-profit lobby groups decided to get in on the action themselves and created a new sub-industry around modelling the ‘social costs’ of various undesirable things such as alcohol or illnesses deemed not to be attracting sufficient research funding from governments. ‘Social costs’ in this instance isn’t used in the narrow economic sense, but consists of a more nebulous mix of externalities, economic costs (lost productivity, mainly), fiscal costs to taxpayers via the health system or criminal justice system, estimates of economic loss associated with illness and premature death, and reverse magic pudding costs that are actually expenditure for other sectors of the economy—out-of-pocket medical costs for unfunded treatment for illnesses, for example.
In Australia, economists working for public health groups have thus estimated that the costs of various problems run into the many hundreds of billions. Drug use, for example, is said to cost Australia $56 billion a year. Eating disorders have been estimated to cost $70 billion a year; obesity was calculated to cost over $50 billion a year in ‘lost wellbeing’ (the comparable US estimate was $190 billion). Alzheimer’s disease is predicted to cost Australia over $80 billion a year in the 2060s (adventures-in-time alert!). Depression costs $13 billion a year, illiteracy is said to cost $18 billion, stress nearly $15 billion—even insomnia costs nearly $15 billion, we’re told.
All of these are real problems affecting the lives of people and, indeed, possibly reducing their productivity or sending them to early graves. But the procession of claims about their ‘social cost’ has the cumulative effect of suggesting that the best way to massively expand the GDPs of Western economies would be to send the entire population to the gym and counsellors.
Concerns about this form of Stupid long predate the emergence of economic modelling and polling in the twentieth century. As the quote from Adam Smith that begins this chapter indicates, the provenance and accuracy of the calculation of statistics have been disputed almost from their inception. Moreover, the original name, ‘political arithmetic’, was enormously significant—‘statistics’ as a name didn’t catch on until an enterprising Scotsman, John Sinclair, rebranded ‘political arithmetic’ using a German term for qualitative (as opposed to quantitative) descriptions of the characteristics of states (states . . . statistics . . . geddit?).
But statistics, right from its very inception, was always about power and politics. It had been an Englishman who first referred to ‘political arithmetic’—William Petty, an early demographer (among other things, in the pre-twentieth-century tradition of polymathy) who followed in the footsteps of another Englishman, John Graunt. Graunt had produced the first work of demography, Natural and Political Observations on the London Bills of Mortality in the 1660s.
Graunt and Petty, who was also an economist and who urged better statistical information to improve tax collection, correctly understood that accumulating demographic data was an inherently political act. The growth of statistics was an inevitable consequence of the growing dominance of monarchies in the early modern period, even if government did not yet have many of the recognisable characteristics of the modern bureaucratic form.* In particular, the demands of warfare—of which monarchies were very fond—the shift to permanent (and increasingly national, rather than mercenary) armies and economic policies like mercantilism that conceived international trade as a zero-sum, competitive game drove the emergence of the centralised state. Such entities needed to understand the size and structure of their population so as to know how many men of military age they could butcher, their economic resources to pay for wars and even how long their citizens lived for, given the early modern habit of granting leases, awarding pensions and selling annuities for several lives rather than a set period of years.
The early growth of statistics was accompanied by the beginnings of probability theory by English and European mathematicians—in particular, the work of Thomas Bayes, which would re-emerge in the twentieth century—thereby providing the tools to start better using the data being collated. But while that was occurring, the English grappled with the politics inherent in political arithmetic. A bill to conduct the first national census in England in 1753 was defeated by ‘country’ opponents of the government—conservative landed aristocrats who claimed to defend the traditional liberties of Englishmen (mainly property rights) from a central government bureaucracy—aka the ‘court’. The ‘country’ opposition wasn’t just opposed to a centralised bureaucracy in theory—they themselves controlled the local bureaucratic, law-enforcement and ecclesiastical apparatus that would have collated census data across England.
As a result, the first national census wasn’t conducted in the UK until nearly half a century later, after the first census had been conducted in the United States. In the US, by contrast, despite an antipathy towards centralised power similar to that which thwarted the UK census, censuses were not merely considered non-threatening, they were written into the Constitution as the basis for taxation and state representation in Congress. But back across the Atlantic, as if to prove the point of the ‘country’ opponents, the transformation of France from ancien régime to Napoleonic empire via revolutionary chaos saw a dramatic rise in governmental statistical compilation, even via basics such as Napoleon’s standardisation of measurements throughout France. Statistics and centralisation of power went hand in ink-stained hand.
The innately political nature of the compilation and use of data continued to be demonstrated as political arithmetic—now rebranded as the shorter, but harder to pronounce, ‘statistics’—and probability expanded in the nineteenth century. Statisticians and mathematicians developed key tools of probability, such as the law of large numbers, the least squares method and normal distribution, and re-argued the medieval debate about nominalism in determining categories and classifications of data, while data collection played a bigger and bigger role in public debate. At the same time, complaints that statistics were being exploited inappropriately or by vested interests to pursue self-serving agendas grew. The French medical profession bitterly divided over the meaning of data from cholera epidemics in 1820s Paris. New statistical techniques combined with data collected from English parishes were used in the debate on Poor Law reform, with some statisticians arguing that tough welfare laws were correlated with lower poverty rates. And a big spur to debate over statistical methods in the late nineteenth and early twentieth centuries was the effort of eugenicists and social Darwinists to identify links between heredity and intelligence. Much of the hard work of merging social statistics and probability into a single subject area was done by men eager to argue non-white races were intellectually inferior.*
A form of that eugenics argument was still going on in the 1990s with the most notorious recent example of statistical Stupid in the service of an agenda, Richard Herrnstein and Charles Murray’s The Bell Curve. The book was named after the curve of normal distribution, on the left side of which, the authors suggested, African Americans and immigrants were to be disproportionately found. The coverage of the book prompted criticism about the failure of many US journalists to identify the poor methodology underpinning its conclusions about the links between race and intelligence, as scientists, psychologists and statisticians raced to show the profound flaws in the book.
The discovery of normal distribution curves in sociological and medical data accompanied the earliest data gathering, but it was the Belgian statistician Adolphe Quetelet who made them famous in the early to mid-1800s. In particular, Quetelet claimed that the normal curve that represented, say, the distribution of human height or weight, was also applicable to human morality, measured by marriage, crime and suicide statistics. Quetelet was thus a ‘moral statistician’ as much as any other type. He considered the average human, the one closest to the mean at the centre of the normal curve, to be an ideal of moderation, with all others a kind of imperfect copy, too tall or too short, too heavy or too light, too unethical or too officious, all small departures from the will of the Creator for the perfect being found at the apex of the curve.
Quetelet’s view initially informed the sociology of Émile Durkheim at the end of the nineteenth century, albeit from a different perspective: for Durkheim in his early work, the mean human was the product of social forces, the most representative creation of the society in which they had been born and raised. But later, the mean became instead synonymous with mediocrity; the average human for Durkheim was average indeed, lacking strongly positive or negative qualities, a compromise, moderate in everything and good in nothing, including talent and ethics.
This shows how inherently political statistics are: the shift in characterisation of those on the left-hand side of the normal curve to ‘below average’ led to that segment of the population being deemed by some to be a threat to Western societies. Francis Galton was a cousin of Charles Darwin and an eminent late-Victorian polymath across many fields, including statistics, in which he devised the concept of standard deviation. Galton also invented the term ‘eugenics’ and argued for the ‘weak’—those on the left side of the curve—to be kept celibate and for eminent families—the right-siders—to intermarry in order to improve the racial stock. His protégé, Karl Pearson, one of the key figures in the twentieth-century development of statistics, also advocated race war and rejected the utility of trying to improve social conditions. ‘No degenerate and feeble stock will ever be converted into healthy and sound stock by the accumulated effects of education, good laws, and sanitary surroundings,’ Pearson insisted. Pearson believed himself to be a stickler for intellectual rigour, telling the British Medical Journal in 1910 that social scientists ought not to prostitute statistics for controversial or personal purposes.
Herrnstein and Murray’s arguments were thus simply a reiteration, almost verbatim, of the arguments of eugenicists a century before. They used their statistics to call for less welfare, lamenting that America was subsidising ‘low-IQ women’ (i.e. African American and immigrant women) to have children rather than encouraging high-IQ women. The eventual result, they argued, would be a kind of IQocalypse in which the intelligent (whites and Asians) lived in fortified compounds shielded from the teeming slums of low-IQ masses.
An important development in twentieth-century statistics was the creation of input–output models and the pioneering of econometrics, and their use once computers became available after World War II to help process large amounts of data. A key driver of the proliferation of this kind of Stupid in public debate has thus been the spread of computable general equilibrium (CGE) economic models that use real economic data to model the likely impacts of policy changes or economic reforms.
These once required significant IT power to run—the ‘computer’ used to model the British economy in the early 1950s was a two-metre tall machine that used coloured water. This limited their use to academic institutions and official policymaking bodies like central banks. But even small computers can now run such models easily, and anyone can download simplified versions of widely used economic models like that of the Australian economy developed by Monash University. Economic consultants and academic institutions now advertise their models, either self-developed or bought from institutions with a proven track record, as a key part of their service offering to potential clients.
And there are more economists than ever before to run such models. Since the 1980s, Australian higher education institutions have seen a dramatic increase in students choosing to study business administration and economics, with annual enrolments almost tripling between 1983 and 2000, by which time business administration and economics was, despite slower growth in the 1990s, the most popular field of study for Australian students. The US saw a similar rapid rise in the number of economics undergraduates from the mid-1990s onwards, while all Anglophone economies seem to have seen increases in the numbers of economics students since the global financial crisis showed the profession in such a favourable light. The English-speaking world appears to be suffering from a plague of economists, far in excess of what banks and governments need, leaving the rest to wander the streets holding ‘will model for food’ signs.
The other key ingredient such models rely on is input–output tables, put together by government statisticians, that detail the relationships between and interdependence of different industries and sectors of the economy. But input–output tables themselves, without the labour or expense of CGE modelling, are often used to generate output, jobs and income multipliers for particular industries, thus providing the ingredients of the reverse magic pudding used by industry lobbyists to demonstrate the case for assistance.
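To see why such multipliers are so seductive, it helps to see how mechanically they fall out of an input–output table. The standard approach is the Leontief inverse: from a table of technical coefficients (how much of each industry's output a dollar of another industry's output consumes), the output multiplier for an industry is simply a column sum of the inverted system. The two-sector coefficients below are hypothetical, purely for illustration:

```python
# A minimal sketch of deriving output multipliers from an input-output table,
# using a hand-inverted 2x2 Leontief system. The coefficients are invented.

def output_multipliers(a):
    """a[i][j] = dollars of industry i's output used per dollar of industry j's output."""
    # Form (I - A) for the 2x2 case...
    b = [[1 - a[0][0],    -a[0][1]],
         [   -a[1][0], 1 - a[1][1]]]
    # ...and invert it directly (the Leontief inverse).
    det = b[0][0] * b[1][1] - b[0][1] * b[1][0]
    inv = [[ b[1][1] / det, -b[0][1] / det],
           [-b[1][0] / det,  b[0][0] / det]]
    # The output multiplier for industry j is the column sum of the inverse:
    # total output, economy-wide, generated per dollar of final demand for j.
    return [inv[0][j] + inv[1][j] for j in range(2)]

A = [[0.2, 0.3],   # hypothetical technical coefficients
     [0.4, 0.1]]
print(output_multipliers(A))  # ≈ [2.17, 1.83]
```

Note that the multipliers come out well above 1 by construction, which is precisely their appeal to lobbyists: every dollar of assistance appears to conjure two dollars of activity, with no accounting for where the labour and capital would otherwise have been employed.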
So rife had the misuse of these multipliers by economic consultants, industries and governments arguing for handouts become in Australia in the 1990s that the Australian Bureau of Statistics actually stopped providing multipliers in its input–output tables, in essence saying that if people were going to misuse them they could produce them themselves. ‘Users of the I–O tables can compile their own multipliers as they see fit, using their own methods and assumptions to suit their own needs from the data supplied in the main I–O tables,’ the ABS said. ‘I–O multipliers are likely to significantly over-state the impacts of projects or events. More complex methodologies, such as those inherent in Computable General Equilibrium (CGE) models, are required to overcome these shortcomings.’
Magical multipliers and fictional industry reports about the huge benefits or costs of particular policies or misused data should have little direct impact on policymakers, who have access to more reliable and genuinely independent assessments of policy impacts. That doesn’t stop governments from pursuing bad policy, of course, but it means they have less excuse—the NSW government, for example, repeatedly pointed to evidence of declining violence in Sydney before caving in and agreeing to a get-tough reform package on alcohol-related violence. But dodgy reports can be effective at influencing the media and voters.
Reports, or extracts from them, are thus often given ahead of release to journalists or outlets regarded as sympathetic, because the content complements an outlet’s ideology or partisanship, and then offered to readers under that most abused term ‘exclusive’. Alternatively, they go to journalists too innumerate or too time-pressured to subject them to basic scrutiny. Few journalists outside science and economic rounds have sufficient grounding to subject economic modelling to rigorous scrutiny, and few have the time to dig through evidence that would undermine material served up to them by lobbyists, NGOs and industry. Moreover, it enables the media to create the illusion of journalism. The classic description of news is that it’s what somebody doesn’t want you to print; the rest is advertising. Bad maths is advertising masquerading as news, filling column inches and the minutes between ad breaks in news bulletins, and without even being paid for. The benefit to media outlets, instead of revenue, is a saving: the appearance of journalism without the need to invest in the resources required for actual newsgathering, leaving the consumer to do the work of critically examining what’s been offered.
The discomfort or lack of interest of many journalists when it comes to numbers is a cliché that has been the subject of complaints for decades—journalists were, it’s long been said, the kids at school who topped English, not maths or science. But it becomes plain when they write more traditional stories where hard data is available but needs to be researched and explained, rather than handed over as a gift. Stories about crime trends, for example, rarely contain actual evidence about crime rates, even though crime statistics, which have been trending downwards in many Western countries in recent years, aren’t hard to unearth. Instead, journalists prefer anecdotes over data: anecdotes are harder to discredit and provide an immediate human hook for media consumers, regardless of how meaningless or unrepresentative personal stories may be. Actual crime data is likely to provide an unappealing counterweight to individual stories of out-of-control thugs, alcohol-fuelled violence or rampant cybercriminals.
Different problems arise from numbers that journalists should be more comfortable with: those produced by polling organisations. Opinion polling was a nineteenth-century creation of American newspapers and magazines—the first one was for the 1824 presidential election—until polling became professionalised and more statistically rigorous in the twentieth century. That was after a magazine called Literary Digest predicted a landslide to Republican Alf Landon in the 1936 presidential election. The reason you’ve never heard of President Landon is that he actually lost in a landslide to FDR and the Digest shut soon after, setting an example of media accountability that, alas, has rarely been followed since.
In most democracies, the media continues to be a key customer of polling companies, particularly around elections, although it is now rare for a media company, rather than a marketing company, to own a polling organisation, as News Corporation does with Newspoll in Australia. The relationship tends towards one of interdependence or, perhaps, symbiosis, though it’s unclear whether the media or marketers are the parasite. A polling company without a media outlet struggles to match the influence or profile of companies that are linked to national media. For media companies, which invest in the costly process of polling either by owning a pollster or contracting with one, a poll provides influence and precious column inches for its political journalists.
Historically, Australia has had relatively good-quality polling, mainly because we force people to vote, which removes the challenge of predicting voter turnout that bedevils US polling, and we refuse to let them exhaust their vote on minor parties, removing the lottery of first-past-the-post psephology in the UK. And there’s a lot of polling, too, for such a small country: until recently, there have been around ten national polls a month in Australia outside elections.
But problems arise in the interpretation of the results by journalists. Partisanship among journalists and outlets plays a role—my side edges down 3 points, but yours plummets 2—but more common is the practice of retrofitting narratives onto polls. Having invested in the expensive process of polling 1000 people (usually riding on regular omnibus marketing polling conducted by the pollster), media outlets feel obliged to get their money’s worth by dramatising the results, regardless of what they are.
This yields the sight of even very good journalists being compelled to explain small changes in polls, including those within the margin of error of the poll (around 3 per cent for a sample size of 1000), as arising from specific political events or a change in tactics by a party, establishing a narrative even when none exists. Thus, rises or falls in polls, even those resulting purely from statistical noise, generate their own positive or negative coverage as journalists rely on post hoc ergo propter hoc logic and scour preceding political events for explanations of shifts that may be entirely random.
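The margin-of-error figure isn’t folklore; it falls straight out of the arithmetic for a simple random sample, assuming the worst case of a 50–50 split. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope 95% margin of error for a poll, assuming a simple
# random sample and the worst-case proportion p = 0.5.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (as a fraction) for a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# For the standard 1000-person poll:
print(round(margin_of_error(1000) * 100, 1))  # ≈ 3.1 percentage points
```

So a 'shift' of two points between successive 1000-person polls is comfortably inside the noise, and quadrupling the sample size only halves the margin, which is why pollsters rarely bother.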
Sometimes even this doesn’t work, when polls go in a direction not anticipated by journalists, thus requiring ‘government has failed to benefit from . . .’ narratives. Testing of such explanations is never undertaken, and the many assumptions embedded in them remain unrevealed: not merely that the change in polling results is statistically significant rather than random variation, but that voters have paid sufficient attention to politics to react to events that precede the polling outcome and that the reaction has occurred in a time frame that has been detected in a poll.
Even if polls and polling interpretation don’t have a lot of influence on voters—in fact, there is little evidence that polls change voters’ minds, such as via any bandwagon effect—they do have an impact on politicians, especially in countries with a short political cycle, like Australia, where a federal election is never more than thirty-six months away and parties have taken to removing even electorally successful leaders at the first hint of trouble. The result is a strange feedback loop of Stupid, in which meaningless numbers are interpreted as meaningful and influence the behaviour of those ostensibly the subject of the numbers.
As someone paid to write about public affairs, I have a certain professional as well as personal interest in the Stupid that hides in numbers. Having written for years about polling, I’ve decided that voting intention other than immediately before an election is of limited interest except to the extent to which it seems to influence the views voters express on other issues. I also get my share of ‘independent reports’ from publicists, NGOs and other purveyors of such things, although not nearly as many as some other people in the Australian media, much of whose entire journalistic output consists of ‘exclusives’ about new reports commissioned by major industry groups. But mostly my interest is centred on trying to debunk bullshit reports, a task that feels like King Canute trying to whack aquamoles while holding back the waves.*
It can be a tedious task—going straight to the methodological explanation of a report, if it has one, checking the data and assumptions, checking the results of other reports, consulting independent sources of data, putting together your own data. There are certain tricks you can use. There is often a remarkable difference between what companies involved tell investors and stock market regulators and what their ‘independent modelling’ says, like the foreign-owned power companies who produced ‘reports’ stating that a carbon price in Australia would see them shut down generators and go out of business, while they told foreign shareholders Australia would continue to be an excellent market for them. And sometimes there are surprises—I once encountered a report from one of Australia’s best-known economic consultancies that significantly understated the case they were making for the client who had hired them. Another time, the data in a report outright contradicted the conclusion in the executive summary, which is all most people ever read.
But mostly it’s the same weary trudge through Stupid, often from the same consulting firms, via media spokespeople and PR firms who mysteriously don’t yet have a report to give you despite a media release or an ‘exclusive’ article about it being carried in a newspaper that day. Such reports are inevitably ‘probably available next week’, that being a Friedman Unit–type period in which corporate communications people think you’ll forget you wanted a copy of the report. In fact, one ends up feeling less like Canute than Travis Bickle, God’s Lonely Man, walking through streets filled with statistical depravity and mathematical filth, hoping that one day a real econometric rain will come and cleanse the place.
To be sure, this form of Stupid isn’t as deadly as some. People don’t die from biased modelling or inflated claims, at least not in the way that people die, say, from vaccination denialism or the War on Terror. But this kind of Stupid offends, offends egregiously, producing a stench that combines the sickly odour of people on the make with the rot of intellectual dishonesty. It dresses itself in the garb of rigour, and quotes data at you; it purports to adorn public debate by adding to our understanding of economic and social impacts. In fact, it is a collation of lies, generated for the purpose of skewing, not informing, public debate, and leaving us misinformed, not better informed.
After three centuries of collating statistics, developing the probability techniques to use them and building the econometric tools to understand how we interact economically, there has never been more mathematical Stupid. In fact, most of us are worse off than Adam Smith, who at least knew to be sceptical of statistics even before they were called that.
BK
* Australian studio bodies are simply subsidiaries of Hollywood bodies, and parrot exactly the same lines on file sharing.
† In a similar period, employment in the US arts and entertainment industry has risen 16 per cent.
* Now, peculiarly, one of the Australian employer groups that incessantly argues that Australian costs are too high, the Australian Industry Group, takes a somewhat different view when it comes to overpriced IT products. AIG told the parliamentary inquiry into the differential that large companies like Microsoft shouldn’t have to adjust their local prices downwards to reflect currency fluctuations because of ‘the desirability for consumers, suppliers and retailers of having relatively consistent pricing of goods’. That wasn’t very long after AIG had argued for a tiny minimum wage raise because small businesses were struggling with the impact of the high Australian dollar. Naturally its position had nothing to do with the fact that Microsoft is an important partner of AIG.
* Marshall McLuhan argued the development of statistics in the sixteenth and seventeenth centuries was an inevitable consequence of the arrival of printing and the more centralised, visually oriented world it enabled.
* The reader will recall from an earlier chapter that William Jennings Bryan, in the Scopes Trial, was doggedly opposed to Darwinism, not merely because he believed it incompatible with the Bible and Tennessean folk wisdom, but because he believed it promoted conflict.
* Yes, I know, Canute got a bad rap—he was demonstrating the stupidity of trying to hold back the waves, not seriously attempting it. Call it the Caligula effect—sarcastically threaten to make your horse a senator because actual senators are such duds, and the next thing you know you’re being portrayed as barking, or more correctly neighing, mad.