Chapter 17
Not just jargon but official figures, too, displease prominent insider economists.
Facts are stubborn things, but statistics are pliable.
Our government bureaucrats gather measurements, then massage the numbers to create statistics. Yet from one bureau to the next, the counts vary. It’s as if the statisticians force the figures through vigorous calisthenics to make them flexible.
Stepping up to the plate, the US Surgeon General required academic publishers to paste onto articles about economics a caveat, like the one on a pack of cigarettes: “Warning: Statistics contain high levels of political influence. Trusting them can be hazardous to your financial health.” Or if he didn’t, he should have.
Some insiders find official figures worse than useless. Most professional economists don’t mind the absence of a total for the worth of Earth in America but do object to the presence of tallies that are inaccurate, irrelevant, and thus distracting. Official fuzzy numbers are a perfect match for the standard risk-averse jargon (Ch 16).
Experts cite them, and most people base crucial decisions on them. Especially when the figures come couched in officialese, people tend to accept them as gospel. But are they? Are they tainted by politics? Is a competing unofficial number better? Decider, beware: take these figures with a grain of salt.
At the risk of biting the hand that feeds us, kicking a dog while it’s down, and mixing metaphors, let’s continue the critique. Not that we love finding fault; it’s just that we love knowing facts, and officialdom seems unable or unwilling to convey them.
One expects criticism of official statistics from critics on the left or the right – but check these out; sometimes the critics of statistics are mainstreamers.
“…the focus on the two headline indicators … created great incentives to governments to compile figures on deficit and debt that look good, instead of them being good from an economic substance POV. There is a clear tendency to continuously look for ‘grey areas’ to manipulate the relevant national accounts data… These practices have substantially increased in ‘popularity’ since the start of the financial crisis during which significant pressures on government finance emerged, amongst others by the direct and indirect effects of the economic downturn and the bailouts of banks.”
– “Government Finance Indicators: Truth and Myth”
The OECD has produced useful reports before on the link between land value and economic growth, so down the road it could perhaps become a standard-bearer for determining the size of all locational value.
If only US public agencies had the chutzpah to cry out when the emperor wears no clothes. Well, actually, sometimes some bureaucrats do. Officials at the US Federal Reserve and their staff are already dismissing large swathes of the most recent economic data because they view it as unreliable.
Economic data is constantly revised, and final reads are often significantly higher or lower than initial measurements. Twisting around the stats can leave investors, businesses, and households twisting in the wind. Their plans can be wrecked by the central bank’s next interest-rate move.
There are three types of lies: lies, damned lies, and statistics.
Like an urban myth, groundless numbers persist. Diane B. Paul, formerly an associate professor of political science at the University of Massachusetts, wrote about just that in The Nine Lives of Discredited Data. Once entrenched, false figures escape detection and hence correction.
Then economists who play it safe – safety is where the money and honors are – perform calculations using the approved numbers. Thereby GIGO (garbage in, garbage out) strikes again. According to Otis Dudley Duncan (1921-2004) in Notes on Social Measurement: Historical and Critical, those academics suffer from statisticism.
“Coupled with downright incompetence in statistics, we often find the syndrome that I have come to call statisticism: the notion that computing is synonymous with doing research, the naïve faith that statistics is a complete or sufficient basis for scientific methodology, the superstition that statistical formulas exist for evaluating such things as the relative merits of different substantive theories or the ‘importance’ of the causes of a ‘dependent variable’; and the delusion that decomposing the co-variations of some arbitrary and haphazardly assembled collection of variables can somehow justify not only a ‘causal model’ but also, praise a mark, a ‘measurement model’. There would be no point in deploring such caricatures of the scientific enterprise if there were a clearly identifiable sector of social science research wherein such fallacies were clearly recognized and [kept] emphatically out of bounds.”
While our public agencies do not tell us how much we’re all spending in total for the land and nature we use, let’s not feel singled out. They slight other curious groups, too, who’d like to know statistics like a qualitative GDP, the true inflation rate, the real unemployment rate, the total assets of governments, the actual debts of governments, etc. The more important the indicator, the more massaging it gets.
“There are three sorts of economists: those who can count, and those who can’t.”
Ecological economists object to GDP since it measures quantity of growth, not quality of growth. E.g., clear-cutting trees from a hillside, causing erosion that degrades a stream, contributes to GDP no differently than does selective logging that leaves a forest available to hunters and hikers. Nevertheless, the media report faster growth – no matter what kind – as a social good. And whoever is in office gladly takes credit for it.
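To see that in miniature, here is a toy sketch in Python. The dollar figures are invented and the “quality-adjusted” line is merely illustrative – it is no agency’s actual method – but it shows how GDP books the timber sale identically either way:

```python
# Hypothetical illustration: GDP counts the timber sale the same in both
# scenarios; only a made-up "quality-adjusted" figure that nets out the
# estimated environmental damage tells the two logging practices apart.

scenarios = {
    "clear_cut":     {"timber_sales": 1_000_000, "estimated_environmental_loss": 600_000},
    "selective_log": {"timber_sales": 1_000_000, "estimated_environmental_loss": 50_000},
}

for name, s in scenarios.items():
    gdp_contribution = s["timber_sales"]  # what GDP records
    adjusted = s["timber_sales"] - s["estimated_environmental_loss"]  # what a quality measure might record
    print(f"{name:14s}  GDP adds: ${gdp_contribution:>9,}   quality-adjusted: ${adjusted:>9,}")
```

Whatever the logging practice, the headline number rises by the same million; only a measure that nets out the damage can tell the two hillsides apart.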
The Report by the Commission on the Measurement of Economic Performance and Social Progress (Stiglitz et al.) says, “the time is ripe for our measurement system to shift emphasis from measuring economic production to measuring people’s well-being.”
Leftist economists point out that the definition of unemployment was changed to count the under-employed as employed and to stop counting as unemployed those who have given up their futile job searches. A smaller unemployment figure, in the eyes of many, makes those in office look good. The real unemployment rate is actually double, triple, or more than quintuple the Bureau of Labor Statistics’ figure.
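To get a feel for how much the definition matters, consider a back-of-the-envelope sketch. The counts below are made up, and the categories only loosely mirror the headline (U-3) versus broader (U-6-style) treatments; the point is simply that reclassifying the under-employed and the discouraged can more than double the rate:

```python
# Hypothetical labor-force counts (millions) showing how definitional choices
# move the unemployment rate. The numbers are invented for illustration.

employed_full_time   = 130.0
part_time_want_full  = 5.0   # under-employed: counted as employed in the headline rate
unemployed_searching = 6.0   # actively looking: counted as unemployed in the headline rate
discouraged_dropouts = 4.0   # stopped looking: excluded from the headline labor force

headline_labor_force = employed_full_time + part_time_want_full + unemployed_searching
headline_rate = unemployed_searching / headline_labor_force

broad_labor_force = headline_labor_force + discouraged_dropouts
broad_unemployed  = unemployed_searching + part_time_want_full + discouraged_dropouts
broad_rate = broad_unemployed / broad_labor_force

print(f"headline rate: {headline_rate:.1%}")  # ~4.3%
print(f"broader rate:  {broad_rate:.1%}")     # ~10.3%, more than double
```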
Populist economists remind anyone listening that the official definition of inflation has been changed at least 20 times in 30 years. One of those official changes deleted the very thing we’re looking for – the value of locations. Using the older definition, inflation would be at least 7%, probably more like 10%.
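Again, a toy calculation makes the mechanism plain. The basket weights and price changes below are invented, not actual CPI components; they only show that defining away a fast-rising component – here, the cost of locations – pulls the measured rate down:

```python
# Hypothetical price-index basket showing how redefining the index changes
# measured inflation. Weights and year-over-year price changes are invented.

basket = {
    # component: (weight, year-over-year price change)
    "goods_and_services": (0.70, 0.03),
    "location_land_cost": (0.30, 0.12),
}

def weighted_inflation(components):
    """Weighted average price change, normalized by the included weights."""
    total_weight = sum(w for w, _ in components.values())
    return sum(w * change for w, change in components.values()) / total_weight

old_definition = weighted_inflation(basket)  # land cost included
new_definition = weighted_inflation({k: v for k, v in basket.items()
                                     if k != "location_land_cost"})

print(f"with land in the basket: {old_definition:.1%}")  # ~5.7%
print(f"with land defined away:  {new_definition:.1%}")  # 3.0%
```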
These critics come from within the discipline, so they themselves don’t lack credibility. Their alternative stats do. What they gain in accuracy, they lose in credibility. The alternatives rely on raw data that come from officialdom. And even if unofficial calculators can find a way around that conundrum, their measure is still not official. Hence, nobody pays it much attention. The major players making policy and huge investments ignore the homemade figures and stick with convention.
While I feel for all those critics cited above, ignored by most of their colleagues, their company is soothing. Their voices shouting in the statistical wilderness give us room to talk, to question the official dismissal of rents. As we search for any sign of rents, the relevant stats we’ve found do not inspire gobs of confidence. It feels better knowing other critics have gone before.
While the above academics were able to fault an existing stat, they have their own axes to grind and their own pet theories to promote, which leaves them open to a different criticism: they were not able to critique the absence of a statistic – the worth of Earth.
No matter what phenomenon they measure, bureaucracies always fail to agree on one estimate. The statisticians of one bureaucracy cannot explain why the totals of another bureaucracy differ. Nor do they seem to take these discrepancies seriously.
Most professional economists shrug off absent data, even misleading “data.” Too many academics are indifferent, with an attitude of “oh, that’s good enough” when clearly the figures are not. It’s as if they and their statistician brethren hold jobs where no curiosity is allowed. Caution and conformity might as well be the requirements listed right under the job title of Public Information Officer.
For the academics and bureaucrats compiling them, the jumble of tables is what’s important; whether the tables have any accuracy, utility, or insight does not seem to matter. What matters is doing a job that pays well and gets attention from the business media and from academics authoring articles (since officials have a monopoly on both data and status, where else can the curious turn?) – not the datum for Earth’s worth.
In particular, bureaucracies … count things of minor import – e.g., consumer confidence; over-count some indicators – e.g., GDP; under-count other indicators – e.g., unemployment or inflation; and bundle what should be kept apart – e.g., housing with utilities. Their false frames yield misunderstandings and distorted world views.
Statisticians go deep but not broad. Going deep, economic statisticians measure an enormous quantity of minutiae, like “advance US retail and food services sales.” Failing to go broad, they leave out customs, like trust, that make civilized trade possible. Alan Greenspan, the most powerful person in economics while he reigned at the Federal Reserve, confessed to being surprised to learn that trust matters. That was his comment upon observing Russian criminals take over the conversion of so-called communism into capitalism.
Conversely, economists go broad but not deep. Despite economies being nothing if not systems of incentives, economists fail to unbundle the two kinds of spending. Spending for human-made goods and services and spending for natural assets are as different as a beard and a barbarian. The economists’ catch-all category for spending is reminiscent of speed-reader Woody Allen’s review of Tolstoy’s War and Peace (or War and the World in the author’s native language): “It was about some Russians.” It was about some purchases.
And to top it off, going too broad, economists include political behavior, like lobbying, within market behavior, like producing output. They fault “market failure” when actually what happened was “lobbying success,” or in their jargon, successful “rent-seeking.”
Official stats are not only way off the mark – real GDP is lower, real inflation higher – but their measurements also shed little light on what economies are up to, or on what we need to do in response. Not knowing how much society spends on what rewards neither labor nor capital (i.e., our spending for land) means that economists cannot make good statistical arguments. That guarantees the futility of economics.
Trying to calculate aggregates of items is just the opposite of measuring the size of particles. Physicists have their angstroms, ten-billionths of a meter. Chemists measure parts per billion. Economists pretend that their stats are of equivalent stature and call their statistics “data.” No way. The commonly applied phrase “massaged data” is only half right; there are no data on the menu – only approximations. An official figure resembles an actual value about as much as a stick figure resembles a living body.
Are bad stats mistakes or conscious incorrections? It’s as if those guys go out of their way not to make sense. Official figures create a near-impenetrable fog that hides rent – a useful smokescreen for somebody’s capture of rents.
What officials attempt – aggregating many sales into one grand total – is challenging. Even under the best of conditions, as prices are always fluctuating, it’s not easy. But add the political pressure to look away from land and you get the mess we got.
At the end of the day, it’s a lot of noise to go with precious little signal. Official tabulation is a morass, and it gets worse. Just wait until we reveal where the official statistics for housing and other proxies for land went astray.