Ethan Bueno de Mesquita
ECONOMICS STANDS DEEPLY COMMITTED to quantification, especially in its most policy-facing branches. Indeed, a particular approach to quantification for policy analysis is what many applied economists mean by economics. This dogma of quantification creates perils for policy that are, in my view, as significant as the market fundamentalism the EfIP authors highlight. As economists rethink the relationship between their discipline and public policy, they would be well served by grappling with these issues.
In a textbook vision of policy analysis, quantification is simply a tool; it measures and scores policy alternatives rather than shaping the alternatives themselves. We are invited to think of quantification as in service of policy aims that are defined elsewhere and by others. But this view, while popular, is misleading. We cannot divide the world into a neat dualism of aims and tools. What and how we quantify shapes the aims of public policy, just as those aims shape what we quantify.
The fiction that quantification is some wholly technocratic undertaking underlies three perils of quantification: it flattens the normative standards we use to evaluate policy; it distorts the incentives of those who make and implement policy; and it narrows our field of vision, limiting which policy problems we acknowledge and which we believe can be addressed.
First, though, an affirmation. Quantification is essential because it creates a framework of contestability. When costs, benefits, and values are quantified, the terms of debate and standards of evidence are clear—which is critical for democratic accountability and good governance. But quantification is not perfect, and we must look its limits squarely in the face.
Despite the rich panoply of normative standards considered by moral and political philosophers, essentially all quantitative policy analysis is rooted in welfarism, the view that policies should be evaluated based on their implications for human well-being. Moreover, the welfarist standard that predominates is what I call crass utilitarianism: it defines well-being largely in terms of material costs and benefits, such as economic prosperity, health, and other factors for which willingness to pay is straightforward to measure.
Crass utilitarianism lends itself easily to quantitative analysis. While it is hard to quantify the value of rights, duties, or equity, crass utilitarianism is so easy to work with that it has become part of the “standard assumptions” in the background of applied economic thinking. As a result, today we are often not only crass utilitarians, we are unreflective crass utilitarians. The process of trying to maximize net utility—ignoring questions of rights, duties, responsibilities, equity, dignity, and so on—is so ingrained in our practice and thought that we simply take for granted that a good policy is one that optimizes material benefits.
Here we see how misleading the textbook vision is. The aims of public policy are deeply entwined with quantification. We don't quantify because we are utilitarians. We are utilitarians because we quantify. As Michel Foucault put it, utilitarianism has ceased to be a philosophy; it has become “a technology of government.”
The next peril of quantification is that it distorts incentives. Typically, we can quantify only a few of the many inputs that go into addressing a social problem. And incentives tend to follow measurement. If we want to hold teachers accountable, and the only thing we can measure is test scores, then a natural line of thinking is, “let's give teachers incentives if their students’ test scores improve.” The problem is that incentivizing only quantifiable tasks can perversely distort behavior: teachers rewarded for test scores may “teach to the test” while neglecting harder-to-quantify skills such as conflict resolution, self-control, and creative thinking. The resulting outcomes can be worse than under no incentives at all. This problem is, of course, not limited to education policy.
The final peril of quantification, a narrowed field of vision, is in some sense a consequence of the previous two. Many policies create short-run costs for the small number of people alive today, but long-run benefits for the vast number of people who make up future generations. The most obvious example is interventions limiting carbon emissions to prevent global warming. Such policies pose two challenges to any welfarist quantifier. First, if we believe that all people should be treated equally in cost-benefit analyses, we ought to be spending almost all of our current resources on policies that benefit future generations. Second, since policies that affect the future affect such a vast number of people, comparison is hard: summing even tiny per-person effects over an unbounded future yields unbounded totals, so everything looks either infinitely good or infinitely bad.
Quantitative policy analysis responds with a technical fix, called “discounting the future.” The idea is inspired by, but distinct from, the financial concept of the time value of money. Say you would be indifferent between receiving 90 cents today and a dollar a year from now. Then money you will receive in a year is discounted by a factor of 0.9, and the discount compounds as we look further out: money two years away is weighted by 0.9², three years away by 0.9³, and so on. Cost-benefit analysis extends this logic to the welfare of future generations, because doing so solves the quantifier's problem: it makes the infinite sums finite by writing off the distant future.
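To make that write-off concrete, here is a minimal sketch of how exponential discounting weights future benefits, using the 0.9 annual factor from the example above (the horizons are purely illustrative):

```python
# Exponential discounting: a benefit arriving t years from now
# receives weight delta**t in a present-value calculation.
delta = 0.9  # annual discount factor from the example above

for years in (1, 10, 50, 100):
    weight = delta ** years
    print(f"weight on a benefit {years:3d} years out: {weight:.2e}")

# weight on a benefit   1 years out: 9.00e-01
# weight on a benefit  10 years out: 3.49e-01
# weight on a benefit  50 years out: 5.15e-03
# weight on a benefit 100 years out: 2.66e-05
```

A century out, a benefit counts for less than three thousandths of one percent of its present-day value. That is precisely how the infinite sums become finite.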
But something is suspect here. Yes, I value money for me in the future less than money for me in the present. But, other than the small chance the world will end, why should we value people in the future less than people in the present? Frank Ramsey, who in the 1920s laid the intellectual foundations for thinking rigorously about intertemporal considerations, understood this. He argued that discounting the welfare of future generations “is ethically indefensible and arises merely from the weakness of the imagination.” Yet if we do not discount future generations, we cannot get on with the business of quantifying benefits and costs. So, here again, the dictate to quantify shapes our normative standards, perhaps without our even noticing.
Quantification also narrows our field of vision by distorting incentives—pushing policy makers to focus on issues that are easily quantified, whether or not those are the most pressing ones. In the United States, for example, the Office of Management and Budget can essentially veto any major regulatory action if it finds the cost-benefit analysis wanting. As a consequence, Lisa Heinzerling, former head of policy at the Environmental Protection Agency, described constantly asking not whether something was the right thing for environmental protection, but “How can we make this acceptable to OMB?”
In some sense, of course, this is exactly the goal. If quantification requirements do not change the regulations we get, why have them? The concern, however, is that the mandate to quantify does not simply prevent agencies from promulgating regulations for which the cure is worse than the disease; it also discourages work on regulations supported by good arguments whose benefits are too expensive or impractical to quantify.
Perhaps no field of inquiry has had greater impact on policy thought than economics. Quantification, as much as market fundamentalism, underpins that impact. We have been too willing to accept modern quantitative policy analysis as an unalloyed good, without sufficient reflection on the balance of its merits and demerits. This moment of self-examination inspired by Naidu, Rodrik, and Zucman is an opportunity to adjust course. Whether or not quantification's role in policy discourse is ultimately defensible, it has to be defended. I hope that by highlighting some of the perils of quantification, this response might contribute to that process of reflection and reform.