Economic theory identifies the likely consequences of different market structures. Those consequences can be surprising. Rent control, for example, is usually touted as a measure to make housing more affordable. Standard economic theory surprises us by showing that it tends to make housing less affordable. (The evidence seems to support this conclusion of economic theory. For theory and evidence, see Coyne and Coyne 2015.) Journalists and others often speak of “the law of unintended consequences” when discussing such surprises. An economic theory of experts has its surprises as well.
Many of us tend to think of experts as reliable and truthful. Examples of expert failure may be met with calls for oversight or “regulation.” We are not used to asking about the structure of the market for expert opinion. We should. We tend to think of experts in hierarchical terms, but we should take a transactional approach. Different market structures will create different outcomes. The general thrust of both mainstream and mainline economics is that competition tends to produce outcomes that are generally viewed as favorable, whereas monopoly and monopsony tend to produce outcomes that are generally viewed as unfavorable. This generalization applies to the market for expert opinion as well. Details matter. One must not simply invoke the potentially empty words “competition” and “monopoly,” declaring the one to be good and the other bad. But in the market for expert opinion, as with other markets, the general rule is that competition tends to outperform the available alternatives.
In a competitive market for expert opinion, the return to the marginal expert’s specialized knowledge will tend toward the ordinary rate of return adjusted for factors such as risk and the pains or pleasures of acquiring and using that knowledge. Entry restrictions will tend to raise the rate of return on the expert’s specialized knowledge and increase the expert’s monopoly power as measured by the inverse elasticity of demand. In Chapter 11 I will note that professional organizations such as the American Medical Association may work toward entry restrictions that tend to reduce the supply of such professionals and raise the price of their professional advice.
Economists often judge markets by efficiency. Efficiency is good; waste is bad. There are exceptions, to be sure. Efficiency in the market for assassins is probably bad. Nor is efficiency the only thing that matters. Fairness is important, and economists today do not neglect it. (See, for example, Smith 2003; Henrich et al. 2005; Smith 2009; and Henrich et al. 2010.) But efficiency is an important normative criterion often invoked by economists. The economic theory of experts has mostly neglected efficiency so far. The focus has been not on efficiency but on veracity. Truth is good; falsity is bad. In spite of this shift in normative criterion, the generalization favorable to economic competition tends to hold in the market for expert opinion.
An economic theory of experts must identify the commodity being traded. As we have seen, past thinkers have defined experts by their expertise, with the exception of writers on expert witnesses in court. Expertise, however, is not a commodity. It is human capital that allows the expert to produce expert opinions. The expert’s expertise as such is not for sale. The relevant commodity for an economic theory of experts is expert opinion.
An economic theory of experts is a theory of the supply and demand for expert opinion. The commodity has unique properties distinguishing it from other commodities. But market participants are not extraordinary. In particular, experts are people and do not change their human qualities when supplying expert opinions.
Experts respond to the same incentives as people in other areas of human action, and in the same ways. Levy and Peart call this principle “analytical egalitarianism” and apply it “not only to policy makers but also to the experts who influence policy” (2017, p. 7). They say, “We have used the phrase ‘analytical egalitarianism’ to describe the presumption that people are all approximately the same messy combinations of interests” (2017, p. 7).
Experts are not likely to be mustache-twirling fiends, but neither are they likely to be selfless servants of the public interest. For example, experts may be biased by sympathy for their clients. Such bias may emerge from human qualities we value, and yet cause the expert’s opinion to deviate from the public interest. This insight is a truism: experts are ordinary humans, not otherworldly creatures. The disciplined pursuit of this common-sense observation helps us to reach conclusions about experts that might be surprising or counterintuitive.
An economic theory of experts should thus rely on the underlying logic of public choice theory. The Calculus of Consent, which was first published in 1962, is the great early statement of public choice theory. In it, Buchanan and Tullock assumed that “the representative or the average individual acts on the basis of the same over-all value scale when he participates in market activity and in political activity” (1962, p. 19). People are the same in economic and political exchange. The economics of experts pushes the same basic idea by assuming experts are driven by the same motives as nonexperts. In particular, we must abandon the idea that experts seek only the truth without regard to motives such as fame and fortune (Peart and Levy 2005, pp. 87–8). What Buchanan has said of public choice applies to the economics of experts as well. “Public choice did not emerge from some profoundly new insight,” he notes. It “incorporates a presupposition about human nature that differs little, if at all, from that which informed the thinking of James Madison” and, indeed, the “essential scientific wisdom of the 18th century,” which was largely “lost” by the middle of the twentieth century (Buchanan 2003, pp. 11–12). To paraphrase Buchanan, the economics of experts, like public choice theory, does little more than incorporate a rediscovery of this eighteenth-century wisdom and its implications into analyses and appraisals of experts.
Because of its similarities to public choice theory, we might call the economics of experts information choice theory. The expert must choose what information to provide to others. Just as public choice theory includes a theory of government failure, information choice theory includes a theory of expert failure. It helps us to understand, in other words, when relying on experts may not produce the outcomes we desire and expect. It helps us decide when experts are more or less “reliable” in the sense of Chapter 2 and when nonexperts are more or less “empowered.”
Information choice theory supports the view that monopoly expertise tends to produce poorer epistemic performance than competition. It notes, however, that many variables besides the number of experts influence the performance of epistemic systems, including redundancy, “synecological redundancy” (defined in Chapter 9), the correlation structure among expert errors, and conditions of expert entry and exit. Information choice theory replaces the naïve model of the “objective” expert with supply and demand models in which the opposed interests of rival experts can be leveraged to enhance epistemic outcomes. I develop the theory of expert failure in Chapters 10 and 11.
The term “information choice” suggests that scholars should recognize that experts choose what information to convey. This point is recognized in many contexts, including models of asymmetric information, principal-agent models, signaling games, and sender-receiver models. Economists and other scholars do not always apply the insight consistently, however. Levy (2001) and Peart and Levy (2005) note that economists tend to assume other economists are pure truth seekers. In earlier chapters we have seen that experts are sometimes lionized and represented as immune to ordinary incentives.
In information choice theory, an “expert” is anyone paid for their opinion. Economists, forensic scientists, and statisticians are experts; racecar drivers are not. My definition of expert implies that entrepreneurs and profit-seeking enterprises are not experts. An entrepreneur’s output might, of course, be his or her opinion. Consultants are paid for their opinions. But the entrepreneurial function is not identical to that of the expert, nor is one an aspect or subset of the other. The entrepreneur is paid for his or her output. The young Steven Jobs, for example, was paid for his computers, not his opinions on the future of digital technology. This is true even though Jobs would not have cofounded Apple Computer if he had not held prescient opinions on digital technology. Experts are in a different position. They are paid for the opinions themselves.
An economic understanding of experts would improve understanding in areas that economists have given scant attention. Much of the literature on forensic science, for example, assumes that forensic scientists are either pure truth seekers or willful frauds. In an important article on observer effects in forensic science, Risinger et al. (2002) distinguish fraud from unconscious bias. “We are not concerned here with the examiner who, in light of the other findings, deliberately alters her own opinion to achieve a false consistency. That is the perpetration of an intentional fraud on the justice system, and there are appropriate ways with which such falsification should be dealt” (p. 38). Information choice theory challenges this sharp distinction.
Information choice theory tells us that “the line between ‘honest error’ and willful fraud is fluid” in part because “there are no bright lines as we move from the psychological state of disinterested objectivity to unconscious bias to willful fraud” (Koppl 2005a, p. 265). An expert has many ways to introduce bias into their work. The expert may be only half aware of using such techniques, or completely unaware of them. When incentives skew honest error, the erring person knows, presumably, what their incentives are. The error may be “honest,” however, if the person does not know that those consciously known incentives have altered their perceptions. The error may also be “honest” if the person underestimates the effect and therefore fails to fully compensate for it.
Many different perceptions may be altered by incentives. The fingerprint examiner may not notice dissimilarities between a known and unknown print, for example. A research scientist must search for deviations from experimental protocol before accepting the data generated by an experimental trial. That search may be more diligent or thorough when an experimental trial has produced disappointing results. If the scientist is unaware of this asymmetry in their search efforts, the results will be biased in spite of a conscious desire to be unbiased.
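A small simulation can make this asymmetric-search mechanism concrete. The sketch below is purely illustrative; the flaw rate, scrutiny probabilities, and distribution are invented rather than drawn from any study. Every discarded trial really did violate protocol, yet the retained average drifts upward because violations are hunted more diligently when results disappoint.

```python
# Illustrative sketch only: invented numbers, not data from any real study.
# Each simulated trial has a result drawn around a true effect of zero. Some
# trials genuinely violate protocol. The scientist searches hard for violations
# only when a result is disappointing (negative), so more negative results are
# discarded and the retained average is biased upward.
import random

random.seed(1)
TRUE_EFFECT = 0.0
kept = []

for _ in range(100_000):
    result = random.gauss(TRUE_EFFECT, 1.0)
    flawed = random.random() < 0.20             # genuine protocol violation
    disappointing = result < 0.0
    scrutiny = 0.95 if disappointing else 0.05  # asymmetric search effort
    caught = flawed and random.random() < scrutiny
    if not caught:
        kept.append(result)

print("average of retained results:", sum(kept) / len(kept))
```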
If incentives skew “honest” errors, then we should recognize that experts choose what information to share and that incentives influence that choice.
Information choice theory is the application of familiar economic logic to relatively straightforward questions about experts and expertise. Sandra Peart and David Levy have made the most complete articulations so far of an economic theory of experts (Feigenbaum and Levy 1993, 1996; Levy 2001; Peart and Levy 2005; Levy and Peart 2007, 2008a, 2008b, 2010; Levy et al. 2010; Levy and Peart 2017). My coauthors and I have considered comparative institutional analysis and the mechanism design problems associated with information choice (Koppl 2005a, 2005b; Koppl, Kurzban, and Kobilinsky 2008; Cowan and Koppl 2010, 2011). Milgrom and Roberts (1986), Froeb and Kobayashi (1996), Feigenbaum and Levy (1996), and Whitman and Koppl (2010) are information choice models. Koppl (2012b, pp. 177–8) explains why Sah and Stiglitz (1986) is not an information choice model.
Milgrom and Roberts (1986) is a canonical model in information choice theory. The authors consider a naïve recipient of information confronting competitive suppliers of information. “The question at issue,” they write (1986, p. 25), “is under what circumstances competition among providers of information can help to protect unsophisticated and ill-informed decision-makers from the self-interested dissembling of information providers.” If the competitors’ interests are “strongly opposed,” as in a civil trial in a common-law country, even a naïve information recipient will come to the full-information decision. Interests are strongly opposed when for every pair, d, d′, of alternative choices the information recipient might make, one of the interested information suppliers prefers d to d′ and the other prefers d′ to d. If the interests of the competing information suppliers are strongly opposed, then one of them always has an incentive to provide additional information. Assume for a moment that the information revealed to the recipient does not induce the full-information choice, d*. Then it leads to some other choice, d₀. Because interests are strongly opposed, one of the information suppliers prefers d* to d₀ and thus has an incentive to reveal more information. Even though the decision maker is naïve, competition ensures that he reaches the full-information decision.
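This unraveling logic can be sketched in a few lines of code. The sketch below is my own toy rendering, not Milgrom and Roberts’ formal model: the “facts” are hypothetical verifiable signals, the naïve recipient simply follows the balance of whatever has been revealed, and the supplier who prefers the full-information decision keeps disclosing until that decision is reached.

```python
# Toy sketch of the unraveling argument; the facts and decision rule are
# hypothetical and much simpler than Milgrom and Roberts (1986).
FACTS = [+1, -1, +1, +1, -1, +1]          # all verifiable evidence, signed

def naive_decision(revealed):
    """The naive recipient simply follows the balance of revealed evidence."""
    return "d*" if sum(revealed) > 0 else "d0"

full_info = naive_decision(FACTS)          # the full-information decision

revealed, hidden = [], list(FACTS)
while hidden and naive_decision(revealed) != full_info:
    # Interests are strongly opposed, so whenever the current decision is not
    # the full-information one, the supplier who prefers the full-information
    # choice has an incentive to reveal one more fact favorable to it.
    fact = max(hidden) if full_info == "d*" else min(hidden)
    hidden.remove(fact)
    revealed.append(fact)

print("full-information decision:", full_info)
print("decision actually reached:", naive_decision(revealed))
```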
The Milgrom and Roberts model shows that a battle of the experts is not a race to the bottom (Koppl and Cowan 2010). It shows that competition among experts will influence their choices of what information to share. Their result suggests the epistemic value of competition among experts whose interests are opposed.
Feigenbaum and Levy (1996) is also a canonical model of information choice. The authors imagine a biased researcher estimating the “central tendency” of a random variable. They consider both the researcher who wants as large a number as possible and the researcher who wants the smallest number possible. The researcher will use several estimators and report the result that best fits their bias. Feigenbaum and Levy (1996) run a simulation study with several symmetric distributions. They compute the “central tendency” of each distribution in four different ways, namely, “the mean, the midrange, the median and a 20 percent trimmed mean” (p. 269). (To estimate a 20 percent trimmed mean, ignore the largest 20 percent and the smallest 20 percent of values in a sample and take an ordinary mean of the remainder.) Each of these techniques (when applied to symmetric distributions) is unbiased considered in itself. But the technique of using them all while reporting only preferred results is decidedly biased, as Feigenbaum and Levy (1996) show in detail.
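A few lines of code reproduce the flavor of their result. This is a back-of-the-envelope sketch under my own assumptions (a standard normal distribution, a sample size of 30, and 20,000 replications), not the authors’ actual simulation design: each estimator of the center is unbiased by itself, but a researcher who computes all four and reports only the largest obtains estimates that are, on average, too high.

```python
# Illustrative sketch in the spirit of Feigenbaum and Levy (1996); the
# distribution, sample size, and replication count are my own choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 20_000
honest, reported = [], []

for _ in range(trials):
    x = rng.normal(loc=0.0, scale=1.0, size=n)    # true center is 0
    estimates = [
        np.mean(x),                                # mean
        (np.min(x) + np.max(x)) / 2,               # midrange
        np.median(x),                              # median
        stats.trim_mean(x, 0.2),                   # 20 percent trimmed mean
    ]
    honest.append(estimates[0])                    # always report the mean
    reported.append(max(estimates))                # report the largest estimate

print("average when always reporting the mean:", np.mean(honest))
print("average when reporting the largest:    ", np.mean(reported))
```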
Feigenbaum and Levy (1996) show that fraud may be unnecessary for a biased scientific expert. The strategic choice of which results to report supports the expert’s bias. The expert in their model must make a choice about what information to share.
Whitman and Koppl (2010) present a rational-choice model of a monopoly expert. In their model a monopoly forensic scientist chooses whether to “match” ambiguous crime-scene evidence to a suspect. (The assumption of binary choice is a simplification. In forensic-science practice the word “match” is used less often than words such as “individualization,” “association,” and “consistent with.”) Because the evidence is ambiguous, the forensic scientist must choose when to declare a match. The forensic scientist must make a choice about what information to convey. He or she must choose whether to report “match” or “no match.” Whitman and Koppl show that a rational Bayesian will be influenced by the results of the forensic examination, but also by their prior estimate of the probability of guilt and by the ratio of the disutility of convicting an innocent to the utility of convicting the guilty. In some cases, priors and utilities may render the results of the forensic examination irrelevant to the expert’s expressed opinion. They note the importance of institutional factors in influencing both priors and utilities. Working as an employee of the police department, for example, will likely increase the forensic scientist’s prior belief in the suspect’s guilt and lower the disutility of convicting an innocent relative to the utility of convicting the guilty. The institutional structure creates a bias even when experts are perfectly “rational” Bayesian decision makers.
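A stripped-down version of this logic fits in a short script. The sketch below is not Whitman and Koppl’s model; the likelihood ratios, priors, and utility numbers are hypothetical. It simply shows that a Bayesian examiner who maximizes expected utility may report “match” even when the examination itself is uninformative, provided the prior probability of guilt is high enough and the disutility of convicting an innocent is low enough.

```python
# Hypothetical illustration, not Whitman and Koppl's (2010) actual model.
def posterior_guilt(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

def report(prior, likelihood_ratio, u_convict_guilty, d_convict_innocent):
    """Report 'match' when the expected utility of doing so is positive."""
    p = posterior_guilt(prior, likelihood_ratio)
    expected_utility = p * u_convict_guilty - (1 - p) * d_convict_innocent
    return "match" if expected_utility > 0 else "no match"

# An uninformative examination (likelihood ratio of 1) with a cautious examiner:
print(report(prior=0.5, likelihood_ratio=1.0,
             u_convict_guilty=1.0, d_convict_innocent=10.0))   # -> no match
# The same uninformative examination with a high prior of guilt and little
# weight on false convictions, as might arise for a police-employed examiner:
print(report(prior=0.95, likelihood_ratio=1.0,
             u_convict_guilty=1.0, d_convict_innocent=1.0))    # -> match
```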
If experts are paid for their opinions, then they are agents of the payers, their principals. Thus, the subject of expertise has been treated in the economics literature mostly in the context of principal-agent models. It is probably fair, however, to distinguish standard principal-agent models from information choice theory. In the canonical model of Ross (1973), the principal cannot observe the agent’s action; the principal can observe the payoff, which depends on chance and the agent’s action. This model clearly applies to situations in which the agent is not an expert. Workers paid on commission or at piece rates, for example, are not being paid for their opinions, but for their results. Ross’s assumption that payoffs are observable does not always apply to experts. The doctor tells me I will die tomorrow if I do not take his patent medicine. I take the tonic and live another day. My continued existence does not help me to discriminate between the hypothesis that the doctor is a quack and the hypothesis that the doctor saved my life.
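The point can be put in a small Bayesian calculation; the numbers here are invented for illustration. If I would very likely have survived the day with or without the tonic, then observing my survival barely shifts my belief about whether the doctor is a quack.

```python
# Toy illustration with invented probabilities; not a model from the literature.
def posterior_quack(prior_quack, p_survive_if_quack, p_survive_if_good):
    """Posterior probability that the doctor is a quack, given that I survived."""
    joint_quack = prior_quack * p_survive_if_quack
    joint_good = (1 - prior_quack) * p_survive_if_good
    return joint_quack / (joint_quack + joint_good)

# Survival is nearly certain either way, so one observed survival is almost
# uninformative: the posterior stays close to the 0.5 prior.
print(posterior_quack(prior_quack=0.5,
                      p_survive_if_quack=0.98,
                      p_survive_if_good=0.999))   # roughly 0.495
```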
A similar logic seems to apply to the expert opinions of economists. Economists have debated whether “the stimulus” created by the American Recovery and Reinvestment Act of 2009 worked. This debate turns mostly on the size of the Keynesian multiplier, and opinions differ on that topic. If the multiplier is low, the stimulus did not work. If the multiplier is high, the stimulus prevented output and employment from going even lower. In this situation, the payoff of the actions taken is not observable.
In the example just given, there may be some ambiguity about the identity and preferences of the principal. In other cases, though, it seems clear that experts may be hired to provide correct information and that the information they provide cannot be confirmed or can be confirmed only at a relatively high cost to the principal. The case of Brandon Mayfield illustrates the point. Brandon Mayfield was arrested as a material witness in connection with the Madrid train bombing of March 11, 2004. He was arrested after the FBI made a “100% identification” of him as the source of a latent fingerprint at the crime scene (OIG 2006, pp. 64 and 67–8). Mayfield’s attorney requested an independent opinion and the court agreed to pay for an examiner to be chosen by the defense (OIG 2006, p. 74). That examiner supported the FBI identification of Mayfield as the source of the crime-scene print (OIG 2006, p. 80). The Spanish authorities, however, connected the crime-scene fingerprint to a different person. It seems the Spanish authorities were right and the FBI mistaken. The FBI withdrew its identification and declared the crime-scene fingerprint to be of “no value for identification purposes” (OIG 2006, pp. 82–8). Mayfield was released and the FBI issued an apology (OIG 2006, pp. 88–9). In this case, the independent fingerprint examiner was an agent for the defense; yet the agent provided information that the defense could not challenge or question. This examiner’s error might not have been revealed if the Spanish authorities had not fortuitously identified a more likely source for the crime-scene fingerprint.
The example of Brandon Mayfield illustrates a difference between standard principal-agent models and information choice models. In Ross (1973), the agent’s actions are distinct from their output. If the agent is an expert, the observable outcome of the expert’s activity is a part of the agent’s action and not separable from it. This lack of separability does not help the principal to monitor the agent if the principal lacks the expertise that the agent was hired to deploy. Thus, it may be difficult for the principal to assess the outcome of the agent’s actions. As the Mayfield case illustrates, getting opinions from other experts may not solve the principal’s monitoring problem if errors and inaccuracies are correlated across experts. The model of Milgrom and Roberts (1986) discussed previously in this chapter suggests that competition among experts with opposed interests may help third parties to judge expert opinions.
Information choice theory is more sensitive to the fallibility of experts than standard principal-agent models. It makes the somewhat innovative assumption that incentives skew expert errors, including “honest” errors. Information choice theory also gives greater attention to four motives absent from standard principal-agent models, namely identity, sympathy, approbation, and praiseworthiness. Finally, information choice theory does not presume an isolated principal-agent institutional structure. Instead, the theory recognizes that the larger institutional context may create different degrees and forms of competition among experts. In both its positive and normative aspects, information choice theory explicitly considers what we might call the “ecology of expertise.”
Information choice models will often assume asymmetric knowledge, but information choice and asymmetric information are distinct. A model of asymmetric information might contain no experts, and a model of information choice may contain no asymmetric information. In the illustrative model of Akerlof (1970), that of the market for used cars, the used-car owner has information unavailable to the potential buyer. But the owner is not an expert because he is not paid for his opinion, but for his car. The potential buyer knows the relative frequency of lemons, but has no information indicating whether any particular car is a good car or a lemon. Thus, Akerlof’s basic model has no experts and is not an example of information choice theory. A referee or umpire in a sporting event need not have asymmetric information, particularly when it is cheap to make a video recording of the event and study an instant replay of close calls. Nevertheless, we pay referees and umpires to give their expert opinions about whether a goal was scored or a foul committed. Arbiters do not always have asymmetric information about the facts of the dispute or the rules governing dispute resolution. And yet they are paid as experts to give an opinion that may be binding on the parties. Thus, information choice does not necessarily imply asymmetric information. Nevertheless, it seems likely that the experts in most information choice models will have asymmetric information. A physician, for example, is an expert with asymmetric knowledge. The patient is buying a mixture of medical services and medical advice.
Expert opinion is often a “credence good,” which Darby and Karni (1973, p. 69) define as goods whose quality “cannot be evaluated in normal use.” As the umpire example illustrates, however, expert opinion may be subject to “evaluation in normal use.” Thus, although models of credence goods may prove useful for information choice theory, the two model classes are distinct. The credence goods literature has focused on cases such as car repair in which the same party supplies diagnosis and treatment. The question is whether the supplier will recommend a needlessly costly treatment. Darby and Karni (1973) find that market mechanisms such as branding can mitigate, but not eliminate, the risk of fraud with credence goods. They doubt the efficacy of “governmental intervention even in markets where deliberate deception is a regular practice” because “governmental evaluators will be subject to much the same costs and temptations as are present for private evaluators” (p. 87). Emons (1997, 2001) finds fraud-free equilibria under both competition and monopoly. Dulleck and Kerschbamer (2006) provide a simple model that generates most of the earlier results in this literature. The literature shows that under both monopoly and competition, cheating or overtreatment occurs less than our untutored intuition might have supposed.
There are many sources of demand for expert opinion. Households may demand expert opinions when they are dissatisfied with reputational mechanisms and word of mouth. Word of mouth transmits something similar to expert opinions. But instead of an explicit quid pro quo, it is a form of gift exchange. (Mauss 1925 is the classic study of gift exchange, but see also experimental studies such as McCabe et al. 2001 and Henrich et al. 2005.) I tell you my experience with different butchers and you tell me your experience with different bakers. We share our opinions. When this way of getting information begins to seem unsatisfactory, information seekers may begin to demand expert opinions. Presumably, such dissatisfaction grows more likely as group size grows. Gossip may provide adequate information about social partners for groups of about 100–200 persons, the group size Aiello and Dunbar (1992, p. 185) associate with the emergence of modern human language. But this sort of information sharing may still function well in many relatively small-numbers contexts such as that of a local neighborhood in a city.
Film critics once provided expert opinions on what movies their readers would likely enjoy. Information aggregation sites such as Rotten Tomatoes have at least partially displaced this function for many moviegoers. In general, information aggregation services can sometimes substitute for the paid opinions of experts.
Businesses also demand expert opinions. As I noted in Chapter 2, managers require the opinions of experts in many areas, including engineering, accounting, and finance. Such experts will often be members of professional organizations and bound by professional standards and ethics. Langlois (2002, pp. 19–20) notes the modular structure of the division of labor in market economies. Such modularity in production corresponds to modularity in knowledge, which is the context for the emergence of professions such as accountancy. Business managers draw on this modularized knowledge by seeking the expert opinions of various professionals. These professionals have a duty to their clients or employers, but also to the epistemic and ethical standards promulgated (for good or ill) by their professional associations.
Both businesses and households face an uncertain future. Expanding on the concepts of a “market for preferences” (Earl and Potts 2004) and “novelty bundling” (Potts 2012), Koppl et al. (2015a) discuss “novelty intermediation.” Koppl et al. (2015b, p. 62) say: “With Potts (2012) and Earl and Potts (2004), the idea is that certain businesses know about recent innovations that have already taken place, whereas the retail consumer does not. These businesses inform the consumer by suggesting certain combinations or offering products that exhibit certain combinations.” In the analysis of Koppl et al. (2015a), instead, “the intermediary knows what combinations of inputs to the firm’s production process may generate new discoveries” (2015b, p. 62) and their corresponding innovations.
Finally, of course, governments may demand expert opinions. As I have already noted, the American progressive movement essentially wanted to establish the rule of experts (Wilson 1887; Leonard 2016). But even in more liberal and democratic regimes, governments will rely on experts in law, military strategy, espionage, and so on.
Households, businesses, and governments may demand expert opinions to help them know the unknowable. I have discussed the oracle mongers of Athens. Ancient generals often sought auguries. Traders in financial markets demand “technical analysis” of stock price movements even though theory, history, and good common sense show it to be useless. (See, for example, LeRoy 1989; Arthur et al. 1997; and Brock and Hommes 1997.) There has always been brisk demand for medical advice, even when medical experts were more likely to do harm than good, and for weather forecasting, even before scientific meteorology existed. There is always a brisk demand for magical predictions of the unpredictable. Expert failure is likely in the market for impossible ideas even under more or less competitive conditions. Competition helps even here, but only so much.
There are many sources of supply of expert opinion. As we have seen, in some discussions, experts are taken to be figures whose opinions are, in Schutz’s (1946, p. 465) formulation, “based upon warranted assertions.” This sort of view makes the suppliers of expert opinion a breed apart. Bogus experts then become Lakatosian “monsters” (Lakatos 1976, p. 14) who are to be explained away by fraud, or lack of regulation, or some other special external consideration. A theory building on a broadly Hayekian conception of the division of knowledge recognizes that anyone can be an expert. For this reason, I have little to say in general about the supply of expert opinion beyond examining the market structure. But in the next chapter, I will discuss information choice theory’s assumption about the motives of experts.