10 Expert Failure and Market Structure

Experts fail when they give bad advice. In its broadest meaning, expert failure refers to any deviation from a normative expectation associated with the expert's advice.

Two Dimensions of Expert Failure

Expert failure is more likely when experts choose for their clients than when the clients choose for themselves. And in broad brush, we may say that expert failure is more likely when experts have an epistemic monopoly than when experts must compete with one another, although the details of the competitive structure matter, as we shall see. In Chapter 5 I noted that the unavoidable word "competition" may easily create misunderstanding. I will use the phrase "ecology of expertise" in part to underline the synecological quality of the sort of competition among experts that might help to reduce the chance of expert failure. These two dimensions of expert power suggest the four-quadrant diagram of Table 10.1, which identifies four cases: (1) the rule of experts, (2) expert-dependent choice, (3) the quasi-rule of experts, and (4) self-rule or autonomy. The greater the freedom of nonexperts to ignore the advice of experts, the lower is the chance of expert failure, ceteris paribus. And the more competitive is the market for experts, the lower is the chance of expert failure, ceteris paribus.

Table 10.1

Rule of experts (expert decides for the nonexpert; monopoly expert). Examples include state-administered eugenics programs, central planning of economic activity, and central bank monetary policy. Highest chance of expert failure.

Quasi-rule of experts (expert decides for the nonexpert; competitive experts). Examples include school vouchers, Tiebout competition, and representative democracy.

Expert-dependent choice (nonexpert decides based, perhaps, on expert advice; monopoly expert). Examples include religion under a theocratic state and state-enforced religion.

Self-rule or autonomy (nonexpert decides based, perhaps, on expert advice; competitive experts). Examples include periodicals such as Consumer Reports, the market for preferences, and venture capital. Lowest chance of expert failure.

The Rule of Experts

The rule of experts creates the greatest danger. Here, monopoly experts decide for nonexperts. State-sponsored eugenics may be the most obvious example. The state hires a eugenicist to tell it which persons should be allowed to reproduce. I noted in Chapter 4 that we cannot, unfortunately, view this sort of thing as entirely behind us (Galton 1998; Stern 2005; Ellis 2008; Johnson 2013; Shreffler et al. 2015). Of course, eugenic principles were taken to devastating extremes in Nazi Germany.

Examples of the rule of experts include seemingly more moderate and reasonable cases of expert control, such as monetary policy under central banking. Steven Horwitz (2012) examines this case closely in a penetrating analysis of the history of the Federal Reserve System in the United States. He notes that in the financial crisis of 2008 the Fed began to exercise a variety of new powers that it saw as necessary to deal with the unfolding crisis even though it had no authority to do so. "[T]hose powers were largely seized in the sense that there was no real debate, either in Congress or the public at large, over whether the Fed's acquisition of those new powers was desirable or not" (p. 67). Horwitz notes, pointedly, "[I]n the face of what the Fed claimed (rightly or wrongly) was the impending meltdown of the financial system, claims of expertise triumphed over democratic political processes" (p. 68). We have seen both Turner (2001) and Jasanoff (2003) express an aversion to expertise unfettered by democratic constraints. Horwitz (2012) points out how difficult it can be to exercise democratic control of experts in a context such as central banking. He says: "Where there is only one person or organization responsible for a complex task, it inevitably will look to experts to help achieve its goals and use that need for expertise as a way to shield it from outsiders in general, and critical ones in particular" (p. 62). Horwitz here describes the sort of mystery-making we saw Berger and Luckmann (1966) warn of in their critique of nihilation.

Horwitz (2012, pp. 72 and 77) recognizes a dynamic aspect to the link between expertise and monopoly. A belief in expertise calls forth monopoly, and monopoly calls forth a need for expertise that is genuine given the institutional context of monopoly. In the case of central banking, experts come to be in charge of the decision-making process with no alternative sources of expertise and no possibility of exit for those who use the product. But then there can be no strong checks on the accuracy of the decisions being made and the expert policymakers are further able to shield themselves from feedback by cloaking their decisions in language that only other experts can really understand. Monopoly creates the need for conscious policy, and then policymakers are able to close off competing perspectives and obfuscate exactly what they are doing and why.

Evidence in White (2005) supports Horwitz's suggestion that experts in monetary policy have shielded themselves from criticism. He shows that macroeconomic researchers in the United States are dependent on the Federal Reserve System.

White concludes dryly: "Fed-sponsored research generally adheres to a high level of scholarship, but it does not follow that institutional bias is absent or that the appropriate level of scrutiny is zero" (White 2005, p. 344).

Both eugenicists and economists provide examples of the rule of experts. As we saw in Chapter 3, John Maynard Keynes favored the rule of experts in both areas, as well as in morals. He wanted state policy in economy, population, and morals. We saw Singerman's interpretation of Keynes, wherein successful planning in any one area depended on successful planning in the other two (2016, p. 564).

Singerman's interpretation of Keynes sheds new light on Keynes's famous letter to Hayek on the latter's book, The Road to Serfdom. Keynes told Hayek he was in "deeply moved agreement" with his book. And yet he went on to defend state economic planning against Hayek's criticism: "But the planning should take place in a community in which as many people as possible, both leaders and followers, wholly share your moral position. Moderate planning will be safe if those carrying it out are rightly orientated in their own minds and hearts to the moral issue."

Keynes seems to have thought that the real core of Hayek's warning was that the wrong morality might prevail, a concern he shared deeply. He thought Hayek had overlooked the vital eugenic dimension to moral error and for this reason was led to a spurious antiplanning stance. Hayek got off to a good start with his call for good morals, Keynes thought, but ran off the rails by neglecting the eugenic dimension of morality, which requires economic planning.

Expert-Dependent Choice

Religion provides examples of expert-dependent choice. The monopoly priest offers his advice on correct behavior and how to enter paradise. In many cases the priest's advice has no coercive force behind it, and the priest is left to complain of his parishioners' sins.

In the United States, religions compete freely. Experts on the afterlife and other religious matters compete. But in many times and places, such as the Roman Empire after Constantine, religious experts have enjoyed a state-supported monopoly. In Chapter 2 we saw Adam Smith argue that religious competition produces "candour and moderation" in religious leaders. Smith's analysis is supported by Buddhist texts describing how the Buddha drew followers in a competitive market for gurus: "Instead of mysterious teachings confided almost in secret to a small number, he spoke to large audiences composed of all those who desired to hear him. He spoke in a manner intelligible to all ... He adapted himself to the capacities of his hearers" (Narasu 1912, p. 19). Walpola Rahula (1974) says that faith or belief as understood by most religions has little to do with Buddhism. The Buddha taught religious toleration and emphasized the student's need to see the truths being taught for themselves rather than accepting them on authority. Rahula says:

The question of belief arises when there is no seeing ... The moment you see, the question of belief disappears. If I tell you that I have a gem hidden in the folded palm of my hand, the question of belief arises because you do not see it yourself. But if I unclench my fist and show you the gem, then you see it for yourself, and the question of belief does not arise. So the phrase in ancient Buddhist texts reads: "Realizing, as one sees a gem in the palm."

(1959, pp. 8–9)

The metaphor of the gem in the palm helps to suggest that competition turns wizards into teachers.

If Smith is right about religious competition, then monopoly priests tend to be mysterious and immoderate. And, indeed, in 1427, more than forty years after the heretic John Wycliffe died, the Roman Pope ordered his bones to be exhumed and burnt, and his ashes scattered on the River Swift. This action completed the anathema pronounced on Wycliffe, and on a list of 267 articles from his writings, at the Council of Constance on 4 May 1415 (Hudson and Kenny 2004). While the action was taken in response to Wycliffe's published heretical opinions, it seems reasonable to guess that Wycliffe's case was further damaged when he helped to inspire others to translate the Latin Bible into English. This translation ensured that the book would be, as the near-contemporary chronicler Henry Knighton put it, "common and open to the laity, and to women who were able to read, which used to be for literate and perceptive clerks" (Knighton's Chronicles as quoted in Hudson and Kenny 2004). The Bible was once the exclusive and mysterious province of "literate and perceptive clerks" who wished to exclude the laity from reading it.

The story of John Wycliffe shows that experts may act with great force and violence to preserve their monopoly position as the "officially accredited definers of reality" (Berger and Luckmann 1966, p. 97). But if their expert advice to the laity is not enforced by measures more concrete than the threat of damnation, the laity can choose whether to follow their advice. Rich laics in the old days could sin freely and then pay for an indulgence. Purchasing indulgences may have seemed a good hedging strategy to rich unbelievers who were not fully convinced that the priests were wrong.

I do not want to look like an apologist for Buddhism. The beauty of the Buddha's message of acceptance and toleration does nothing to alter the crooked timber of humanity. There are, unfortunately, examples of Buddhist crimes far worse than the Church's judgment on Wycliffe. For example, Buddhist monks in Burma have recently inspired and perpetrated murderous violence against Muslims in that country (Coclanis 2013; Kaplan 2015; Siddiqui 2015). The stated justifications for these attacks are, of course, spurious. The thinnest of arguments will suffice when our will is strong. As Benjamin Franklin noted wryly, "So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do" (Franklin 1793, p. 27).

In religion as in other markets, it can be hard to maintain a monopoly. The Church's fury at Wycliffe did not prevent other heretics from coming along and founding what we now call Protestant sects. We saw in Chapter 3 that philosophy in the Socratic tradition challenged the monopoly of official religion in Athens. In that case, it was not one religion against another. Rather, Socratic philosophy challenged the monopoly of Athenian religion, much as a coffee importer might challenge a tea monopoly.

The Quasi-Rule of Experts

Under the quasi-rule of experts, experts choose for nonexperts, but compete among themselves for the approval of nonexperts. Voucher programs create the quasi-rule of experts. With vouchers, parents may exercise some choice of public school without thereby acquiring control of the school's curriculum. Tiebout competition provides another example. Tiebout (1956) noted that local communities compete for residents. If the costs of moving from one jurisdiction to another are low enough, communities will specialize in the mix of services provided, thereby attracting citizens with similar preferences over public services. The public services provided under Tiebout competition will generally entail expert choices imposed on the local citizens. Local government experts choose school curricula and programs. Local judges interpret the law. City planners decide how to pave sidewalks and where to put traffic lights. Such expert choices typically involve some prior citizen input, to be sure. In the end, however, the experts decide. Finally, representative democracy creates the quasi-rule of experts. The citizens vote for representatives who are or will become experts in public policy questions. These representatives then choose for the people what path to take.

The theory of public choice shows that representative democracy may go wrong, especially when the state takes on a large number and variety of functions. On the other hand, Tiebout competition seems to produce good outcomes for people rich enough to exercise their exit option relatively easily. The difference in outcome in these two cases depends in part on the feedback loop. With Tiebout competition, the citizens experience the consequences of expert choice fairly clearly and quickly. And the decision to exit can have a fairly immediate negative consequence for the expert. In representative democracy in larger jurisdictions such as the nation-state, the feedback loop will generally be looser. It is hard to know whether expert decisions have made things better or worse in general or for you in particular. Thus, the quasi-rule of experts is less likely than the rule of experts, but more likely than autonomy, to induce expert failure.

Self-Rule or Autonomy

Finally, experts may compete among themselves to provide mere advice to nonexperts, who may freely accept or reject that advice. Consumer Reports magazine is a relatively pure example of such autonomy. The magazine has experts examine and test a variety of products within some category, such as microwave ovens or baby cribs. The team of experts gives their opinions on each product's performance, including safety and reliability. No one is required to subscribe to the magazine. The magazine's subscribers can buy the product recommended by the experts or one disparaged by the experts, who have a strictly advisory role and no power to choose for the consumer. Moreover, the magazine has many competitors. It is but one source of expert opinion on consumer products. The probability of expert failure in this case is low. And, indeed, I am aware of no argument to the effect that Consumer Reports is somehow bad or dangerous. There have been errors, although the number seems to be quite low. In 2007, for example, the magazine found that all but two of the child safety seats it tested failed to provide adequate protection in a side-impact crash at thirty-eight miles per hour. In fact, the test had been conducted at about seventy miles per hour. The magazine corrected the error within two weeks. (See Claybrook 2007.) Notice that in this case the error was not one that might have put children in danger, only manufacturers.

In the previous chapter we discussed the market for preferences (Earl and Potts 2004 and Potts 2012) and novelty intermediation. These are further examples of self-rule or autonomy. Venture capitalists may provide novelty intermediation as well as capital. With both the market for preferences and novelty intermediation there seem to be relatively few cases of expert failure.

My comments on self-rule may suggest that we do not have to assume that each individual is the best judge of their own case to be in favor of autonomy. Adam Smith (1759, II.2.11) said that "Every man is, no doubt, by nature, first and principally recommended to his own care; and as he is fitter to take care of himself than of any other person, it is fit and right that it should be so." Holcombe (2006) notes, "Economists often argue that individuals are the best judges of their own well-being" (p. 210). We are sometimes told that each individual is the best judge of their own interests and their own comparative advantage. This statement may seem to suggest that only one person is judging the best use of my time, namely me. But in commercial society many decentralized actors have a role in judging how I should spend my time. I am one of them, but so are my family members, my employer, potential employers, religious leaders, doctors, lawyers, financial advisors, and reporters for Consumer Reports. This list includes persons who act as experts in the sense of this book. Employers and potential employers are important figures on this list, but they are not experts in my sense. One of the functions of the entrepreneur is to judge how to use the labor time of others. In a more or less unfettered market economy, entrepreneurs with a comparative advantage in making such judgments will usually continue to be in a position to offer workers a guaranteed wage in exchange for the right to direct the workers' efforts. Outside the workplace, the individual has many sources of advice on how to behave, which brings us back to experts. They may seek advice from religious leaders, self-help manuals, life coaches, and so on. The Great Original self-help book in America is the autobiography of Benjamin Franklin, which includes Franklin's "Plan for Attaining Moral Perfection." The advantage of individual autonomy is not so much that the individual chooses their own path.
The individual is not always the best judge of their own well-being. The advantage of autonomy consists in the increased probability, relative to available alternatives such as the rule of experts, that the individual will be guided, in the different aspects of their life, by persons enjoying a comparative advantage in providing such guidance. We should not compare an idealized picture of self-rule to a realistic or worse than realistic picture of alternatives such as the rule of experts. Nor should we compare an idealized picture of alternatives such as the rule of experts to a realistic or worse than realistic picture of self-rule. We should compare self-rule, the way it really works, to alternatives, the way they really work. In other words, we should not commit the Nirvana fallacy (Demsetz 1969).

Identity, Sympathy, Approbation, and Praiseworthiness

In earlier chapters we saw references to the motives of identity, sympathy, approbation, and praiseworthiness. These motives can induce expert failure if they are in some way misplaced. For example, some forensic scientists have a strong sense of identification with law enforcement that may create an unconscious bias toward results favoring the police theory of a case. An expert with more sympathy for other experts than for their client may fail to detect an error in a colleague's work even when that error is damaging to their client. An expert who seeks the approbation of other experts may be led into expert failure. Even the seemingly pristine motive of praiseworthiness can go wrong. A forensic scientist who knows they are working on a murder case may feel a duty to be sure the case is solved. That desire to solve the case, however, may precipitate a false positive match to the police suspect. The noble desire to be worthy of praise may nevertheless be a cause of expert failure. This possibility underlines the importance of rivalry, redundancy, synecological redundancy, and other aspects of market structure. Even if experts were angels, a poor market structure could promote expert failure.

Observer Effects, Bias, and Blinding

Observer effects induce bias, which is an important cause of expert failure. Within the current literature, blinding is probably the leading therapeutic response to bias. Blinding is the hiding of information, as in double-blind drug studies. In such studies, the patient does not know whether they are receiving medicine or a placebo. And the experimenters do not know which patients get the drug and which the placebo. (See Schulz and Grimes 2002 for a biting and cogent analysis of blinding protocols and their sometimes poor application.) Blinding is desirable in a large variety of cases. But synecologically bounded rationality creates inherent limits to blinding, which implies the necessity of auxiliary precautions.
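The logic of double-blind assignment can be sketched in a few lines of code. The sketch below is purely illustrative, and the function name, the kit-code format, and the balanced allocation scheme are my assumptions rather than a description of any particular trial design. The point is structural: patients and observers see only an opaque code, while the key linking codes to treatments is sealed away until all outcomes have been recorded.

```python
import random

def double_blind_assignment(patient_ids, seed=42):
    """Assign each patient to 'drug' or 'placebo' behind opaque kit codes.

    Patients and observers see only the kit code. The code-to-treatment
    key would be held by a third party and opened only at unblinding.
    """
    rng = random.Random(seed)
    # Balanced allocation, shuffled so that kit-code order reveals nothing.
    n = len(patient_ids)
    treatments = (["drug", "placebo"] * ((n + 1) // 2))[:n]
    rng.shuffle(treatments)
    key = {}      # kit code -> true treatment (sealed until unblinding)
    blinded = {}  # patient -> kit code (all that anyone in the trial sees)
    for i, (pid, treatment) in enumerate(zip(patient_ids, treatments)):
        code = "KIT-{:04d}".format(i)
        key[code] = treatment
        blinded[pid] = code
    return blinded, key

# Observers record outcomes against kit codes only; the key is consulted
# only after every outcome is in.
blinded, key = double_blind_assignment(["P1", "P2", "P3", "P4"])
```

The design choice worth noting is the separation of the two dictionaries: removing the experimenters' access to `key` during the trial is precisely what distinguishes a double-blind from a single-blind protocol.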

Podolsky et al. (2016) trace blinding of subjects back to the "trick trials" of the late sixteenth century. Disputes over exorcism led to the adoption of bogus holy water and sham relics of the holy cross being used in exorcism trials to determine whether overenthusiasm, autosuggestion, or deceit, as opposed to the devil, was the cause of the behavior of those afflicted (p. 46). Later, Louis XVI commissioned a group including Benjamin Franklin, Antoine Lavoisier, and Joseph-Ignace Guillotin to investigate Franz Anton Mesmer's claims that he could use "animal magnetism" to cure sick patients. This group knew of the trick trials from reading Montaigne (Podolsky et al. 2016, p. 46). Franklin and the others used techniques such as blindfolding the patients to hide from them information about whether or when animal magnetism was at work. They concluded that Mesmer's technique had no scientific merit (Podolsky et al. 2016).

Podolsky et al. (2016) trace awareness of observer effects only back to the nineteenth century, when observer bias was noted to occur across all scientific disciplines, conventional or unconventional. Most notably, astronomers had described the impact of the "personal equation" in the recording of seemingly objective data (p. 50). A study from 1910 seems to have applied observer blinding "in an irregular, unplanned, and haphazard manner" (Podolsky et al. 2016, p. 51). Hewlett (1913) is the earliest study Podolsky et al. (2016) note in which the observers were subject to consistent blinding protocols (p. 51).

The Hewlett (1913) study begins: "It has been claimed that the sodium salicylate prepared from natural oils is superior as a therapeutic agent to the sodium salicylate prepared by synthetic methods." It thus addressed an issue of urgent concern to pharmaceutical manufacturers. The study compared natural and synthetic sodium salicylate for the treatment of different diseases, especially rheumatic fever (Hewlett 1913, p. 319). The natural version was prepared by a practitioner from oil of birch, and the synthesized version was manufactured by Merck. The study was initiated by the AMA's Council on Pharmacy and Chemistry, which had been founded in part to promote cooperation between medical professionals and the newly emergent pharmaceuticals industry (Stewart 1928). The study concluded: "natural and synthetic sodium salicylate are indistinguishable so far as their therapeutic and toxic effects on patients are concerned" (Hewlett 1913, p. 321). Thus, the organization that was formed to promote good relations with pharmaceutical manufacturers conducted a study showing that one manufacturer's product was not inferior to the competing natural version. The reader may judge whether the Hewlett study was, therefore, an auspicious beginning to randomized controlled trials.

Podolsky et al. (2016) report that we see "the scattered uptake of observer blinding" from the 1910s through the 1930s (p. 52). The practice grew more systematic after World War II: "By 1950, Harry Gold and his colleagues could for the first time officially label studies in which both patients and physicians (or subjects and researchers) were blinded as double-blind tests" (Podolsky et al. 2016, p. 53). Robertson (2016, p. 27) says that "mostly in the last 60 years" the double-blind randomized, placebo-controlled trial has become "the gold standard" for scientific inquiry.

In a great variety of contexts, blinding is a valuable and important tool to mitigate observer effects. In forensic science, for example, "sequential unmasking" is a desirable protocol (Krane et al. 2008). Scientists regularly employ a variety of blinding procedures. In Chapter 9 we saw that Mendel's results seem to have been distorted by observer effects and might have been improved by the use of blinding protocols.

I am not aware of any reason to question the basic claim that blinding can reduce bias. But we should recognize, I think, an important limit to the principle of blinding. Blinding protocols cannot eliminate what I will call synecological bias, which is the bias arising from synecologically bounded rationality. The division of knowledge makes it impossible for anyone to avoid a limited and partial perspective, which implies a kind of parochial bias in our perceptions and judgments. Only multiplying the number of experts and putting them in a position of genuine rivalry can mitigate this important form of bias. Blinding would be sufficient if knowledge were hierarchically structured and if the only bounds on rationality left it guided by the all-seeing eye described by Felin et al. (2017). In that case, all bias would be induced bias, as we may call it. Domain-irrelevant information, inappropriate incentives, or the emotional context may induce a bias. Remove that distorting influence and nothing is left to skew the flat plane of reason away from the objective truth. Reason thus conceived has no need of synecological redundancy, because it is, when free of induced bias, automatically in accord with the truth. The situation, however, is more richly textured if there is a Hayekian division of knowledge in society.

If knowledge is SELECT, the rationality of experts is synecologically bounded, and blinding protocols can only be partial measures. Like all of us, experts have a limited and partial perspective on events. Multiplying these perspectives increases the opportunity to make appropriate connections and discover superior arguments and interpretations of the evidence. But it is necessary to engage those multiple perspectives fruitfully. As Odling (1860), Milgrom and Roberts (1986), Koppl and Cowan (2010), and others suggest, we can create such engagement by pitting experts against each other. Koppl and Krane (2016) speak of "leveraging bias." In other words, the ecology of expertise should have both rivalry and synecological redundancy.

Bias can be induced by information and incentives. Blinding can remove or at least reduce induced bias. But synecological bias is not induced. It is not caused by any special or specific cause. It is not a distortion. It is inherent in the social division of knowledge that formed the context and starting point for Berger and Luckmann's (1966) theory of experts. It is inherent in the social division of knowledge without which the problem of experts would not arise in the first place.