Turner (2010) points out:
Charles Perrow [1984] used the term ‘normal accidents’ to characterize a type of catastrophic failure that resulted when complex, tightly coupled production systems encountered a certain kind of anomalous event. These were events in which systems failures interacted with one another in a way that could not be anticipated, and could not be easily understood and corrected. Systems of the production of expert knowledge are increasingly becoming tightly coupled.
Turner builds a theory of expert failure based on this idea of “normal accidents of expertise.” Others have applied Perrow’s theory somewhat more narrowly to problems in forensic science (Cole 2005; Thompson 2008; Koppl and Cowan 2010). James Reason (1990) builds on Perrow.
Following Charles Perrow (1984), Reason (1990) notes that latent errors are more likely to exist and create harm in a complex, tightly coupled system than in a simple, loosely coupled system. Perrow borrowed the vocabulary of “tightly” and “loosely” coupled systems from mechanical engineering. In that context, Perrow explains, “tight coupling is a mechanical term meaning there is no slack or buffer between two items” (1984, pp. 89–90). In the context of social processes, a tightly coupled system is one in which failure in any one component or process may disrupt the function of others, thus generating an overall system failure.
Perrow lists four characteristic features of tightly coupled systems: (1) “Tightly coupled systems have more time-dependent processes: they cannot wait or stand by until attended to”; (2) “The sequences in tightly coupled systems are more invariant”; (3) “In tightly coupled systems, not only are the specific sequences invariant, but the overall design of the process allows only one way to reach the production goal”; (4) “Tightly coupled systems have little slack” (pp. 93–4). Perrow (1984, p. 79) calls a system “complex” when it has many “hidden interactions” whereby “jiggling unit D may well affect not only the next unit, E, but A and H also.” Systems for the production of expert knowledge, “expert systems” as Turner (2010) dubs them, can exhibit these qualities in varying degrees.
Forensic science today is a good example of a complex, tightly coupled “expert system.” For example, the Houston Crime Lab in past years invited cross-contamination of evidence from one type of testing to another. A 2002 audit reports: “The laboratory is not designed to minimize contamination due to the central screening area being used by serology, trace, and arson. Better separation of these disciplines is needed. The audit team was informed that on one occasion the roof leaked such that items of evidence came in contact with the water” (FBI Director, 2002, p. 21).
Forensic science is a complex, tightly coupled system within which multiple individuals, organizations, processes, technologies, and incentive systems mediate conflict between individuals. Such systems are easily subject to the “latent errors” Reason identified. These latent errors typically lie dormant until active errors trigger them.
Errors do not happen in a vacuum. An individual commits an error within a set of social and economic structures, and those structures create incentives that can produce expert failure. This context includes both technological and organizational aspects. Indeed, these two layers in the error-enabling process are not orthogonal. The word “structural” underlines the view that complex, tightly coupled systems can be reengineered to reduce the chance of error. Inappropriate structural features of the context enable active errors. Once these structural flaws are identified, the system can be reengineered to produce a better result.
Complexity is central to Perrow’s notion of “normal accidents.” In Turner’s use of Perrow, the relevant complexity is in the production process of the experts, for example in the complex, tightly coupled system that is a modern crime lab. But complexity of the domain may also lead to expert failure. The phenomena we ask experts about may be complex, uncertain, indeterminate, or ambiguous. Here, too, forensic science provides an example. The evidence in forensic science is often ambiguous. The latent prints left at a crime scene, for example, may be partial or smudged. One print may be deposited on top of another. The material on which the print was deposited may be irregular. And so on. DNA evidence, too, can be ambiguous (Thompson 1995, 2009). We ask economic experts to predict the economy, which is complex, uncertain, and indeterminate.
Consider the stock market. If stock payouts could be predicted (and risk levels determined), then everyone would know the value of every stock, which would equal its price. In this scenario, no one could do better than to simply buy and hold. But then it would be pointless to research stocks, and no one would do it. But if no one researched stocks, their prices would pull away from their underlying values, at which point it would pay to research stocks. Brock and Hommes (1997) show how this sort of logic can lead to unpredictable dynamics. If good predictors of stock behavior cost more than poor predictors, stable equilibria fall apart as cost-bearing sophisticated traders shift to cheap but myopic predictors. The resulting instability makes it worthwhile to bear the cost of the superior predictor, and the system temporarily shifts back toward a stable equilibrium before the aperiodic cycle resumes. It is hard to forecast when the system will switch between stable and unstable dynamics (Brock and Hommes 1997). Arthur (1994) and Arthur et al. (1997) have also shown how interacting agents can generate complex dynamics endogenously.
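A minimal simulation can illustrate the switching logic. The sketch below follows the cobweb version of the Brock–Hommes model, in which traders choose between a costly rational predictor and a free naive one; all parameter values are illustrative assumptions of mine, not the authors’ calibration:

```python
import numpy as np

# Cobweb market with two predictors, after Brock and Hommes (1997).
# Sophisticated traders pay cost C for a rational (perfect-foresight)
# predictor; naive traders use last period's price for free. All
# parameter values are illustrative, not the authors' calibration.
A, B = 10.0, 0.5   # linear demand: D(p) = A - B*p
b = 1.35           # linear supply slope: S(p_e) = b*p_e (b/B > 1: unstable cobweb)
C = 1.0            # cost of the rational predictor
beta = 5.0         # intensity of choice in the logit switching rule

def simulate(T=200, p0=1.0):
    p_prev, n_rat = p0, 0.5   # last price, current share of rational traders
    prices = []
    for _ in range(T):
        # Market clearing: A - B*p = n_rat*b*p + (1 - n_rat)*b*p_prev,
        # since rational traders forecast p_t itself.
        p = (A - (1 - n_rat) * b * p_prev) / (B + n_rat * b)
        # Realized net profit of each predictor: pi = b*p_e*(p - p_e/2) - cost.
        pi = np.array([b * p * (p - p / 2) - C,          # rational
                       b * p_prev * (p - p_prev / 2)])   # naive
        # Logit switching: predictor shares track relative performance.
        w = np.exp(beta * (pi - pi.max()))
        n_rat = w[0] / w.sum()
        prices.append(p)
        p_prev = p
    return np.array(prices)

print(simulate()[-10:])
# Near the steady state the costly predictor underperforms net of its cost,
# traders switch to the free naive rule, and the cobweb destabilizes again.
```

In this toy version, stability undermines the demand for the good predictor and instability restores it, which is the free-riding cycle described above.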
When the object domain is complex, uncertain, indeterminate, or ambiguous, feedback mechanisms may be weak or altogether absent. It can be hard to decide whether prior expert opinions were good or bad, true or false. Until recently, this lack of feedback has been evident in macroeconomic disputes. Different schools of thought would insist that the other schools had failed the test of history. And yet very few macroeconomists switched from one school to another because the newly adopted school of thought had a superior empirical record. In recent years, this situation has changed somewhat as macroeconomists have converged on a common model class (DSGE models), even though differences in theory and policy remain.
As I noted in Chapter 8, this sort of unpredictability does not always discourage demand. When demand persists anyway, expert failure is quite likely indeed: demanders of expert opinion will pay for opinions that cannot be reliable. Government demand for macroeconomic projections, for example, seems unrelated to their quality or reliability. After the 2008 financial crisis, Queen Elizabeth asked economists: “Why did nobody notice it?” (Pierce 2008). The British Academy gave her something of an official answer. “Everyone seemed to be doing their own job properly,” they told the Queen, “and often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction” (Besley and Hennessy 2009). Koppl et al. (2015a) comment, “Rather than questioning the dynamics of the econosphere, this answer questions the organization of economic authorities. If we had had a better organization amongst ourselves, the whole thing could have been prevented” (p. 6).
Governments are not the only ones to demand magical predictions. Fortunetellers continue to ply their trade. Stock pickers and mutual fund managers continue to find work. And a great variety of prophets support themselves by forecasting doom or salvation. If, as I believe, competition turns wizards into teachers, then competition may help to reduce the chance of expert failure even in such markets. Competition among priestly experts tends to make them less dogmatic and more disposed to “candour and moderation.” The emergence of wisdom traditions in Western philosophy might also suggest that competition has this ameliorative effect on expert opinion even in markets with little or no feedback between an expert’s opinion and subsequent events.
Many of the issues discussed in Chapter 10 might be described as issues of “incentive alignment.” When expert incentives are not aligned with the truth, expert failure is more likely. Some cases of misaligned incentives may not fit easily in the categories of this or the previous chapter. For example, through court-assessed fees, some crime labs are being funded in part per conviction (Koppl and Sacks 2013; Triplett 2013). State law in at least fourteen states requires this practice and it has also been adopted in other jurisdictions (Koppl and Sacks 2013). Triplett (2013) says the practice exists in twenty-four states. One must be attentive to the institutional particulars of any given case to maximize the chance of identifying the relevant incentives tending to promote or discourage expert failure. As a general matter, however, “incentive alignment” is a matter of “market structure.” I have argued that “competition” in the market for expert opinion tends to reduce the incidence of expert failure. In this chapter I develop the point by looking more closely at the structure of the market for expert opinion.
Competition among experts is not simply a matter of the number of experts or the client’s ability to select among experts. For example, as we have seen in Chapter 3, licensing restrictions and professional associations produce an enforced homogenization of opinion among experts. For the ecology of expertise to minimize the chance of expert error it must include rivalry, synecological redundancy, and free entry.
There must be rivalry among experts. There is no rivalry unless clients have at least some ability to select among experts. Without this rivalry, there are only weak incentives for one expert to challenge and possibly correct another.
There must also be, of course, multiple experts. But multiplicity is not enough. The experts must differ from one another. Thus, we require not simple redundancy, but synecological redundancy. We saw in Chapter 3 that professional associations tend to reduce synecological redundancy. Indeed, the point of a profession is to create homogeneity across experts. One professional may be more skilled than another, but they all agree on the knowledge codified by the profession. They all represent the same knowledge base. But if that knowledge base is imperfect in any way, as it will be almost by definition, then the homogenizing tendency of professions will encourage expert failure. I develop these points about professions in the section of this chapter entitled “Professions.”
Finally, it is unlikely that the full range of relevant expert opinions will be available to clients if entry is controlled. We require, therefore, free entry as well. As Baumol (1982, p. 2) points out, “potential competition” is more important than the number of incumbent competitors. (Baumol cites Bain 1956, Baumol et al. 1982, and others.) The same logic applies in the market for expert opinion. State support for professional organizations such as the American Medical Association encourages expert failure by creating barriers to entry. To borrow once again from Berger and Luckmann (1966), the outsiders have to be kept out and the insiders have to be kept in.
Professions such as medicine, law, and pharmacy, I have said, may often serve to keep outsiders out and insiders in. They provide an interesting intermediate case between full autonomy and the rule of experts. Such professions may be a source of expert failure. On the one hand, we are often free to choose our doctor, our lawyer, our pharmacist. On the other hand, as we saw in Chapter 3, their professional organizations create an epistemic monopoly that limits competition between them. Licensing restrictions support the monopoly position of the professions with the coercive power of the state. It may be that the power of professions would be inconsiderable without licensing restrictions. In Chapter 3 we saw the lamentations of nineteenth-century “men of science” over the putatively intolerable variety of opinions by expert witnesses claiming the mantle of science. There were no licensing restrictions for “science.” We saw calls for measures to eliminate sources of divergence in the scientific opinions expressed in court. This history supports the conjecture that professions would tend to create neither expert power nor homogeneity of expert opinion without measures of state support such as licensing restrictions.
Kessel (1970) explains how licensing restrictions gave the American Medical Association (AMA) the power to restrict the supply of physicians. The system of medical education existing in 1970 was a consequence of the “Flexner report,” which was published by the Carnegie Foundation in 1910. “This report discredited many medical schools and was instrumental in establishing the AMA as the arbiter of which schools could have their graduates sit for state licensure examinations. Graduation from a class A medical school, with the ratings determined by a subdivision of the AMA, became a prerequisite for licensure” (Kessel 1970, p. 268). The report recommended that all medical training in the United States be conducted on the model of the medical school of The Johns Hopkins University. Rather than attempting to “evaluate the outputs of medical schools,” the report placed the “entire burden of improving standards” on “changes in how doctors should be produced” (pp. 268–9). Moreover, the report’s author, Abraham Flexner, “implicitly ruled out” any model of medical education other than “the one he observed at Johns Hopkins” (p. 269). Kessel says, “The implementation of Flexner’s recommendations made medical schools as alike as peas in a pod” (p. 269).
Implementing the Flexner report reduced the output of physicians. “Organized medicine – again, the AMA – using powers delegated by state governments, reduced the output of doctors by making the graduates of some medical schools ineligible to be examined for licensure and by reducing the output of schools that continued to produce eligible graduates” (Kessel 1970, p. 267). These changes disproportionately reduced the supply of black physicians. “As a result of the AMA’s and Flexner’s endeavors, the number of medical schools declined from 162 in 1906 to sixty-nine in 1944, while the number of Negro medical schools went from seven to two. Moreover, the number of students admitted to the surviving schools decreased” (p. 270). During the Great Depression, “there was a cutback in admissions to medical schools, with Negroes and Jews bearing a disproportionate share of the reduction. Probably females also bore a disproportionate share of the reduction in admissions” (p. 271). As Kessel notes with excess delicacy, the Flexner report contains “patronizing” comments on race (1970, p. 270).
Kessel describes a professionalization of medicine, supported by state licensing restrictions, that reduced the supply of physicians and increased the homogeneity of the knowledge base of legally sanctioned medical practice, to the disadvantage of the public in general and women and oppressed minorities in particular.
Before returning to the theme of professions in the theory of experts, it may be worth spending some time on the racism of the Flexner report. It includes the following passage on “the medical education of the negro”:
The medical care of the negro race will never be wholly left to negro physicians. Nevertheless, if the negro can be brought to feel a sharp responsibility for the physical integrity of his people, the outlook for their mental and moral improvement will be distinctly brightened. The practice of the negro doctor will be limited to his own race, which in its turn will be cared for better by good negro physicians than by poor white ones. But the physical well-being of the negro is not only of moment to the negro himself. Ten million of them live in close contact with sixty million whites. Not only does the negro himself suffer from hookworm and tuberculosis; he communicates them to his white neighbors, precisely as the ignorant and unfortunate white contaminates him. Self-protection not less than humanity offers weighty counsel in this matter; self-interest seconds philanthropy. The negro must be educated not only for his sake, but for ours. He is, as far as human eye can see, a permanent factor in the nation. He has his rights and due and value as an individual; but he has, besides, the tremendous importance that belongs to a potential source of infection and contagion.
The pioneer work in educating the race to know and to practise fundamental hygienic principles must be done largely by the negro doctor and the negro nurse. It is important that they both be sensibly and effectively trained at the level at which their services are now important. The negro is perhaps more easily “taken in” than the white; and as his means of extricating himself from a blunder are limited, it is all the more cruel to abuse his ignorance through any sort of pretense. A well-taught negro sanitarian will be immensely useful; an essentially untrained negro wearing an M.D. degree is dangerous.
Flexner claims an interest in “the physical well-being of the negro,” but the supposed problem of “hygiene” and “self-protection” for whites seems to be more important to him. We read, “self-interest seconds philanthropy.” We are told, “The negro must be educated not only for his sake, but for ours.” Thus, the implied readership is “white,” utterly excluding black people. The report tells its white readership to fear black people as “a potential source of infection and contagion.” When Flexner says that “The negro is perhaps more easily ‘taken in’ than the white” and “his means of extricating himself from a blunder are limited,” we recognize the gross racial trope of intellectual inferiority for black people. Unfortunately, no one should be surprised to see a white racist describing black people as naturally stupid and irrational. But notice also the implication. It would be “cruel” to allow such simple-minded folk to receive medical services from black doctors other than those few “we” have trained in “fundamental hygienic principles.” Thus, the forcible suppression of black medical schools is dressed up as a philanthropic act bestowed on them by their moral and intellectual betters. Flexner takes it for granted that white people have permanent dominion over black people. Whites should exercise that dominion with compassion, but without neglecting “self-protection.”
Flexner’s compassion for “the negro” seems to have been constrained by his notion of “self-protection.” Let us momentarily set that fact aside, however, to take up the hypothesis that Flexner’s compassion for “the negro” was sincere and abundant. If so, it was unaccompanied by any principle of equality. Because blacks are inferior, whites have a duty born of compassion to make important choices for them. Thus, compassion unaccompanied by respect for the equal dignity and autonomy of others becomes an instrument of oppression. It is tempting to conclude that the principle of equality is more important for human welfare than that of compassion, but we are probably unable to correctly imagine a world in which we are all perfectly equal, but devoid of compassion. In any event, all attempts at imagining such a world would be empty speculation. I think we can conclude, however, that compassion without equality may lead to oppression. As Adam Smith and Bernard Mandeville both clearly recognized, compassion is a virtue. It is a good thing. But it may turn into a bad thing if the compassionate think themselves superior beings with a right or duty to choose for others. Leonard (2016, p. xii) notes the “unstable amalgam of compassion and contempt” of Progressive Era reformers. I doubt, however, that this evil amalgam is unstable.
Flexner’s racism and the AMA’s success in shutting down black medical schools illustrate the claim that licensing restrictions and other barriers to entering a profession are likely to be disproportionately harmful to the least privileged and most oppressed members of society. Monopoly power tends to increase the scope for bigotry to operate.
According to OpenSecrets.org, the AMA spent well over $300 million on lobbying over the period 1998–2016, a sum exceeded by only two other organizations (Open 2016). Lobbying expenditures reflect both the positive efforts of special interests to acquire special benefits and defensive efforts against predation by state actors such as Congress. Nevertheless, the large sums spent by the AMA may suggest that it continues to act in ways that serve the interests of physicians more effectively than those of the general public. This view is supported by the analyses in Svorny (2004) and Svorny and Herrick (2011).
Svorny (2004) excoriates licensing restrictions in medicine. She says: “[M]any economists view licensing as a significant barrier to effective, cost efficient health care. State licensing arrangements have limited innovations in physician education and practice patterns of health professionals” (p. 299). Svorny and Herrick (2011) note that these licensing restrictions hurt the poor most of all. A profession serves to enforce an official view, which the professionals tend to accept and perpetuate. We have seen Berger and Luckmann (1966) discuss how monopoly experts may use “incomprehensible language,” “intimidation, rational and irrational propaganda … mystification,” and “manipulation of prestige symbols” to ensure that outsiders are “kept out” and insiders “kept in” (p. 87). Professions help to provide such “propaganda” and “mystification.”
In the United States, “federal, state, and local governments today impose an array of limits” on professional speech (Sandefur 2015–16, p. 48). The American legal system “regulates communications between licensed professionals and their clients” (Zick 2015, p. 1291). Presumably, these regulations serve a variety of purposes and arose from a variety of causes. Nevertheless, one important function seems to be creating uniformity of opinion within the profession. This issue deserves more attention than it has yet received in the American law literature. Haupt (2016) defends such restrictions on the grounds that the professional represents the profession and its supposed “body of knowledge” when communicating with a client. And, for Haupt, it is the profession, not the professional, that produces truth. This approach requires not only a non-SELECT, top-down view of knowledge, but also a naïve view of professional interests, whereby professions simply “produce truth.” But we have seen the effects of professions in Chapter 3. To establish reasonable doubt for the defendant in a criminal case may require deviations from the professional “consensus,” whose very existence is often a product in part of state coercion. Consider the example of the chemical reaction from Chapter 3, or the Flexner report, which imposed the spurious “knowledge” that blacks were inferior as well as the probably spurious “knowledge” that there is but one true way to do medical education.
We saw in Chapter 8 that expert errors may be correlated and that, consequently, multiplicity of expert opinion can produce a false confidence in it. State-supported professional organizations tend to produce homogeneity of opinion and, therefore, increase the correlation of errors across experts in a given profession. This reduction in synecological redundancy tends to encourage expert error. Something like this seems to have happened in the rape and murder trial of Keith Harward, who was falsely convicted in 1982 and exonerated in 2015. (He thus spent thirty-three years in prison for a rape and murder he could not have committed.) Harward has explained to me that bite-mark evidence was central to his case. Six odontologists all supported the now demonstrably false claim that he had bitten the victim. It is Harward’s conjecture that this perverse uniformity of opinion reflected the professional commitments and affiliations of these experts. This conjecture seems plausible, although the same correlation of errors might also be explained by the likely fact that each of them had access to the police’s case file before giving an opinion.
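A small simulation shows how much the correlation matters. In the hypothetical sketch below, six experts each err with the same 10 percent marginal probability; in one regime their errors are independent, while in the other a shared biasing factor (think of a common case file) drives them together. All numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_all_err(n, p_common, p_err_common, p_err_clean, trials=500_000):
    """Probability that all n experts err in the same trial."""
    # With probability p_common a shared biasing factor is present,
    # raising each expert's conditional error rate to p_err_common.
    common = rng.random(trials) < p_common
    p_each = np.where(common, p_err_common, p_err_clean)
    errors = rng.random((n, trials)) < p_each
    return errors.all(axis=0).mean()

# Independent errors: unanimous error is vanishingly rare (0.1**6 = 1e-6).
print("independent:", p_all_err(6, 0.0, 0.0, 0.10))
# Correlated errors with the SAME 10% marginal rate per expert
# (0.10*0.90 + 0.90*0.011 ~ 0.10): unanimous error now occurs ~5% of the time.
print("correlated: ", p_all_err(6, 0.10, 0.90, 0.011))
```

Six concurring experts look like overwhelming evidence, yet with a common source of bias their unanimity carries little additional information.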
When professional associations are linked to licensing restrictions, as with the AMA, the speech of professionals in their communications with clients may be regulated. Zick (2015) examines three cases: “restrictions on physician inquiries regarding firearms, ‘reparative’ therapy bans, and compelled abortion disclosures.” He defines “rights speech” as “communications about or concerning the recognition, scope, or exercise of constitutional rights” (p. 1290) and laments the existence of legal restrictions on the “rights speech” of professionals with their clients:
Regulations of professional rights speech are, and ought generally to be treated as, regulations of political expression based on content. As such, they raise important free speech concerns and merit strict judicial scrutiny. The fact that the speakers are licensed professionals, and their audiences are clients or patients, does not eliminate the need to guard against state suppression or compulsion of speech—particularly, although not exclusively, when the speech concerns or relates to constitutional rights.
Haupt (2016) argues that restrictions on the speech of professionals when interacting with clients can be made to survive First Amendment challenges. Her argument relies on the idea that the professional is representing a supposed “body of knowledge” to the client. This defense of speech restrictions assumes that the professional “body of knowledge” should be homogeneous across professionals, which is consistent with my claim that professional associations often serve the function of fostering uniformity in expert opinion.
Professionals such as doctors and lawyers may represent sources of oppressive power to many of their clients. Some clients may be able to judge when a professional can be trusted and when not. In Chapter 2 we saw Goldman (2001) and others discuss strategies for nonexperts to judge experts. The simple expedient of getting a second opinion is one obvious strategy, but not everyone can afford it, and the value of a second opinion may often be reduced by the professional attachments of the expert. The limited value of second opinions is illustrated by Brandon Mayfield’s fingerprint expert, who supported the state’s identification of Mayfield with the crime-scene evidence.
Formal education often helps in the acquisition of good judgment regarding experts. Wealthy people are more likely to have friends and family in professions such as law and medicine. Such connections may often reduce the risks for such persons of suffering bad consequences of expert failure. Only so many people are rich and well educated, however. And even for such persons it can sometimes be hard to judge the quality of expert advice. The anger and resentment that nonexperts sometimes feel toward experts has good foundation in many cases.
I have emphasized market structure, contrasting competition with monopoly. Markets are also subject to overt state regulation, which shapes the market structure, the ecology of expertise. Just as in other markets, such “regulation” tends to produce results contrary to the stated goals of its advocates.
Many would-be reformers of forensic science seek some form of regulation. In particular, the National Academy of Sciences (NAS 2009) has called for the creation of a national regulatory body to be called the National Institute of Forensic Science (NIFS). It seems fair to say that the great majority of reformers in forensic science favor such “oversight” and “regulation.”
Rehg (2013) seems to favor the regulation of “dissenting experts” with minority opinions on scientific questions such as global warming. He castigates the “lax attitude toward dissent” of some authors. He notes approvingly the epistemic value of dissenting opinions. “But a dissenter who engages in political advocacy is, like any citizen, morally responsible for his or her political judgments and advocacy” (p. 101). It seems only reasonable to note such a moral responsibility. But Rehg goes on to say, “Experts should be held responsible, at the very least, for labeling their opinions with something like officially approved safety labels or warnings, which signal their (un)reliability. Officially appointed panels like the Intergovernmental Panel on Climate Change (IPCC) already do this, and I do not see why dissenters should get a free pass” (p. 102). While this statement is ambiguous, it seems to call for state regulation of expert dissent.
Proposals for the “regulation” of experts run into a problem of infinite regress that we might call the “turtles problem.” We need to regulate experts. We need other experts for the job. Call these experts “meta-experts.” The same logic that tells us to regulate the experts tells us to regulate the meta-experts too. And the meta-meta-experts. And the meta-meta-meta-experts. And so on. Quis custodiet ipsos custodes? This problem of infinite regress may remind the reader of the story of an old woman at a public science lecture. She upbraids the speaker for foolishly claiming the earth is a ball spinning in empty space. The earth, she insists, is a flat plate resting on the back of a turtle. He asks, “On what does the turtle rest?” Another turtle. And on what does the second turtle rest? “It’s no use, young man,” she replies triumphantly, “it’s turtles all the way down.” For some advocates of regulation, it’s experts all the way down.
Regulation has a turtles problem. It also creates the risk of regulatory capture. Regulatory and oversight bodies are supposed to constrain special interests and protect the general interest. When they instead serve special interests, they have been “captured.” An industry must offer something in return if it is to capture a regulator. The reciprocation may consist in campaign contributions to members of Congress providing oversight of the regulatory body. It may take any of an indefinitely large number of other forms. Capture is the norm, unfortunately, which makes beneficial change hard. The first great regulatory body in the United States was the Interstate Commerce Commission (ICC), which was established in 1887 to control railroads. The Interstate Commerce Act prohibited price discrimination and required that “all charges … shall be reasonable and just.” This language seems to constrain the railroads, and yet the railroads supported the act. Posner (1974, p. 337) explains: “The railroads supported the enactment of the first Interstate Commerce Act, which was designed to prevent railroads from practicing price discrimination because discrimination was undermining the railroads’ cartels.”
The interest that captures a regulator may not be the regulated industry. “Crudely put, the butter producers wish to suppress margarine and encourage the production of bread” (Stigler 1971, p. 6). For example, the railroads sometimes used state regulators to suppress trucking. In the 1930s, “Texas and Louisiana placed a 7,000-pound payload limit on trucks serving (and hence competing with) two or more railroad stations, and a 14,000-pound limit on trucks serving only one station (hence not competing with it)” (Stigler 1971, p. 8).
The theory of supply and demand predicts that a commodity sold on a competitive market will end up in the hands of those who value it most, as measured by willingness to pay. The theory does not tell us, however, who is willing to pay the most. Similarly, the theory of regulatory capture does not tell us who will win in the contest of interests to capture a regulator. It is a continuous fight; victory may be partial and fleeting. Nevertheless, we can say that concentrated interests aid victory. Well-organized groups with relatively large and homogeneous interests have an advantage in the contest.
Calls for the regulation of experts should be tempered by the risk of regulatory capture. Consider, for example, the NAS (2009) proposal to create NIFS. A coalition of law enforcement agencies may be in the best position to capture a federal regulator of forensic science. According to Bureau of Labor Statistics data, in 2012 the number of employees in law enforcement exceeded 1.3 million (BLS 2015). These people are part of a relatively large, concentrated, well-organized, and homogeneous interest group.
An episode from 2013 suggests that some such coalition exists and is capable of acting cooperatively. On August 30, 2013, a consortium of law enforcement groups wrote a letter to the U.S. Attorney General strongly condemning his new policy of respecting the liberal state laws on marijuana in Colorado and Washington (Stanek et al. 2013). Commenting on this episode, one journalist has opined, “[P]olice organizations have become increasingly powerful political actors” (Grim 2013). Is there any other interest group, such as the innocence movement, in a good position to compete with law enforcement? And, if so, for how long? Cole (2010) recognizes the risk of regulatory capture in forensic science. He says of the proposed NIFS, “If it is ‘captured’ by law enforcement, it becomes less obvious that it would be a force for improvement rather than stagnation” (p. 436).
Law enforcement has a distinct advantage in the struggle to capture a forensic science regulator. But victory is not guaranteed, as illustrated by an episode involving the National Commission on Forensic Science, which was created in 2013 jointly by the Department of Justice and the Commerce Department’s National Institute of Standards and Technology (NIST). One of the duties of the Commission as stated in its charter is “[t]o develop proposed guidance concerning the intersection of forensic science and the courtroom.” When, however, the Commission’s Subcommittee on Reporting and Testimony came forward with a proposal (authored by law professor Paul Giannelli) for increased disclosure of criminal forensics, the Commission’s cochair from the Department of Justice determined that the proposal was beyond the scope of the Commission’s charter. Subcommittee cochair Judge Jed Rakoff resigned in protest, citing the Commission’s charter as clearly creating scope for just such a proposal. The very next day, January 30, 2015, the Commission’s cochair reversed the decision and invited Judge Rakoff back onto the Commission. It was the Commission’s cochair, and not the Department of Justice as such, that interpreted the charter to exclude the Subcommittee’s proposal, and the decision was quickly reversed. The episode nevertheless illustrates the sort of thing that may happen in the struggle to capture regulators. The Commission was not renewed at the expiration of its charter in April 2017.
Monopsony, the existence of only one buyer in a market, is a source of expert failure. It makes even nominally competing experts dependent on the monopsonist and correspondingly unwilling to give opinions that might be contrary to the monopsonist’s interests or wishes. The police and prosecution are often the only significant demanders of forensic science services in the United States.
There is also a kind of narrow monopsony that may sometimes encourage expert failure. An expert is hired to give an opinion to a client, and that client is the only one demanding an opinion for that client. This exclusivity is a narrow monopsony. The expert may have other customers, but none of them is paying the expert to give this particular client an opinion. If it is sufficiently easy for third parties to observe the advice given to the client, or sufficiently probable that this advice will be revealed, then the expert has an incentive to give opinions that will seem reasonable to other prospective clients. If not, the expert has an incentive to offer pleasing opinions to the client even if that implies saying something unreasonable or absurd. Toadies and yes men respond to such incentives. Michael Nifong, the district attorney in the Duke rape case, induced the private DNA lab he hired to withhold exculpatory evidence (Zucchino 2006). The lab was private and thus nominally “competitive.” Presumably, it could have declined this particular request without much harm to its bottom line. But the lab chose to go along with Nifong’s desire to hide evidence. The District Attorney’s narrow monopsony created an incentive to do so. Only Nifong had effective control of the DNA evidence in this case.
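The incentive at work can be stated compactly. In stylized notation of my own, introduced only for illustration, let \(b\) be the expert’s gain from pleasing the current client with an unreasonable opinion, \(q\) the probability that third parties observe the opinion, and \(r\) the reputational loss among prospective clients if they do. The expert panders whenever

\[
b > q\,r.
\]

Under a narrow monopsony with unobservable advice, \(q \approx 0\), so even a trivial gain outweighs any reputational stake; visibility to prospective clients (a larger \(q\)) is what disciplines the expert.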
If the client seeks multiple opinions and there is synecological redundancy on the supply side, then different suppliers may give different expert opinions. In this case, each expert has an incentive to anticipate the opinions of other experts and explain to the client why his or her opinion is best. Doing so increases the chance that the client will review the expert favorably and, perhaps, return to the same expert in the future. Again, competition tends to make experts less like mysterious wizards and more like helpful teachers.
Butos and McQuade (2015) argue that the United Nations Intergovernmental Panel on Climate Change (IPCC) is a “Big Player” in research on climate change. Yeager and I define a Big Player as “anyone who habitually exercises discretionary power to influence the market while himself remaining wholly or largely immune from the discipline of profit and loss” (Koppl and Yeager 1996, p. 368). Koppl (2002) develops the theory of Big Players in some detail. According to Butos and McQuade, the IPCC has a disproportionate, if indirect, influence on the funding of climate change research. It has “become a dominant voice in the climate science community, and its summary pronouncements on the state of the science carry significant weight among scientists” (p. 189). Prudently, Butos and McQuade (2015) do not pretend to have an opinion on whether human activity is a significant contributor to adverse climate change. They argue, instead, that “a confluence of scientific uncertainty, political opportunism, and ideological predisposition in an area of scientific study of phenomena of great practical interest has fomented an artificial boom in that scientific discipline” (p. 168). They describe how “the herding induced by the IPCC in the scientific arena interacts with the government-funding activities in mutually reinforcing ways” (p. 167). The Big Player influence they chronicle would seem to increase the risk of expert failure. Whether fears of “global warming” are generally too high, too low, or just right, Butos and McQuade point to an important general truth: Big Players in science and other areas of expertise increase the risk of expert failure.
I have extolled the benefits of competition in the market for expert opinion, at least when the “competitive” market has rivalry, synecological redundancy, and free entry. But even with competition, expert failure is possible. In the market for expert opinion, “total truth production” (Goldman and Cox 1996) can be meager. The market for expert opinion is part of the “market for ideas.” Thus, my theory leads to pessimism about the market for ideas.
Liberals in the tradition of Mandeville, Smith, Hume, and Hayek should not think it somehow a problem or disappointment if the market for ideas does a poor job of inducing true beliefs in its participants. The synecological and often tacit knowledge of how to do things is brought into conformity with the purposes to which it is put through a process of evolutionary shaping. These practices tend toward a kind of rough and ready conformity with our needs, even though evolution does not produce optimality. But any tendency toward true beliefs or correct statements as opposed to useful practices is likely to be weaker. As we have seen with Ioannidis (2005), Tullock (1966), and others, false beliefs may persist even in science. The relatively poor quality of our propositional knowledge is consistent with the generally skeptical attitude of liberalism.
The doctrine of “consumers’ sovereignty,” as W. H. Hutt (1936) dubbed it, holds that consumer decisions to buy or not to buy determine the production of goods and services. “Competitive institutions are the servants of human wants” (Hutt 1936, p. 175). This doctrine of consumers’ sovereignty applies no less forcefully in the market for ideas than in the market for men’s shoes. Truth is only one of many things demanders want from ideas. And sometimes truth has nothing to do with it. Often, demanders in the market for ideas want magical thinking. By “magical thinking” I mean an argument with one or more steps that require something impossible. Unfortunately, experts often have an incentive to engage in magical thinking. Under competitive conditions in the market for ideas, the demand for magical thinking meets a willing supply. As Alex Salter (2017, p. 1) has said, in the market for ideas, competition occurs “on margins unrelated to truth.” Goldman and Cox (1996, p. 18) say, “if consumers have no very strong preference for truth as compared with other goods or dimensions of goods, then there is no reason to expect that the bundle of intellectual goods provided and ‘traded’ in a competitive market will have maximum truth content.”
Coase (1974) takes a similarly skeptical view of the market for ideas. And yet Goldman and Cox (1996, p. 11) say that “Certain economists, including … Ronald Coase, simply assume the virtues of the free market for ideas (or assume that others grant these virtues) and proceed to defend the free market for goods as being entirely parallel with the market for ideas.” But in the cited article, Coase (1974) twice says that there seems to be “a good deal of ‘market failure’” in the market for ideas. Coase’s claim was not that the market for ideas is somehow efficient or otherwise wonderful. His point concerned asymmetry: It seems inconsistent to support government action to correct “market failure” in the market for commodities while repudiating such action in the market for ideas. Coase is careful to point out that this asymmetry does not depend on the spurious assumption that the market for ideas is structurally identical to commodity markets. “The special characteristics of each market lead to the same factors having different weights, and the appropriate social arrangements will vary accordingly.” But, “we should use the same approach for all markets when deciding on public policy.” And yet, Coase laments, many scholars and intellectuals assume that state intervention is generally skillful and beneficial in the one domain, clumsy and destructive in the other. “We have to decide whether the government is as incompetent as is generally assumed in the market for ideas, in which case we would want to decrease government intervention in the market for goods, or whether it is as efficient as it is generally assumed to be in the market for goods, in which case we would want to increase government regulation in the market for ideas” (Coase 1974, p. 390).
If imperfections in the market for ideas are not always best handled by state regulation, then perhaps imperfections in commodity markets are not always best handled by state regulation. The plausible inference runs the other way too. If the perils of regulation should stay our hand from state intervention in commodity markets, then perhaps the considerable infirmities of the market for ideas are no more compelling an inducement to state intervention. The liberal defense of free speech is not based on any claim that the market for ideas somehow eliminates error or erases human folly. It is based on a comparative institutional analysis in which most state interventions make a bad situation worse. Free speech is the worst possible rule, except for all the others.
The economic theory of experts studies how market structure determines the risk of expert failure. The broad generalization that competition tends to produce better results than monopoly holds in the market for expert opinion as in other markets. It might then seem a simple matter to turn from “positive” to “normative” analysis. My prescription, it would seem, must be “let there be competition!” It is not so easy, however, to “let there be competition.”
First, as I have tried to emphasize, details matter. I have spoken of the “ecology of expertise” partly in hopes of getting past the potentially empty categories of “competition” and “monopoly” to focus on the details of market structure. The term “competition” is vague and may include institutional structures that promote, or do little to prevent, expert failure. A good ecology of expertise will generally have rivalry, synecological redundancy, and free entry. Demanding “competition” does not by itself ensure that these features are present. Second, design is difficult. Devins et al. (2015) argue that constitutional design is impossible. Designing individual markets is less ambitious than designing constitutions, but it is fraught with difficulty nevertheless, as Smith (2009) notes. The methods of experimental economics can help us to overcome at least some of the difficulties of market design. Koppl et al. (2008) is an example.
Koppl et al. (2008) tested the consequences of using multiple experts in the context of forensic science and provide experimental evidence that this ameliorative measure has the potential to improve system performance. They had “senders” report to “receivers” on evidence simplified to one of three shapes (circle, triangle, square). The senders are analogous to forensic scientists, and the receivers are analogous to triers of fact (judges or jurors). Bias was induced in some senders by giving them an incentive to issue a false report. The receivers were asked to conclude what shape (circle, triangle, square) the sender(s) had actually been shown. In some cases receivers got one report, in others multiple reports. Because the “errors” in reports were independent (senders did not know the content of reports from other senders or what incentives other senders had received), the receivers made fewer errors when they received multiple reports. The results of Koppl et al. suggest that competitive epistemics (as we might call such redundancy) will improve system performance if the number of senders is three or higher, but may not improve performance if the number of senders is two. Moreover, further increasing the number of senders beyond three does not seem to improve system performance (Koppl et al. 2008, p. 153).
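A stylized re-creation of the aggregation logic (not the actual protocol or payoffs of Koppl et al. 2008) shows why a second report may add little while a third helps. Assume each sender is independently biased with probability 0.3 and, if biased, reports a fixed wrong shape; the receiver takes a plurality vote with random tie-breaking. All numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def receiver_error_rate(n_senders, p_biased=0.3, trials=50_000):
    """Error rate of a plurality-vote receiver over simulated trials."""
    errors = 0
    for _ in range(trials):
        truth = rng.integers(3)                     # circle, triangle, square
        wrong = (truth + 1) % 3                     # what a biased sender reports
        biased = rng.random(n_senders) < p_biased   # independent across senders
        reports = np.where(biased, wrong, truth)
        votes = np.bincount(reports, minlength=3)
        winners = np.flatnonzero(votes == votes.max())
        errors += rng.choice(winners) != truth      # random tie-break
    return errors / trials

for n in (1, 2, 3, 5):
    print(f"{n} sender(s): receiver error rate ~ {receiver_error_rate(n):.3f}")
# ~0.30 with one sender, ~0.30 with two (ties wash out the second report),
# ~0.22 with three.
```

In this toy model the error rate keeps falling slowly beyond three senders; the experimental finding that gains flatten after three presumably reflects behavioral effects the toy model omits.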
Interestingly, in one set of experiments, the use of multiple senders seems to have degraded the average performance of senders while improving the performance of the system. Robertson (2010, pp. 214–19) makes a similar proposal for “adversarial blind” expertise in civil cases.
The use of random, independent, multiple examinations is a form of blinding. Examiners will not know whether other labs have examined the same evidence and, if so, what the results of such examinations were. They are blinded from this information. Such blinding gives the examiner an increased incentive to avoid scientifically inappropriate inferences so as to minimize what Koppl et al. (2015c) have called “reversal risk,” which they define as “the risk that a decision will later be determined to have been mistaken.”
The experimental results of Koppl et al. (2008) may be surprising, but they illustrate a general principle: to improve system reliability, it is not necessary to improve the reliability of the individual units (senders) within the system. A chain is only as strong as its weakest link; a net is stronger than its individual knots, and a network is stronger than its individual nodes. A net may be stronger than a chain even though the average knot in the net may be weaker than the average link in the chain.
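The arithmetic behind the net-versus-chain point is simple. Suppose, purely for illustration, that each unit is correct independently with probability \(p\). A chain of \(n\) links works only if every link holds, while a majority vote over three units is correct whenever at least two of them are:

\[
R_{\text{chain}} = p^{n}, \qquad R_{\text{maj3}} = p^{3} + 3p^{2}(1-p).
\]

With \(p = 0.8\), \(R_{\text{maj3}} = 0.512 + 0.384 = 0.896 > 0.8\). And even if redundancy degrades each unit to \(p = 0.75\), \(R_{\text{maj3}} = 0.421875 + 0.421875 \approx 0.844\), still better than the original single unit: weaker average units, stronger system.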
Experiments such as Koppl et al. (2008) form part of the field of “epistemic systems design,” which applies the techniques of economic systems design (Smith 2003) to issues of veracity rather than efficiency. The change in normative criterion from efficiency to veracity creates epistemic systems design: the aim is to discover institutional changes that improve not the efficiency, but the veracity of expert markets. Economic systems design uses “the lab as a test bed to examine the performance of proposed new institutions, and modifies their rules and implementation features in the light of the test results” (Smith 2003, p. 474). It has produced a major change in how researchers design economic institutions. Epistemic systems design may have a similar potential to change how researchers design institutions. Examples include Koppl et al. (2008), Cowen and Koppl (2011), and Robertson (2011). Many past studies are precursors in some degree. Blinding studies such as Dror and Charlton (2006) and Dror et al. (2006) are examples to at least some degree, as are many studies in social psychology such as Asch (1951).
Epistemic systems design is possible because we construct the truth in an experimental economics laboratory. We are in the godlike position of saying unambiguously what the truth is and how close to it our experimental subjects come. (On our godlike position, compare Schutz 1943, pp. 144–5.) We construct the truth, the preferences, and the institutional environment of choice. We construct, in other words, the world in which we place our subjects. From this godlike perspective we are in a position to compare the epistemic properties of different institutional arrangements. When we return from our constructed world to the real world, we lose our privileged access to the truth and return to the normal uncertainty common to all. But we carry with us a knowledge of which institutional structures promote the discovery and elimination of error and which institutional structures promote error and ignorance. This knowledge can be carried from the constructed world of the laboratory to the natural world of social life because of the common element in both worlds, namely, the human mind. The one vital element of the experimental world that is not constructed is the human mind, which makes choices within the institutional context of the laboratory experiment. It is this same element that makes choices in the institutional structures of the natural world of social life. Thus, laboratory experiments of the sort described in Koppl et al. (2008) cannot tell us which particular expert judgments are correct and which incorrect, but they can tell us that the monopoly structure of forensics today produces a needlessly high error rate.
When applied to pure science, the techniques of epistemic systems design give us an experimental approach to science studies. In the past, disputes in this field could be addressed empirically only through historical research and field studies. It now seems possible to address a significant fraction of them with the tools of epistemic systems design, including the role of the network structure of pure science in producing reliable knowledge.
Epistemic systems design might help us to understand which social institutions produce truth and which do not. The related strategy for the discovery of truth and the elimination of error is indirect. Rather than attempting to instruct people in how to form true opinions, we might reform our social institutions in ways that tend to induce people to find and speak the truth. Comparing the epistemic properties of alternative social institutions is “comparative institutional epistemics.” At the margin it may be more effective to give people an interest in discovering the truth than to invoke the value of honesty or teach people the fallacies they should avoid. When we rely on experts to tell us the truth, it seems especially likely that institutional reforms will have a higher marginal value than exhortations to be good or rational. If virtue and rationality are scarce goods, we should craft our epistemic institutions to economize on them.