Several themes and issues recur in past discussions of experts. While these themes overlap, it may be helpful to briefly consider them under separate headings.
Thinkers who view experts as unreliable will generally fear expert power. Only those who view experts as reliable are likely to endorse increasing expert power. The more “powerless” nonexperts are in the sense of Table 2.1, the greater is the threat posed by expert power.
As we have seen in Chapter 2, Foucault (1980, 1982) sees “disciplines” as sources of power. When some people can impose knowledge on others, we have a power relationship in which those imposed upon are oppressed at least in some degree. Turner (2001, pp. 123–9) says that Michel Foucault and others in the tradition of “cultural studies” tend to the view that expert actions and categories “constrain” consumers “into thinking in racist, sexist, and ‘classist’ ways” (p. 126). This remark fits many scholars who cite Foucault favorably. Foucault himself, however, was subtler than Turner’s remark suggests. It nevertheless seems fair to say that Foucault tended to work in grand categories that are disconnected from both the individual meaning structures described carefully by Berger and Luckmann and the social processes that give rise to them.
Foucault’s excess reliance on grand categories reflects his conception of his “problem” as “a history of rationality.” He has said, “I am not a social scientist.” He was examining not “the way a certain type of institution works,” but “the rationalization of the management of the individual” (Dillon and Foucault 1980, p. 4). He thus emphasized the imposition of unitary knowledge schemes on populations such as prisoners and students.
The issue of expert power is important to figures generally considered “left,” such as Foucault (1980, 1982) and Habermas (1985). It is also of concern, however, to liberal thinkers, who are sometimes dubiously considered “right.” Easterly (2013) is a good representative of this strain of concern over expert power.
Easterly (2013) repudiates the “technocratic illusion,” which he defines as “the belief that poverty is a purely technical problem amenable to such technical solutions as fertilizers, antibiotics, or nutritional supplements” (p. 6). Easterly says, “The economists who advocate the technocratic approach have a terrible naïveté about power – that as restraints on power are loosened or even removed, that same power will remain benevolent of its own accord” (p. 6). Poverty is about rights, not fertilizer. “The technocratic illusion is that poverty results from a shortage of expertise, whereas poverty is really about a shortage of rights” (p. 7). Easterly supports human rights in part because “the rights of the poor … are moral ends in themselves” (p. 6). He also identifies a basic mechanism making “free development” better at improving people’s lives. People with rights can choose whom to associate with, contract with, and vote for to help solve their problems. Cooperative social connections, demand, or votes will grow for the more helpful problem solvers, who will attract imitators. Good solutions spread and the people grow richer. Free development leverages epistemic diversity to find solutions to human problems. The “tyranny of experts,” by contrast, has very little epistemic diversity and limits feedback from the people to the expert. Expert schemes imposed on the people do not allow for the heterogeneity that emerges naturally from free development; they are usually one size fits all. Nor do they entail the ceaseless local searching and experimentation of free development. Top-down planning cannot equal “the vast search and matching process” (p. 249) of free development. Thus, Easterly views experts as fundamentally unreliable. Democracy and economic liberalism are valuable in part because they tend to empower nonexperts.
Easterly links his liberal argument on expert power to the Hayekian “knowledge problem,” which I discuss in Chapters 6 and 7:
Another way to state the knowledge problem is that success is often a surprise. It is often hard to predict what will be the solution. It is even harder to predict who will have the solution, and when and where. And it is even harder when the success of who, when, and where keeps changing. This is just restating Hayek’s insight about the knowledge problem with conscious design.
Like Berger and Luckmann (1966), Easterly links the problem of experts to the division of knowledge in society. Coyne (2008) makes broadly similar criticisms of expert power in connection with military interventions.
Central to the issue of power is the question of who chooses. Do experts choose for nonexperts or merely offer advice and opinion? State-sponsored eugenics experts may have the power to decide for others whether they should be sterilized. Such cases are not, unfortunately, “ancient history” upon which we may look back with a shudder and a sense of superiority.
Ellis (2008) says that “genetic factors contribute to criminality. Therefore, curtailing the reproduction rates of persons with ‘crime-prone genes’ relative to persons with few such genes should reduce a country’s crime rates” (p. 259). He explicitly labels this strategy a “eugenic approach” to crime fighting (p. 258). Noting that the use of antiandrogen drugs “is also called chemical castration,” he says: “administering anti-androgens to young postpubertal males at high risk of offending, especially regarding violent offenses, should help to suppress the dramatic surge in testosterone in the years immediately following puberty. Males with the greatest difficulty learning may need to be maintained on anti-androgen treatment for as much as a decade” (p. 255). Ellis imagines such policies would be administered with scientific neutrality and precision. He forgets that the administrators would occupy positions of privilege in a dominance hierarchy and act, therefore, in the unfortunate ways predicted by evolutionary psychology.
In the United States, formally recognized and legally sanctioned coercive sterilizations were performed well into the 1970s (Stern 2005; Shreffler et al. 2015). More recently, the Center for Investigative Reporting has found that “Doctors under contract with the California Department of Corrections and Rehabilitation sterilized nearly 150 female inmates from 2006 to 2010 without required state approvals” (Johnson 2013). David Galton (1998), though distancing himself from genocide and most of the coercive eugenics of the twentieth century, says that “a state body should intervene” if a woman pregnant with a “trisomy 21” child is “clearly unable to provide economically for the long term care of her handicapped child” (p. 267). Present-day eugenicists such as David Galton (1998) and Lee Ellis (2008) would empower eugenic experts to make reproductive decisions for others.
Eugenic policies are a subset of population policies. Population policy may range from seemingly benign measures such as state-sponsored child care to forced sterilization to genocide. Eugenic policies properly speaking depend on the assumption that disfavored human qualities are heritable. The broader category of “population policy” does not. Nevertheless, when the size or composition of the population is a policy objective, the people are being viewed as livestock meant to serve an end separate from the population’s component individuals, whose autonomy and dignity should be respected. This perspective of human husbandry may make coercive measures more desirable for some political actors.
De la Croix and Gosseries (2009, p. 507) advocate “Population policy through tradable procreation entitlements.” They favor “tradable procreation allowances and tradable procreation exemptions” to achieve “the optimal fertility rate.” They trace their proposal to Kenneth Boulding (1964), who seems to have been the first thinker to propose procreation vouchers. Hickey et al. (2016, p. 869) say “that the dire and imminent threat of climate change requires an aggressive policy response, that it is reasonable to think that this response should include population engineering.” “Further,” they argue, “aggressive implementation of well-designed choice-enhancing, preference-adjusting, and incentivizing interventions aimed at reducing global fertility would be morally justifiable and potentially effective parts of a global population engineering program.” Policies that do not “consider population as a variable to be manipulated,” they warn, “might turn out to be too little too late.” The greater the supposed urgency of global warming and income inequality, the more likely we are to have coercive population policies.
O’Neill et al. (2010, p. 17525) claim to have shown “that reduced population growth could make a significant contribution to global emissions reductions.” They find “that family planning policies would have a substantial environmental cobenefit.” Citing O’Neill et al. (2010), Johns Hopkins ethicist Travis Rieder has described children as “externalities” (Ludden 2016). “Rieder proposes that richer nations do away with tax breaks for having children and actually penalize new parents. He says the penalty should be progressive, based on income, and could increase with each additional child” (Ludden 2016). Rieder has appeared on the television show “Bill Nye Saves the World.” His comments on that show induced the host to ask: “So should we have policies that penalize people for having extra kids in the developed world?” (Rousselle 2017). Rieder responded, “So, I do think we should at least consider it.” Thus, coercive population policy measures in the United States are beginning to be vetted with the general public as a method of combatting the threat of global warming. Because global warming is often represented to the public as a catastrophic threat, it seems only reasonable to fear that coercive population policies will gain political acceptance in the relatively near future. We should again recall Berger and Luckmann’s (1966, p. 88) observation that the “general population is intimidated by images of the physical doom that follows” when experts are not obeyed.
Merton (1937, 1945, 1976) may have been too sanguine about expert power. He looked to the social structure of science and declared the “quest for distinctive motives” of scientists to be “misdirected” (1937, p. 559). He attributed the epistemic merit of science to the social structure of science rather than any personal merit of scientists. This connection between social structure and the epistemic performance of experts, in this case scientists, is important and valuable. As my earlier brief discussion suggests, however, Merton did not give adequate weight to the risk that an expert might misbehave or make mistakes. The discussion of sociological ambivalence in Merton and Barber (1976) illustrates the point.
Merton and Barber (1976) identify many forms of “sociological ambivalence” (pp. 6–12). They focus, however, on the “core type” in which “incompatible normative expectations” are “incorporated” in a social role, social status, or set of simultaneously occupied social positions. So defined, “sociological ambivalence” is about the tensions within recognized and socially approved norms. It is not about when, whether, and how professionals might deviate from such norms. Thus, Merton and Barber (1976) obscure and ignore potential problems with experts by the very definition of “ambivalence,” which discourages any examination of misbehavior or unconscious bias among experts. Tellingly, when discussing the frustrations a professional’s client may feel, Merton and Barber (1976) say: “We focus on frustrations induced by the professional living up to his role” (1976, p. 26). And they do not distinguish the client’s reactions to “his doctor, his lawyer, his social worker or his clergyman” (pp. 26–7). But the “social worker” will not generally have been chosen by the client, whereas their clergyman (in the United States) typically is chosen under conditions of free competition. In other words, the social worker wields the power of the state, whereas the clergyman has only the “power” of persuasion. As I will discuss in later chapters, doctors and lawyers are intermediate cases: Licensing restrictions and professional organizations give these experts a degree of epistemic monopoly, but their power is at least somewhat mitigated by the client’s power to choose among certified experts. For Merton and Barber, however, it is always the relatively reliable expert interacting with a potentially recalcitrant nonexpert who is in need of the expert’s ministrations, any recalcitrance notwithstanding.
Merton and Barber are insensitive to the risk of expert failure. Indeed, when at last they acknowledge that clients may “suspect the motivations of the professionals who minister to their needs” (p. 27), the focus is entirely on the clients’ anxiety and not the prospect that such anxiety may well be justified. They say, “And once again, we emphasize that we are not dealing here with cases in which professionals do, by the standards of the time, exploit the troubles of their clients. We are concerned with legitimate practices and patterned situations, not with deviant practices, that produce ambivalence” (p. 28).
Merton (1937) saw that the epistemic success of science was a function of its social structure. He says, for example, that its disinterestedness is not a matter of the superior morality of scientists. “It is rather a distinctive pattern of institutional control of a wide range of motives which characterizes the behavior of scientists.” Scientists are not unusually ethical; rather, “the activities of scientists are subject to rigorous policing” (1937, p. 559). This link between social structure and epistemic performance was pathbreaking, and it remains important. And yet, when Merton turns to the professions, he neglects to ask when and whether such experts are “subject to rigorous policing.” He cordons off considerations of “deviant practices” and considers only the “ambivalence” arising from “legitimate practices.” The issue of power fades behind a screen of complacency about the expert’s expertise. This limits the utility of his analysis of ambivalence for an economic theory of experts.
The point is not always to constrain the power of the experts. As we saw in Chapter 3, Socrates seems to have sought to empower experts and called on Athenians to obey them. Cole (2010) wants to empower a “knowledge elite” within forensic science. Using medicine as his model, he explicitly calls for greater “hierarchy” in forensic science to empower the knowledge elite. He thus wishes to increase the power of some experts to reduce the power of others. Experts in the knowledge elite are reliable. Experts below the knowledge elite are unreliable. The power of these unreliable experts must be constrained by empowering the reliable experts. It seems fair to suggest that Cole tends to view nonexperts as fundamentally powerless, at least within the context of the criminal justice system. It is for this reason that he seeks a hierarchical and not a democratic solution to the problem of experts. (See my exchange with Cole in the Fordham Urban Law Journal: Cole 2012; Koppl 2012a.)
Ethics enter theoretical treatments of experts in at least three ways. First, there is the question of the virtue of experts. More virtuous experts are generally considered more reliable. And if experts are imagined to be more virtuous than nonexperts, as in much of the nineteenth-century literature on expert witnesses, it may seem sensible to minimize the power of nonexperts so that they may receive instruction from their betters. Second, there is the question of what ethical norms should constrain experts. Finally, there is the question of what social mechanisms would induce or constrain experts to act within ethical norms.
Experts are sometimes viewed as virtuous, and this virtue is often taken to bolster their supposed epistemic superiority over nonexperts. Thus, the supposed epistemic merit of the expert is in part attributable to their moral character. In Chapter 3 we saw this attitude expressed in both the Socratic tradition of philosophy and much of the nineteenth-century literature on expert witnesses. For example, Angus Smith (1860, p. 141) says:
Scientific men are bound together by mutual beliefs in a stronger manner than the community at large, and if placed in this honourable and independent position, they will act according to their knowledge and character, and cause to cease much unnecessary contradiction and opposition. Being bound to speak the truth, the whole truth, and nothing but the truth, they will feel in honour bound to do so when an opportunity offers.
Smith’s opinion that the combination of science and virtue ensures the correctness of expert opinions has arisen in forensic science as well. Until recently, the suggestion that forensic scientists might be subject to unconscious bias was widely considered a challenge to the moral character of forensic scientists. An internationally prominent forensic scientist and forensic science researcher once heatedly exclaimed to me: “But I am trained to be objective!”
More virtuous experts are not necessarily more reliable. Koppl and Cowan say that “some of the very qualities that may make a forensic scientist a good person may induce unconscious bias and consequent error” (2010, p. 251). If, for example, the forensic expert knows that the case they are working on is that of a heinous double murder, their very human decency and compassion for the victim may skew their judgment toward finding a “match” when there is no match at all.
The view among forensic scientists has often been that the moral character of the forensic scientist ensured the correctness of the forensic scientist’s opinion. But this view within forensic science, as in the quote from Angus Smith (1860), requires that such virtue be combined with “science.” It is only the specially trained expert whose moral uprightness ensures a correct opinion in the given domain of expertise. For this reason, it has been important to the apologists for forensic science to insist on the scientific nature of forensic science.
In the sort of view represented above by Angus Smith (1860), “science” is also a guard against diversity of opinion. (See also Taylor 1859, p. 703 and Cook 1994.) It is necessary to resist and deny diversity of opinion if “science” and the expert’s personal virtue are to be held up as sufficient to ensure a correct expert opinion. Thus, forensic scientists have claimed a “zero error rate” for their methods, and they have testified that their results are correct to a “reasonable degree of scientific certainty” (NAS 2009; NIST 2011).
The view that experts are uniquely virtuous seems inconsistent with the view that experts require a code of ethics. Writers who question the reliability of experts are more likely to call for the explicit promulgation of an ethical code. Levy and Peart (2017) propose a code of ethics for experts. Such a code would mitigate the risks of partiality in expert opinion. A vital minimal requirement to this end is transparency. “[A]s a preliminary matter … it is critically important for experts to reveal information relevant to their financial interests” (p. 314). Sympathy for the client or for other experts may also create bias, as might a commitment to preferred policies or general frameworks for policy analysis. “Such sympathetic connections are more clearly revealed by detailing the history of one’s work, including not least one’s consulting history and the policy positions one has advocated in consulting and academic work” (p. 315).
Koppl and Cowan (2010, p. 241) say:
The ethics code of the American Statistical Association (ASA) comes to 3,395 words, none of which concern procedures in case of a violation. The ethical code of the American Academy of Forensic Sciences (AAFS) is 1,384 words long. Of these, however, all but 145 are preamble, section titles, or (mostly) procedures in case of a violation. Of these 145 words, 62 are devoted to saying that members must not act contrary to the “interests and purposes of the Academy” and 31 are devoted to saying that members cannot give any opinion as that of the AAFS without prior permission. That leaves 52 words for the ethics of forensic science analysis. Thus, the ethical code of American Statistical Association has over 65 times as many words devoted to ethical conduct than the ethical code of the American Academy of Forensic Sciences. This difference of one order of magnitude is explained by the greater specificity of the ASA code. Vague ethical guidelines are not likely to provide more than modest help in error prevention, correction, and detection.
Since the quoted article was written, the ethics code of the AAFS has been trimmed substantially.
A code of professional ethics should help the professional decide what actions and inactions are wrong and what actions and inactions are ethically acceptable. It may also be a mechanism for inducing ethical behavior. This is achieved in part by the informational function of the code. Most professionals have at least some desire to behave ethically. This desire may spring in part from the professional’s sense of identity with their profession. Simply articulating and promulgating a code of ethics will, then, have at least some effect on the behavior of the relevant professionals. A code also creates a standard for others to use in judging a professional, creating the possibility of censure. Such secondary effects of a code of ethics will also influence the behavior of professionals.
The theoretical literature addressing the problem of experts includes other mechanisms for helping to ensure the ethical behavior of experts. These include discussion and democratic control, which I consider below, as well as regulation and licensing restrictions. If experts are potentially unreliable, but subject to competition, they may be driven toward more ethical behavior, depending on the structure of market competition. Adam Smith saw such an ethical benefit in free competition among religions, at least if we view “candour and moderation” as ethical norms:
The teachers of each sect, seeing themselves surrounded on all sides with more adversaries than friends, would be obliged to learn that candour and moderation which is so seldom to be found among the teachers of those great sects whose tenets, being supported by the civil magistrate, are held in veneration by almost all the inhabitants of extensive kingdoms and empires, and who therefore see nothing round them but followers, disciples, and humble admirers.
“Reflexivity” is the methodological requirement that a theory apply to itself (Bloor 1976, pp. 13–14; Pickering 1992, pp. 18–22). The problem of reflexivity arises most naturally when the theorist views experts as unreliable. The theorist themself may be considered an expert. Is the theorist’s theory, therefore, unreliable? When experts are viewed as reliable, reflexivity is less likely to be a problem. The theorist’s theory conveys the reliable expert’s opinion that expert opinions are reliable. Reflexivity is closely related to the anthill problem. The reliable expert imagines themself above the anthill looking down. To be consistent, the theorist who warns of unreliable experts must recognize themself as an ant in the anthill and emphasize this fact in their theory.
Turner (1991, 2003) discusses some of the issues as they arise in science studies. He criticizes Jasanoff and others when he disparages “the inner contradictions of the attempt to be anti-essentialist (or ‘social constructionist’) about science and at the same time to provide some sort of external God’s eye view ‘critique’ with ‘policy’ implications which bedevils ‘science studies’ attempts to be normative” (Turner 2003, p. viii). Jasanoff (2003, p. 394) in her turn also invokes a notion of reflexivity when she criticizes Collins and Evans (2002) by saying: “Nor is there an objective Archimedean point from which an all-seeing agent can determine who belongs, and who does not, within the magic ring of expertise.”
In Chapter 11 I will argue that we may use the techniques of experimental economics to skirt the vexed problem of whether to second-guess expert opinion. The use of these methods may mitigate problems of reflexivity in some degree. By constructing the truth in the human-subjects laboratory, we create for ourselves the “God’s eye view” disparaged by Turner. This perspective is valid only in the laboratory, however, and does not allow us to judge competing scientific claims in, say, climate change or epidemiology. I will argue in Chapter 11 that it helps us to judge how different institutions affect the chance of expert failure.
There are at least four other approaches to handling the potential paradoxes of reflexivity. First, the theorist may exempt themself from the theory. Second, the theorist may embrace the paradoxes of self-reference, but use irony and satire to prevent those paradoxes from destroying the theory from within. Third, the theorist may attempt to construct their theory in such a way that complete self-reference does not harm the theory. It is not clear that this strategy has been used successfully. Finally, in what may be a variant of the third strategy, the theorist may identify limits of explanation that prevent paradoxes from arising.
A theorist may skirt reflexivity problems by somehow exempting themself or their theory from the requirement of reflexivity. Marxism (in, at least, some of its variants) adopted this strategy. All ideologies reflect material forces. But the revolutionary vanguard, because of its unique position in history, can see things as they really are. They alone are exempt from the false consciousness implied by historical materialism. Exempting the theorist from their theory introduces an obvious asymmetry that may allow critics to complain of inconsistency.
Satire may be a vehicle for avoiding the paradoxes that might otherwise defeat efforts to put the theorist in the model. The satirist’s ironic voice admonishes readers to doubt their own motives and self-interpretations as well as the motives and self-interpretations of others. But this same skepticism must be applied with equal vigor to the satirist as well. Satire entails an invitation to the reader to be skeptical of the satirist. If satire and irony do indeed help bring the theorist into the model, they may be the true voices of liberalism, as with Fielding’s Tom Jones. The satirical approach to reflexivity reflects the view that experts are unreliable. It may also reflect the view that nonexperts are potentially empowered. The satirist asks the reader to be empowered by doubting even the satirist themself.
Rather than exempting the theorist, we may “put the theorist in the model,” as David Levy and Sandra Peart have expressed it to me. (In Levy and Peart 2017, p. 192, they say “we need to put the economist in the model” because economic “experts share motivational structure with those we study.”) It is hard to put the theorist in the model. But such reflexivity may be required if the theorist is to avoid modeling themself as above the anthill looking down, at least when the model does not naively assume that we are all virtuous and our thoughts and motives self-transparent. Identifying limits of explanation may make it easier to put the theorist in the model.
Peart and Levy (2005, p. 3) define “analytical egalitarianism” as the social scientist’s presumption that “humans are the same in their capacity for language and trade; observed differences are then explained by incentives, luck, and history, and it is the ‘vanity of the philosopher’ incorrectly to conclude that ordinary people are somehow different from the expert.” They draw the phrase “vanity of the philosopher” from chapter 1, section 4 of Smith (1776). They have characterized this methodological norm as requiring that “differences among types of agents” in one’s model are “endogenous to the model” and that “the theorist” be viewed as of “the same type as the agents in the model” (Levy and Peart 2008a, p. 473).
Knight struggled with the anthill problem, the issue of how to put the theorist in the model, as a passage in Turner (1991) beautifully illustrates. In a letter to the sociologist George A. Lundberg, who is generally classified as a “neo-positivist” (Wagner 1963, p. 738, n. 7; Shepard 2013, p. 17), Knight asks
why thinking about social questions shows such an overwhelming tendency to run into a sales competition between different forms of verbal solitaire. The social problem itself is, of course, largely that of how men’s minds work, and especially important are the minds of those who achieve some position of articulate bid for leadership, and among these there is surely no more challenging case than the mind of the behaviorist. In this connection, the principle of knowing oneself, or beginning at home, has a peculiar appropriateness. The first thing for behaviorism to explain is behaviorism! If it will take this problem first, instead of embarking on the emotional-religious spree of attempting to convert everybody to its peculiar type of enlightenment, it will immediately render the inestimable service of making itself harmless.
Knight wants the theory to explain the theorist.
Buchanan (1959), a student of Knight, also struggles with the anthill problem. He wishes to avoid imposing the economic expert’s values on others, while preserving an advisory role for that expert. As but one equal member of the community, the economist may seek agreement, which is to say unanimity. Recalcitrant or otherwise unreasonable members of the community, however, will likely make “full agreement” impossible. The argument for unanimity assumes “that the social group is composed of reasonable men” (p. 134). In the end, therefore, the “political economist … is forced to discriminate between reasonable and unreasonable men in his search for consensus” (p. 135). Rather than “absolute unanimity” we seek “relative unanimity.”
The move to relative unanimity seems doubtful. Relative unanimity is not agreement at all. It is an imagined agreement among imagined “reasonable men.” We have with Buchanan, as with Rawls, the attempt to deduce what “reasonable” persons would choose in a purely imaginary and hypothetical context that does not and cannot exist in reality. There is no discussion or bargain, only the theorist’s imagined discussion or bargain. But then the theorist is no longer one among equals. The theorist instead becomes everyone! (See Gordon 1976, pp. 577–8.)
In the end, then, “relative unanimity” does not prevent the “political economist” from imposing themself on others and thus breaking the moral and epistemic reflexivity Buchanan sought. By deciding what is “reasonable” and imagining what “reasonable men” would agree to, the political economist decides for the polity which opinions count. Opinions the theorist cannot imagine are thereby excluded and do not count.
F. A. Hayek also addressed reflexivity. The Sensory Order (Hayek 1952a) outlined a theory of mind in which mind is emergent from neural connections. These connections form at any moment a classificatory system. Ordinary thinking and scientific explanation are classificatory activities. Hayek argued that “any apparatus of classification must possess a structure of a higher degree of complexity than is possessed by the objects which it classifies; and that, therefore, the capacity of any explaining agent must be limited to objects with a structure possessing a degree of complexity lower than its own” (1952a, p. 185). It follows by logical necessity, for Hayek, that “the human brain can never fully explain its own operations” (1952a, p. 185), because it would have to be more complex than itself to do so. Later, Hayek would associate this argument with two mathematical results: Cantor’s diagonal argument and Gödel’s incompleteness theorem (1967b, p. 61, n. 49 and p. 62).
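The mathematical result Hayek cites can be stated compactly. The sketch below is a standard rendering of Cantor’s diagonal argument, offered only as a gloss on the analogy; the notation is mine, and Hayek himself never put his complexity claim in this formal dress. Read S as the resources available to a classifying apparatus and the subsets of S as the classifications to be drawn among them.

```latex
% Cantor's theorem, offered as a gloss on the analogy; notation is illustrative only.
% Claim: for any set $S$ there is no surjection $f\colon S \to \mathcal{P}(S)$,
% so $|S| < |\mathcal{P}(S)|$.
\[
  \text{Given any } f\colon S \to \mathcal{P}(S), \quad
  \text{let } D = \{\, x \in S : x \notin f(x) \,\}.
\]
\[
  \text{If } f(d) = D \text{ for some } d \in S, \text{ then }
  d \in D \iff d \notin f(d) = D, \text{ a contradiction.}
\]
\[
  \text{Hence } D \text{ is not in the range of } f, \text{ and } |S| < |\mathcal{P}(S)|.
\]
```

On this reading, no apparatus whose resources are drawn from S can index every classification of S, which is one way to hear Hayek’s claim that the brain cannot fully explain its own operations.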
Hayek applied his notion of the “explanation of the principle” (1952a, p. 182) to the social sciences as well (Hayek 1967b). An explanation of the principle is one in which only general features of some phenomenon are accounted for. When the phenomena studied are sufficiently complex, this is all that can be hoped for. Thus, Hayek resolved the problem of reflexivity by identifying logically necessary and insuperable limits of explanation.
Earlier we saw Lee Ellis argue for the chemical castration of young men with “crime-prone genes.” Writers who, like Ellis (2008), view experts as reliable and nonexperts as powerless do not usually subject their theories to a reflexivity requirement. Ellis’s essay illustrates, however, the importance of the reflexivity requirement that all agents of the system be modeled. He models persons with “crime-prone genes,” but not the experts who would administer sterilization policies. He consequently wishes to place discretionary power in the hands of persons unlikely to exercise such power with the Solomonic disinterest and wisdom his policies would require even under the assumption that his eugenic ideas are correct. In the theory of experts, as in all of social science, all agents must be modeled if we are to minimize the risk of proposing policies that would require some actors to behave in ways that are inconsistent with their incentives or beyond human capabilities.
Nonexperts, I have said, may be powerless or empowered. If experts are defined by their expertise, then it may be hard to conceive of an empowered nonexpert unless that nonexpert has some approximation to the expert’s expertise. This sort of thinking crops up in discussions of the “well-informed citizen,” as with Schutz (1946). The well-informed citizen, then, has a natural place in theories that view experts as fundamentally reliable and nonexperts as empowered. If experts are defined by their expertise and seen as fundamentally reliable, then nonexperts can only be empowered by knowledge. Knowledgeable persons who are not experts are, by definition, well-informed citizens. If experts are unreliable and nonexperts are empowered, the well-informed citizen might have a role. Levy and Peart (2017), discussed in this section, is an example. The well-informed citizen would seem to have no role to play in a theory in which nonexperts are powerless. As I briefly discuss in this section, however, Turner (2003) attributes to Karl Pearson the desire to train citizens to be “junior scientists.”
Schutz (1946) distinguishes the “expert,” the “well-informed citizen,” and the “man on the street.” The “man on the street” is entirely unreflective. He applies “recipes” in life “without questioning” them (p. 465). The well-informed citizen stands between the expert and the man on the street. While not an expert, “he does not acquiesce in the fundamental vagueness of a mere recipe knowledge or in the irrationality of his unclarified passions and sentiments. To be well informed means to him to arrive at reasonably founded opinions in fields which as he knows are at least mediately of concern to him although not bearing upon his purpose at hand” (p. 466, emphasis in original).
These three figures are “ideal types.” Schutz was perfectly aware that no one is a pure man on the street in all domains. The most uncurious soul will usually have at least some opinions that go beyond mere acceptance of recipes, even though many of them may not be entirely “reasonably founded.” The social distribution of knowledge implies that no one can be an expert in all domains, nor well informed in all fields. Schutz places his hopes in well-informed citizens to balance the competing claims of different types of experts and impress their well-informed views upon the man on the street. “It is the duty and the privilege,” he concludes, “of the well-informed citizen in a democratic society to make his private opinion prevail over the public opinion of the man on the street” (p. 478). In this essay, Schutz views a part of the citizenry, those who are “well informed,” as empowered, and the rest as relatively powerless.
Barber (2004, p. xii) interprets Schutz (1946) to say “that the opinion of the well-informed citizen ought to take precedence over that of experts and the uninformed.” This way of putting it may overstate the power of the well-informed citizen to second-guess the expert. Schutz says, “[I]t is the well-informed citizen who considers himself perfectly qualified to decide who is a competent expert and even to make up his mind after having listened to opposing expert opinions” (p. 466). The well-informed citizen’s opinion is always derived from the more informed opinions of experts. Each expert, however, is locked within the frame of their specialty. “The expert starts from the assumption not only that the system of problems established within his field is relevant but that it is the only relevant system” (Schutz 1946, p. 474). This implies both a certain narrowness and disengagement from ethical norms. We “can expect from the expert’s advice merely the indication of suitable means for attaining pregiven ends, but not the determination of the ends themselves” (p. 474). The well-informed citizen weighs and balances the competing frames of diverse experts. In this sense we may say, with Barber, that the well-informed citizen’s opinion “ought to take precedence over that of experts.” But the well-informed citizen is always choosing among expert opinions. And it is in this sense that the well-informed citizen’s opinion is always derived from the more informed opinions of experts.
With Schutz, the well-informed citizen disciplines the man on the street, thereby helping to ensure that the right expert opinion will “prevail.” Jasanoff (2003) gives the well-informed citizen the opposite role. She says: “expertise is constituted within institutions, and powerful institutions can perpetuate unjust and unfounded ways of looking at the world unless they are continually put before the gaze of laypersons who will declare when the emperor has no clothes” (2003, p. 398). Schutz seems to take it for granted that the well-informed citizen will support the expert. Jasanoff seems to take it for granted that the well-informed citizen will oppose the expert.
It is difficult to adjudicate the competing opinions of Schutz and Jasanoff, because it is not clear when the public opinion reflects the well-informed citizen and when it reflects the man on the street. One observer will say that members of the public who resist, say, Darwinism are uninformed. By definition, the well-informed citizen supports Darwinism. Another observer might say that only well-informed citizens understand intelligent design well enough to exercise a Jasanoffian resistance to the spurious experts trying to impose dubious Darwinism on innocent schoolchildren. (Admittedly, Blount et al. 2008 has made the example of intelligent design outdated.) Our assessment of which citizens are “well informed” may not be separable from our assessment of the expert opinion in question. It seems unlikely, moreover, that any very simple statement about the role of the well-informed citizen will be satisfactory for a theory of experts. Social processes of opinion formation are too complex to reduce to the sort of simple formulae we have seen from Schutz (1946) and Jasanoff (2003).
Turner (2003, pp. 97–128) discusses authors who in one way or another address the role of experts through “citizen competence.” He attributes to Dewey and Pearson ideas that amount to making the competent citizens “junior scientists.” Such figures were looking not so much to control experts as to make society more scientific.
Turner extols the more skeptical views of James B. Conant, who thought citizens needed to know not science, but (in Turner’s words) “how science works” (p. 121). Conant’s educational reforms were meant, Turner reports, “to produce members of a liberal public” (2003, p. 121). Conant did not think a well-informed citizenry would be sufficient to control experts, however.
Levy and Peart (2017) suggest that randomly chosen citizens can play the role often assigned to well-informed citizens. Randomness replaces informedness. As we have seen, Levy and Peart (2017) emphasize transparency, which tends to empower the laity. And they offer the relatively concrete proposal to mitigate expert bias by “extend[ing] the jury system to that of regulation.” They say: “Instead of appointed regulatory bodies with their experts making decisions, where the only people with a voice in the matter have a particular interest in the issues, we propose that decisions be made by people randomly selected, who have the issues explained to them by contending experts” (2017, p. 242). They do not discuss how this system might be put into practice. They stipulate that selection shall be random without specifying a social mechanism that will generate such a result. In this regard their proposal suffers the same infirmity as Sanford Levinson’s call for random selection of delegates to a constitutional convention (Devins et al. 2016, pp. 242–3). Levy and Peart (2017) compare their proposal to jury selection, which they describe as “random.” But the recent ruling of the US Supreme Court in Foster v. Chatman (14–8349, May 23, 2016) illustrates the complaint that jury selection is frequently nonrandom and, in particular, racially biased. The court found that the prosecution had excluded jurors because they were black and ruled against this form of nonrandomness. Unfortunately, it does not seem likely that the decision will make randomness readily attainable in the future. The trial in question was held in 1987; thus, the violation went unremedied for almost thirty years. Moreover, at least one journalist reports, “The decision was narrowly focused on Mr. Foster’s jury selection and is unlikely to have a broad impact. Evidence of the sort that surfaced in Mr. Foster’s case is rare, and the [precedent on which the decision was based] is easy to evade” (Liptak 2016). Randomness can be a good thing, but it can also be hard to achieve.
Colander and Kupers (2014, pp. 173–4) provide an interesting twist on the theme of the well-informed citizen. In the context of economic policy, they define the “problem of experts” as “the problem that there is no one to keep the experts humble.” They explain: “It’s not that they aren’t experts; it’s that the problems they are facing are so complex that no one fully understands them.” They “advocate … the idea of educated common sense. Educated common sense involves an awareness of the limitations of our knowledge that is inherent in” the conceptual framework enabled by modern complexity theory, including the mathematical theories of complexity associated with the Santa Fe Institute. “The complexity policy revolution involves not merely changing theory around the edges; it involves experts changing the way they think about models and policy.” They say, “A central argument of this book is that with complexity now being several decades old as a discipline (and much older as a sensibility), policy that ignore this frame fails the educated common sense standard.” Thus, the expert should arm themself with the sort of “educated common sense” Schutz associated with the well-informed citizen. Instead of waiting to be told, à la Jasanoff, that they are emperors with no clothes, the experts must learn enough humility to know and confess, indeed vigorously affirm, their own nakedness.
The idea of democratic control of experts is related to that of the well-informed citizen. The idea is often that we will gain from experts but somehow control them through democratic means. As with the well-informed citizen, the general idea of democratic control of experts probably best fits theories with empowered citizens, but surprising combinations of ideas may sometimes be found. I will briefly discuss Wilson (1887), who views nonexperts as fundamentally powerless. And yet, for Wilson, the principle of democracy will somehow ensure that experts serve the commonweal.
The problem of expert control frequently arises in the context of government policy built on expert advice. Merton (1945) and Turner (2001) are examples. In this context, democracy is sometimes viewed as a safeguard against problems with experts. Loosely, the reason may be that the democratic electorate employs the experts, who are somehow bound, therefore, to serve the people. Or, loosely, the reason may be that the electorate somehow decides who counts as an expert or what counts as expertise, thereby retaining ultimate control.
Jasanoff gives the first sort of reason when she says that “public engagement is needed in order to test and contest the framing of the issues that experts are asked to resolve” and that “participation is an instrument for holding expertise to cultural standards for establishing reliable public knowledge, standards that constitute a culture’s distinctive civic epistemology.” She gives the second sort of reason when she invokes, as we have seen already, “the gaze of laypersons who will declare when the emperor has no clothes” (2003, pp. 397–8). Jasanoff makes two further defenses of “participation” in the same passage. In a strictly normative remark, she says: “in democratic societies … all decisions should as far as possible be public” (p. 397). And finally, she appeals to a notion of the well-informed citizen when she says, “participation can serve to disseminate closely held expertise more broadly, producing enhanced civic capacity and deeper, more reflective responses to modernity” (p. 398).
As we have seen, Merton (1945) seems to view the expert’s position of “dependency” as sufficient to ward off at least the worst abuses of experts. Turner (2001) takes the second approach when he says, “Expertise is a deep problem for liberal theory only if we imagine that there is some sort of standard of higher reason against which the banal process of judging experts as plumbers can be held, and if there is not, it is a deep problem for democratic theory only if this banal process is beyond the capacity of ordinary people” (p. 146). Turner has confidence in the robustness of democracy to the problem of experts. But, invoking James B. Conant, he dismisses the idea of “democratic control of science” as a “dangerous illusion” (2003, p. vii, emphasis added).
Wilson (1887) acknowledged fears of “an offensive official class, – a distinct, semi-corporate body with sympathies divorced from those of a progressive, free-spirited people, and with hearts narrowed to the meanness of a bigoted officialism” (p. 216). But such fears, he assures his readers, neglect the vital principle “that administration in the United States must be at all points sensitive to public opinion” (p. 216). Wilson stipulates that the officialdom shall have and preserve a democratic spirit. He says, “The ideal for us is a civil service cultured and self-sufficient enough to act with sense and vigor, and yet so intimately connected with the popular thought, by means of elections and constant public counsel, as to find arbitrariness or class spirit quite out of the question” (p. 217).
Wilson’s response to the fear of bureaucracy is breathtaking. I can buy a drunk a bottle of whiskey and stipulate that he will use it only for medical emergencies. My stipulation will not ensure such restraint, however. In this analogy, the drunk is Wilson’s “corps of civil servants prepared by a special schooling and drilled, after appointment, into a perfected organization, with appropriate hierarchy and characteristic discipline” (p. 216) and the whiskey is power.
Discussion is a recurrent theme in the literature on experts. The well-informed citizen is usually a relatively unimportant figure in theories that emphasize discussion. If discussion informs the citizen, then we do not need to invoke the prefabricated figure of the well-informed citizen. Theorists who emphasize discussion often think that discussion empowers the public. They tend to fit, therefore, in the right-hand column of Table 2.1. Both theories that view experts as reliable and theories that view experts as unreliable may emphasize discussion. Collins and Evans (2002) is an example of the former and Jasanoff (2003) of the latter.
Several important names are associated with the role of discussion in politics, including John Stuart Mill, John Rawls, and Jurgen Habermas. Both Durant (2011) and Turner (2003) treat Habermas as an important source of thinking about experts. Durant (2011) places weight on Rawls as well. Levy and Peart (2017, p. 35) draw our attention to a passage in Mill’s On Liberty addressing the theme of contesting experts:
I acknowledge that the tendency of all opinions to become sectarian is not cured by the freest discussion, but is often heightened and exacerbated thereby; the truth which ought to have been, but was not, seen, being rejected all the more violently because proclaimed by persons regarded as opponents. But it is not on the impassioned partisan, it is on the calmer and more disinterested bystander, that this collision of opinions works its salutary effect. Not the violent conflict between parts of the truth, but the quiet suppression of half of it, is the formidable evil.
In Chapter 8 I will discuss the important model of Milgrom and Roberts (1986), which employs a logic close to Mill’s to show how competing experts may lead a disinterested party to the “full-information opinion.”
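To make that logic concrete, here is a toy simulation in Python. It is a stylized sketch of the shared intuition, not a rendering of the Milgrom and Roberts model itself: the evidence values, the rule that each expert discloses only the facts favoring her side, and the decision maker’s simple averaging are all illustrative assumptions of mine.

```python
import random

# Toy sketch only: each piece of evidence is a signed number. Positive values
# favor a "high" opinion, negative values a "low" one. Experts can verifiably
# disclose evidence but cannot fabricate it, and each discloses only the
# pieces that favor her side. The decision maker averages what is disclosed.

random.seed(1)
evidence = [random.gauss(0.0, 1.0) for _ in range(50)]

pro_disclosure = [e for e in evidence if e > 0]    # expert arguing for a high estimate
con_disclosure = [e for e in evidence if e <= 0]   # the strongly opposed expert

full_information_opinion = sum(evidence) / len(evidence)

# With only one (pro) expert, the decision maker sees a biased subset of the record.
one_sided_opinion = sum(pro_disclosure) / len(pro_disclosure)

# With two strongly opposed experts, the disclosures jointly cover the whole record.
pooled = pro_disclosure + con_disclosure
competitive_opinion = sum(pooled) / len(pooled)

print(f"full-information opinion: {full_information_opinion:+.3f}")
print(f"one expert only:          {one_sided_opinion:+.3f}")
print(f"two opposed experts:      {competitive_opinion:+.3f}")
```

Because the experts’ interests are strictly opposed and neither can fabricate evidence, their disclosures jointly exhaust the record, and even a naive decision maker arrives at the full-information opinion; Chapter 8 takes up the conditions under which this happy result actually holds.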
Frank Knight attempted to use the idea of discussion to ensure that the economic expert does not impose solutions on others. “For Knight,” Levy and Peart (2017, p. 48) explain, “the role of the economic expert was twofold: 1. Economic experts take the values (norms) of the society as given; 2. Proposals for change should be submitted for discussion in a democratic process.” “But discussion,” Knight says, “must be contrasted with persuasion, with any attempt to influence directly the acts, or beliefs, or sentiments, of others” (quoted in Levy and Peart 2017, p. 48). Citing Knight and the broader “discussion tradition” they identify within economics, Levy and Peart argue that experts should be “constrained by discussion and transparency” (2017, p. 7).
Earlier, we saw that Jasanoff (2003) defends “participation,” which is both a democratic idea and a discussion idea. Insisting that the “project of looking at the place of expertise in the public domain” is “a project in political (more particularly democratic) theory,” Jasanoff invokes “intense and intimate science-society negotiations” (2003, p. 394). Durant (2011) discusses “the debate between Sheila Jasanoff and Brian Wynne, on one side, and Harry Collins and Robert Evans, on the other” (p. 691). He associates “the approach of Collins and Evans to [John] Rawls’s notion of public reason, and more generally to a form of liberal egalitarianism,” and he associates “the theorizing of Jasanoff and Wynne to the contemporary project of identity politics, and more generally to [Jurgen] Habermas’s discourse ethics” (p. 692).
Levy and Peart (2017) identify a “discussion tradition” in economics. They view themselves as a part of this tradition, and they view the tradition as vital to their theory of experts and expertise. Levy and Peart (2017) associate the discussion tradition with Adam Smith and J. S. Mill among the classical economists and Frank Knight, James Buchanan, Vernon Smith, and Amartya Sen among more recent figures. Economists of this tradition “have expounded upon the rich moral and material benefits associated with discussion – benefits that contribute to a well-governed social order” (p. 30). While this tradition did not produce an economic theory of experts, it did provide some anticipations of themes relevant to the theory.
Levy and Peart (2017) do not include Bernard Mandeville in the discussion tradition, and for good reason. For discussion to get us to the truth, according to this tradition, it must be constrained. Levy and Peart explain: “The requirements for discussion, as these economists used the term, are stringent. Reciprocity and civility are needed and so, too, is real listening and moral restraint. In this tradition one accepts the inevitability of an individual ‘point of view’ and the good society is one that governs itself by means of an emergent consensus among points of view” (p. 30). Mandeville could not have taken such constraints seriously.
Discussion, for Mandeville, was always an occasion for deception of self and others. We are taught the “Habit of Hypocrisy, by the Help of which, we have learned from our Cradle to hide even from our selves the vast Extent of Self-Love, and all its different Branches” (1729, vol. I, p. 140). Indeed, “it is impossible we could be sociable Creatures without Hypocrisy” (1729, vol. I., p. 401). Hypocrisy and self-deception are too deeply ingrained in our nature to hope that discussion will lead to truth even when it is subject to “stringent” moral constraints. Nor can we hope for such constraints as “real listening” to be honored in practice. As we shall see in Chapter 6, however, Mandeville did think that we could slowly acquire “good Manners,” which do entail civility and reluctant reciprocity. We can learn to get along, and accumulated experience can lead to good practices such as skillful sailing and good manners. But truth is more elusive.
Mandeville’s deep doubts about our ability to see the truth raise the question of how he could pretend that he had somehow overcome universal hypocrisy and self-deceit to speak the truth of human nature and social life. Do not Mandeville’s own principles invite us to question his motives and doubt his arguments? It is hard to construct an explanatory model of social processes that can be applied equally to the theorist and others. We have seen that it is hard to put the theorist in the model and that such reflexivity is required if the theorist is to avoid modeling themself as special and somehow above ordinary persons, at least when the model does not naively assume that we are, all of us, virtuous and our thoughts and motives self-transparent. As I will argue in Chapter 6, Mandeville’s Fable was satiric. And I claimed earlier that satire is one vehicle to avoid paradoxes of self-reference. Mandeville did not put himself in the model. Rather, he invites the reader to put him in the model.
In an economic theory of experts, such as I am attempting to outline in this book, the question of market structure arises. Both the reliability of experts and the power of nonexperts may depend on market structure. While there are many market forms, we may broadly distinguish competition and monopoly. The theorist must then choose whether it is better that experts enjoy a monopoly or be subject to competition. This rough accounting is inadequate to the full complexity of events. But it helps us to organize theoretical treatments of experts. The economic concept of “competition” has the potential to create misunderstandings and must, therefore, be interpreted with caution.
We have seen Berger and Luckmann (1966) emphasize the dangers of monopoly in the market for experts. Earl and Potts (2004) discuss the “market for preferences” and Potts (2012) discusses “novelty bundling.” They consider businesses with expertise in new technologies or fashions. These businesses help less informed households to cope with novelty. The experts educate consumers on the possibilities and propose different combinations to them. Potts (2012, p. 295) illustrates novelty bundling with “fashion magazines such as Vogue,” which present readers with novel combinations of fabrics, clothing items, hairstyles, and so on. Earl and Potts (2004, p. 622) point to “product review websites such as Amazon.com.” Consumers do not know what they want when they are considering novel items and novel combinations. The experts help them form their (low-level) preferences. Thus, we may describe the market as a market for preferences. The notion of “novelty intermediation” in Koppl et al. (2015) builds on Earl and Potts (2004) and Potts (2012). I briefly commented earlier on Milgrom and Roberts (1986). In their model, competition between “strongly opposed” experts is beneficial because it allows a disinterested party to reach the “full-information opinion.”
Levy and Peart are oddly ambivalent on competition among experts. They clearly revile monopoly in expert opinion (Levy and Peart 2006). They note “the messy but perhaps salutary effects of competition in expertise” in a nineteenth-century British trial deciding whether information on contraception might be banned as obscene (Levy and Peart 2017, p. 108). And yet they favorably quote Frank Knight lamenting the supposed tendency of “competition [among economists] for recognition and influence” to “take the place of the effort to get things straight.” Knight sniffs at economists “hawking their wares competitively to the public by way of settling their ‘scientific’ differences” (Knight 1933, pp. xxvii–xxviii, as quoted in Levy and Peart 2017, p. 186). In this volume, I express an unambiguous preference for “competition” in the market for expert opinion. I have tried to emphasize, however, the importance of market structure. Market structures that we might plausibly dub “competitive” may be poor safeguards against expert failure if the market structure lacks rivalry, “synecological redundancy” (defined in Chapter 9), and free entry. Thus, Levy and Peart’s seeming ambivalence may reflect the importance of structural differences between various markets in expert opinion that might all be considered “competitive.” It seems fair to say that the themes of competition and market structure are undertheorized in Levy and Peart’s work on experts. In Chapters 8–11 I give an “economic” theory of experts and expert failure in which market structure is central.
Before closing this chapter I should briefly place “information choice,” as I will call the economic theory of experts, in the context of the larger literature on experts. We have already seen that it probably best fits in the “unreliable-empowered” category of the basic taxonomy of Chapter 2. In this section, I comment on the definition of “expert” and briefly lay out my stance on each of the themes discussed in this chapter.
If knowledge is dispersed, then everyone has specialized knowledge peculiar to their place in the division of labor. If experts are defined by their expertise, we are all “experts.” It is hard to see what intellectual work can be done by a definition of “expert” that fails to distinguish between experts and nonexperts. I define “expert” not by expertise, but as anyone paid for their opinion. I am not proposing a theory of expertise, but an economic theory of experts.
In my theory expert power is to be feared because it makes experts less reliable. At least two conditions give experts undue power. First, experts may have a kind of monopoly in which they become “the officially accredited definers of reality” (Berger and Luckmann 1966, p. 97). Second, they may choose for nonexperts, rather than merely advising them. I seek mechanisms tending to increase the diversity of expert opinions and reduce, thereby, the monopoly power of experts. And I generally prefer that experts be in a merely advisory role that preserves the autonomy of nonexperts.
I think Foucault was right to see knowledge imposition as a power issue. I oppose the rule of experts, which entails knowledge imposition. In my closing remarks I will say that the problem of experts mostly boils down to the question of knowledge imposition. Like Foucault, I think that when some people can impose unitary knowledge schemes on other people, the people imposed upon are oppressed in at least some degree. Unlike Foucault, however, I am a social scientist. I want to know how institutions work. Thus, my own analysis does not run in terms of “rationalities” or “discourses” or “disciplines” and their changing historical forms. My analysis is about who does what and why they do it. I feel bound to ensure, to the best of my ability, that the actions I impute to agents in my theory would be understandable to the real-world actors thus modeled (Schutz 1943, p. 147; Machlup 1955, p. 17). Foucault, instead, wished to “Refer the discourse not to the thought, to the mind or to the subject which might have given rise to it, but to the practical field in which it is deployed” (1972, p. 235).
I forcefully repudiate any suggestion that experts are more virtuous than nonexperts. I thus advocate a kind of ethical parity between experts and nonexperts. Good and substantive ethical codes tend to improve expert opinion, but are not powerful mechanisms to that end. Market competition has much more potential to improve outcomes.
I view reflexivity as a central issue. Following Hayek and others, I think that reflexivity implies limits to explanation. In particular, reflexivity implies that we cannot have the sort of causal theory of ideas that Bloor (1976) and others have attempted. Like Levy and Peart (2017) I place great importance on “putting the theorist in the model.” The theory, in other words, should not require the theorist to implicitly model themself as motivationally, cognitively, ethically, behaviorally, or in any other way different from the agents in the model. The theorist is but one more ant in the anthill.
My theory gives very little weight to any suggestion that the well-informed citizen has a special role to play in either disciplining experts or helping expert opinion prevail with the man on the street.
Democracy is a vital political principle and a bulwark against tyranny. But the logic of public choice theory (Buchanan and Tullock 1962) seems to suggest that democratic control of experts is unlikely to be very effective. The theory of regulatory capture (Stigler 1971; Posner 1974; Yandle 1983) seems to bolster such pessimism. Thus, in contrast to many other theoretical treatments of the problem of experts, I give little weight to the idea that the principle of democracy can somehow constrain experts from abusing their power. Democracy is not an effective check on expert power, error, or abuse. Rather, the rule of experts is inconsistent with pluralistic democracy.
As far as I can tell, the best opinion does not necessarily prevail in the market for ideas. But free and open discussion is nevertheless a bulwark against multiple evils, including expert failure and the abuse of power. No encomiums to discussion, however, can prevent the rule of experts from inducing expert failure. Thus, market structure is the fundamental issue rather than, say, the proper ethics of discussion.
I emphasize the importance of market structure for the regulation of expert opinion. Cowan and I have said that “Competition turns wizards into teachers” (Koppl and Cowan 2010, p. 254). I take a generally favorable view of “competition.” Unfortunately, words such as “competition” are subject to misunderstanding. The term “competition” covers very different market structures with very different epistemic consequences. Empty invocations of “competition” are no substitute for analysis. The word “competition” seems to suggest to many scholars ideas that have little or nothing to do with the sort of model I have in mind. In Chapter 5 I will review some of the economic concepts used in Part III of this study along with some common misunderstandings of them.
In this and the previous two chapters I have tried to show that there is a literature on experts. This literature sprawls across many scholarly disciplines and may easily seem to lack structure. Scholars in science and technology studies have made important contributions to this literature, and they cite and discuss one another’s work. The sociological and methodological literature on science is of relevance to the economics and sociology of experts. Butos and Koppl (2003) review this literature, which includes Bloor (1976), Kitcher (1993), Kuhn (1970), Latour (1987), Latour and Woolgar (1979), Merton (1937), Pickering (1992), and Polanyi (1962). This literature considers research science, however, rather than experts in general.
There is less coherence and no common conversation in the broader literature I have tried to identify. Scholars working along similar lines do not cite one another. Thus, Turner (2014) does not cite Peart and Levy (2005), and Levy and Peart (2017) in turn do not cite Turner (2001). The simple taxonomy of Table 2.1 may be helpful in creating some structure and coherence for this literature.
In this chapter I have identified several recurrent overlapping themes arising in the literature on experts. Besides the obvious issue of how to define “expert,” these common themes are power, ethics, reflexivity, the well-informed citizen, democratic control of experts, discussion, and market structure. For each theme I have tried to give at least some indication of what choices or strategies might be available for addressing that theme within the context of a theory of experts.
This and the previous two chapters consider the history of thought on experts. Chapters 8–11 lay out a theory of experts and expert failure. Hayek’s notions of spontaneous order and dispersed knowledge are central to this theoretical discussion. Thus, in the next three chapters I review these foundational concepts. I give Hayek’s idea of dispersed knowledge a lengthy discussion. I think this idea is every bit as important as Hayek claimed. But it is easily reduced to the banal and inconsequential remark that “knowledge is dispersed.” I try to show that Hayek’s insight on knowledge went far beyond this banality, that in general it was not clearly understood before Hayek raised the issue, and that even since then skilled scholars have often fallen into error through an inadequate recognition of the nature of the problem of dispersed knowledge. Others have more sophisticated notions, and yet retain a fundamentally hierarchical view of knowledge that is, in my view, deeply mistaken.