6 Intellectual Integrity
Epistemic acceptance bridges the divide between the intellectual and the practical.1 It is practical in that it consists in a readiness to use a commitment in inference or action—that is, a readiness to do something. It is intellectual in that its range is restricted to contexts where the agent’s ends are cognitive. Although the practical realm is broader than the moral realm, my account raises the possibility that moral norms figure in epistemic acceptability. This suggestion is strengthened by my characterization of a realm of epistemic ends. Legislating members of such a realm have obligations to one another—obligations to treat one another as free and equal members in a joint epistemic enterprise. These obligations are at least political, if not moral. In this chapter I argue that moral requirements do not just constrain, but infuse responsible disciplinary practice. They not only promote the epistemic ends of a discipline; they are partially constitutive of those ends. I begin with a sketch of what Williams calls ‘thick concepts’, concepts like ‘disreputable’ and ‘courageous’, in which evaluative and descriptive elements are fused. I argue that trustworthiness is such a concept. I then consider how it shapes inquiry. I look at familiar threats and explain how they undermine the epistemic enterprise. This sheds light on what the enterprise is and what it achieves.
Thick and Thin
The widespread conviction that the moral and the factual are disjoint seems to stem from focusing on moral predicates like ‘good’ and ‘bad’, ‘right’ and ‘wrong’, which appear to have purely evaluative contents. But, as Williams shows, many predicates are simultaneously evaluative and descriptive. They are, in his terms, thick (Williams, 1985, 140–143). To say that a political maneuver is sleazy is typically to both characterize and disparage it. Occasionally, with a tinge of irony, ‘sleazy’ can express admiration. But, evidently, its use is never value neutral. ‘Good’ and ‘red’, on the other hand, are thin. That an item instantiates the predicate ‘good’ reveals nothing about what sort of thing it is; that it instantiates ‘red’ reveals nothing about whether it is any good.
To appreciate the difference, consider the newly coined predicate ‘gred’, which applies to all and only things that are both good and red. Such a predicate, like ‘sleazy’, is both descriptive and evaluative. A child’s mitten satisfies the descriptive requirement by being red, and satisfies the evaluative requirement by being good. But a gred item’s being red has no bearing on its being good; nor has its being good any bearing on its being red. The mitten qualifies as red because of its color; it qualifies as good because of its capacity to keep a small hand warm. The mitten just happens to instantiate the two predicates; it belongs to their intersection. Because we can prize apart the descriptive and the evaluative components, ‘gred’ is not a genuinely thick term. It is simply a contraction.
The relation between the descriptive and the evaluative elements in thick terms like ‘sleazy’, ‘truthful’, and ‘trustworthy’ is tighter. Thick terms involve a fusion of descriptive and normative elements. They do not admit of the sort of factor analysis that ‘gred’ does. Although ‘sleazy’ is a descriptive predicate whose instances we readily recognize, we have no evaluatively neutral way to delineate its extension. We have no way to identify the class of sleazy political maneuvers without intimating that they merit disapproval. If epistemic terms are similar, then we should have no way to identify their extensions without intimating that they are worthy of approval or of disapproval on cognitive grounds.
In Truth and Truthfulness, Williams (2002) discusses the ways the thick concept of truthfulness depends on and diverges from its thin descriptive precursor—the concept of uttering truths. I briefly review his discussion because it illustrates how thick terms relate to their less corpulent relatives. He develops a fictional genealogy to show how the virtue of truthfulness could have emerged from the practice of uttering truths. As a first approximation, a truthful person is someone who utters or is disposed to utter truths. We can identify such people without taking a stand on the value of uttering truths, just as we can identify terse people without taking a stand on the value of brevity. The evaluative element enters the picture later, when we deem uttering or being disposed to utter truths a good thing. At the outset, there is no fusion. The descriptive and evaluative elements can easily be separated. This does not last, because the virtue of truthfulness does not consist exactly in uttering or being disposed to utter truths. It involves both more and less than that.
Obviously, we do not and ought not expect a truthful speaker to utter or be disposed to utter only truths. Everyone makes mistakes. So if anyone is to count as truthful, truthfulness cannot require uttering only truths. It might, however, require saying only what one thinks is true. That would leave room for mistakes but exclude lies. This is a step in the right direction, but it demands too little, for at least two reasons.
First, misleading is as much a violation of truthfulness as lying is. It is possible to utter only truths and, by exploiting implicatures, induce one’s audience to form a false belief. If Ben tells his editor, ‘I can’t get the paper in before Tuesday’, he gives her to believe that he can get it in on Tuesday. If he knows full well that there is no chance it will be done before three weeks from Friday, he has not been truthful, even though the sentence he uttered was true.
Second, being a virtue, truthfulness involves responsibility. We would not consider a person truthful if she were unduly careless or gullible in her belief formation. Suppose that, despite her excellent education, Joan believes everything she reads in the tabloids and indiscriminately reports her beliefs. She believes with equal conviction that space aliens landed in Detroit and that Barry Bonds took steroids. Although she uttered the true sentence ‘Barry Bonds took steroids’, we would not consider her truthful. For she was equally prepared to utter the infelicitous falsehood, ‘Space aliens landed in Detroit’. Her utterances are not credible, since she does not adequately filter out infelicitously false claims. Although she says what she believes, she lacks the virtue of truthfulness.
Nor is it enough to impose a responsible belief requirement. Truthfulness does not require uttering every truth one responsibly believes. Some truths are private. When asked about them, Williams maintains, evasions, obfuscations, and even lies are permissible. Your questioner may have no right to know whom you plan to vote for, what your salary is, or how much you weigh. If refusing to answer unduly intrusive questions is not feasible and ‘It’s none of your business!’ is too blunt, Williams contends, a truthful person need not tell the truth. Her interlocutor should recognize that such questions overstep the bounds within which truthfulness is required.
I would go further. Someone who utters and is interpreted as uttering a felicitous falsehood would normally count as truthful. A regular rider who says, ‘It will take you twenty minutes to get to Park Street by subway’, would normally be considered truthful even if the transit authority lists the travel time as eighteen minutes. At best, truthfulness requires being true enough in the context of utterance.
Truthfulness, as Williams characterizes it, is intimately related to, but does not supervene on, uttering truths. For the requirements on speaking truthfully diverge from those on uttering truths. The reason for the divergence is practical. Because a more nuanced attitude toward truths is more valuable than uttering all and only things one believes to be true, that more nuanced attitude is the one we should admire and cultivate. Truthfulness does not admit of factor analysis because, without appealing to evaluative considerations, we have no way to delineate the extension of the class of topics about which one should say only what is true enough.
If Williams is right about the relation of a thick concept to its thin descriptive precursor, thick concepts are not a mere convenience. Without such normatively loaded concepts, we could not partition the world as we do. So the idea that thick concepts are simply contractions like ‘gred’, the idea that we could make do with morally thin concepts—good, bad, right, and wrong—plus our stock of purely descriptive concepts, is incorrect. Lacking the thick concepts, we would have no way to mark out extensions we are interested in. If there are other thick epistemic concepts, we should expect them to display the same pattern. They should be anchored in, but not supervene on, a descriptive core. They should mark out extensions that we have no evaluatively neutral way to demarcate.
Trustworthiness
A critical consideration in fixing the contours of truthfulness is the idea that truthful people are trustworthy. Within the realms where truthfulness is required, when a truthful speaker says that p, his audience can take his word for it. Trustworthiness too is a thick concept. While truthfulness pertains to linguistic behavior, the scope of trustworthiness is broader. We deem an auto mechanic or a pastry chef trustworthy primarily on the basis of what she does, not on what she says. Trustworthiness is the genus of which truthfulness is a species.
Williams contends that truthfulness requires Competence and Sincerity (where the upper case indicates that these are quasi-technical terms). Competence is a general requirement, applying to all trustworthy behavior. Sincerity, however, seems restricted to symbolic behavior. Utterances, inscriptions, exemplars, and gestures can be Sincere or Insincere. But Sincerity seems not to apply to other forms of behavior. The mechanic who purposely uses inferior parts has some sort of character defect, but it is not clear that his knowingly installing a faulty gasket can be considered Insincere. Sincerity is, I suggest, a species of the generic concept of being Well-Intentioned.2 In installing the faulty gasket, the auto mechanic is not Well-Intentioned. To be trustworthy, an agent needs to be both Competent and Well-Intentioned. A Competent but Ill-Intentioned auto mechanic has the ability but lacks the incentive to properly fix the car; an Incompetent but Well-Intentioned mechanic has the incentive but lacks the ability.
How does this bear on epistemology? Because of the division of cognitive labor, epistemic agents need to depend on one another. So epistemic agents should be trustworthy. This involves having and properly using appropriate background assumptions and know-how. Having them can perhaps be construed purely cognitively, but being willing to properly use them is a matter of volition. If agents work together, or depend on each other’s actions, utterances, or products, they need to be in a position to trust one another. They need to be able to take it for granted that those they rely on are in relevant respects both Competent and Well-Intentioned. I will argue that it follows from the division of cognitive labor that understanding has an ineluctably moral dimension. My main concern is not with individual concepts, but with norms and standards that are at once epistemic and moral. Again, for convenience, I focus on science, with an occasional aside to recognize parallels in other disciplines.
Ethics in Inquiry
Section 7009 of the America COMPETES Act (Federal Register, 2009) requires that all undergraduates, graduate students, and postdocs working on projects funded by the National Science Foundation (NSF) be given ‘appropriate training and oversight in the responsible and ethical conduct of research’. Probably no one denies that scientists should conduct their research ethically;3 and few would deny that young scientists need to learn how to do so. Nevertheless, many principal investigators apparently regard the requirement as an unfunded mandate—yet another demand on their already strained budgets, time, and attention. Evidently they think that teaching their protégés to be moral scientists is something different from teaching them to be scientists. As a result, generic handbooks and websites have arisen to instruct students how to recognize, avoid, and protect themselves from the more obvious forms of scientific misconduct. Such quick fixes treat moral requirements as addenda rather than as integral to scientific inquiry. This is a mistake. I will argue that there is an ineliminably moral moment in scientific inquiry. This feature of systematic inquiry is regularly overlooked. Ignoring it skews our understanding of the scientific enterprise and its products. If I am right, then although they may not have realized it, senior scientists have been imbuing their juniors with the relevant moral values already. The point of the NSF mandate, then, is not to ask them to take on an additional onerous chore for which they are, or may think they are, unqualified, but to do explicitly and self-consciously what they had been doing tacitly and unconsciously all along. Although the bulk of this chapter focuses on science, the moral dimension I discuss is integral to every discipline. Insofar as inquirers have to depend on one another, they have moral/epistemic obligations to one another. The understanding that emerges is shaped by the recognition of those obligations.
An overarching epistemological objective of science is to develop a comprehensive, systematic, empirically grounded understanding of nature. Although it is not science’s only objective, I will call such an understanding its epistemic goal. It is common to all the sciences: biology seeks to understand living organisms; meteorology, to understand atmospheric conditions; physics, to understand matter and energy; and so on. Two obstacles get in the way: (1) Nature is enormously complicated. (2) Findings are fallible; no matter how well established a conclusion is, it still might be wrong. I will argue that in order to pursue its goal in light of these obstacles, science incorporates values that are simultaneously moral and epistemic. These values, I will urge, are not mere means; they are integral to the sort of understanding science provides and underwrite its epistemic standing.
Very roughly, the argument is this: Scientific inquiry requires collaboration. That collaboration requires trust. Since trust is unreasonable in the absence of trustworthiness, scientists need to be, and to consider one another, trustworthy. Since trustworthiness is a moral attribute, scientific inquiry has an inextricably moral dimension.
Scientific inquiry is a social undertaking. Experiments, models, findings, and theories may be fruits of individual efforts, but they are constructed and evaluated by reference to standards devised, endorsed, and enforced by a scientific community. These standards, and therefore the items they sanction, are not arbitrary. They are grounded in a shared conception of the scientific enterprise—its ends, means, liberties, and constraints. They are, that is, generated and reflectively endorsed by legislating members of a community of epistemic ends. Moral values are integral to science. They emerge from and contribute to the scientific community’s conception of its enterprise. The practices they engender, the constraints they impose, and the activities they promote figure in its success. Although the values in question may initially be merely instrumental, they become intrinsic when scientists recognize that realizing them not only promotes good science, but in part constitutes good science.
Epistemic Interdependence
Modern science accommodates itself to the complexity of nature through a division of cognitive labor. Rather than starting from scratch, each scientist builds on previously established findings, uses methods others have devised and tested, employs measuring instruments others have designed, constructed, and calibrated, and analyzes data using mathematical and statistical techniques others have validated.4 Research teams rather than solitary investigators frequently generate results. Such teams can do more than any one of their members could do alone, not only because they can cover more ground, but also because team members typically have different cognitive strengths and often have different areas of expertise. Collectively they bring to bear a broader range of talents and training than any individual scientist possesses. This is not to say (although it might well be true) that it would be impossible for an isolated individual to come up with an empirically adequate understanding of some aspect of nature.5 My claim is only that the sort of understanding modern science supplies is rooted in epistemic interdependence. Science as we know it is a cooperative endeavor.
Because it depends on a division of cognitive labor, scientific collaboration occurs in a particularly harsh epistemic climate. Collaborators need not know each other well or understand each other’s contributions. The paper reporting the sequencing of the human genome had over 250 authors, working in institutions on four continents (Venter et al., 2001). Most of the authors were in no position to vouch for one another’s intellectual character or monitor one another’s work. Whether or not they were in the same venue, many were probably at best dimly aware of what some of their coauthors contributed. Although few scientific papers have so many authors, such a lack of proximity and close personal ties among members of a research team is common. Moreover, all scientists build on the findings of, and deploy methods developed by, predecessors they do not personally know. And even when they work in close physical proximity, the intellectual distance between collaborators who differ in expertise can be vast. Statisticians often have no clear idea how bench scientists generated the raw data they are expected to analyze. Nor do the bench scientists know precisely how or why the statisticians analyze the raw data as they do. A scientist does not always know the reasons behind a conclusion a collaborator drew; and if she were told, she might not fully appreciate their force (Hardwig, 1991).
Such epistemic interdependence yields significant benefits. It extends epistemic reach: More data can be gathered, more experiments run, more matters investigated, more factors considered, more perspectives accommodated than would be possible for a single researcher working alone. It stabilizes findings: Individuals are subject to bias, inattention, carelessness, and wishful thinking, as well as to limitations in background knowledge and scientific imagination. If the result of an investigation is the product of many minds, individual foibles can cancel out. What one team member overlooks, another may notice; what one slights, another may focus on; what one never learned, another may have studied in depth. It solidifies results: Because of team members’ differences in expertise, the quality of evidence and range of interpretations available through teamwork are apt to be better than the quality and range available to an individual (Hardwig, 1991). Collectively they can advance understanding more than any one of them could do alone.
But epistemic interdependence creates vulnerability. To accept a conclusion is to be prepared to use it as a basis for cognitively serious inference and action, and to be prepared to give others to accept it as well. To the extent that one scientist does not know or understand what another did, she is in danger of accepting and basing her own investigations on unwarranted results. If she does, she may pursue fruitless lines of inquiry and overlook promising ones. If she coauthors a paper contaminated by a collaborator’s scientific misconduct, her professional reputation and livelihood are at risk. Epistemic interdependence thus requires trust. Scientists can reasonably depend on one another’s contributions only if they can be confident that those contributions are scientifically acceptable. Perhaps personal acquaintance grounds trust among a small number of investigators who work closely together. But the requisite epistemic interdependence extends across the scientific community, so personal ties are insufficient. Trust in one’s fellow scientists must be impersonal. A major function of the infrastructure of science is to secure the bases of impersonal trust. Scientific misconduct betrays that trust.
Misconduct
The most flagrant forms of scientific misconduct are fabrication of data, falsification of findings, suppression of results, and plagiarism. Fabrication and falsification misrepresent what was done; plagiarism misrepresents by whom it was done; suppression conceals what was found. The obvious moral objection to all such conduct is that it is dishonest. But what, if anything, does this have to do with its being bad science?
To fabricate data is to issue a report of an investigation that was not conducted (or not conducted for the cases reported). To falsify a finding is to report that an investigation had a result that it did not actually have. Both fabrication and falsification are lies. Thus, one might think that they subvert the epistemic goal by introducing infelicitous falsehoods into science, representing them as true. That would obviously be bad science. This explanation assumes that a contention grounded in falsification or fabrication is infelicitously false. That need not be so. Fabrication and falsification misrepresent the epistemic status of a claim; they purport that it has backing that it does not have. Such misrepresentations lack justification. But a statement that lacks justification may nevertheless be true or true enough. Fabricated data may be just what would have been found had the appropriate experiment been run; a falsified finding may be true or true enough even though no (or unacceptably little) evidence confirms it.
In fact, scientists who falsify or fabricate rarely intend to introduce infelicitous falsehoods into science. Because science is self-correcting, they could not reasonably expect to succeed, at least not in the long run. Rather, they seek to incorporate into science what they take to be a truth, but one for which they do not have, or do not yet have, evidence. This makes the problem trickier. If the conclusion they purport to have established is true, one might argue, our understanding of the phenomena under investigation is enriched rather than degraded by its acceptance. And if the perpetrator has good instincts about where the truth lies, his falsified or fabricated conclusion might well be true. If it is, what is the problem?
If truth is the end, and scientific research a mere means to it, falsifiers and fabricators with good instincts or good luck simply take shortcuts. They arrive at the end without deploying the standard means. They skip the intermediate steps of acquiring and evaluating empirical evidence. If all that matters is truth, how a truth was arrived at should make no difference. But mere truth is not what science is after. A science is not just a compilation of discrete bits of information about a topic that happens to be accurate; it is a comprehensive, systematic understanding of a range of phenomena, based on carefully acquired, scrupulously recorded, empirical evidence. The tie to evidence is crucial; for evidence supplies the grounding that underwrites epistemic standing. Conclusions backed by evidential support are conclusions scientists have reason to accept; conclusions, even true conclusions, without such support are conclusions they lack reason to accept.
This explains what makes the falsified or fabricated ‘finding’ scientifically objectionable. Although the alleged finding may be true, in the absence of adequate evidence one would be unwise to depend on it. It is not trustworthy. To accept a conclusion that lacks adequate backing is to take an epistemic risk. And to put other members of the community in a position where they are taking such a risk unawares is to fail in one’s obligations to them. That is what proffering a falsified or fabricated finding does. It constitutes a betrayal of science because scientists have undertaken a commitment to proffer only results that members of the realm of epistemic ends can, by their own collective lights, trust.
The counterpart to publishing reports of research that was not done is refraining from publishing reports of work that was done. Suppression of results thwarts science’s goal in much the way that falsification and fabrication do. Obviously a scientist is not obliged to publish the results of every failed experiment or every fruitless or frivolous line of thought. Doing so would create an information glut of no use to anyone. But if a well-designed, scrupulously conducted investigation was carried out to establish the truth of a scientifically significant hypothesis, the fact that it failed is something the scientific community deserves to know. Suppressing such a finding is in effect a lie of omission.6 It misleads the community, because suppressing the result unjustifiably implicates that the question it investigated is still open, that a particular approach to the question is still viable, or that a previously accepted finding has not been discredited. It implicitly invites investigators to pursue what is now known to be an unproductive line of inquiry or to rest their research on what has been found to be an untenable base.7 This suggests that the standard journal practice of refusing to publish null results is more problematic than it appears. The failure to disclose that a promising line of inquiry went nowhere can impede scientific progress. Had the null result of the Michelson–Morley experiment been suppressed, physics would have suffered.
Even the best science affords no guarantees. However scrupulously performed an experiment is, its result may be false. There is always some risk in accepting any finding. The standards of a scientific community constitute its consensus about what risks are worth taking. But whatever the standards, the goal would be subverted if science were permanently saddled with the errors of its forebears. Science therefore incorporates methods for rooting out received errors. One is that results that cannot be reproduced or that do not stand up under further testing and elaboration must be excised from science. But it is not enough to simply declare a previously accepted finding to be false or unwarranted. To learn as much as possible from their mistakes, scientists want to know what went wrong. Rather than simply deleting an erroneous conclusion, it is desirable to backtrack to the source of the error. This is central to the objectionability of plagiarism.
The content of a plagiarized report may be, and I suspect typically is, beyond reproach. The problem is that it is not the product of the scientist who purports to have produced it. Again, this is clearly dishonest. But what makes it bad science? The acceptability of a finding is independent of the identity of the scientist who generated it. Indeed, if the work is beyond reproach, any competent scientist could have done the research and written it up. This almost makes it look as though science should have no objection to plagiarism. Why should scientists care who did the research, so long as it satisfies their standards?
Giving credit where credit is due encourages good work. So an obvious objection to plagiarism is that failing to credit productive scientists while crediting unproductive ones threatens to drive the wrong scientists from the field and to direct research funding to lazy, inept, unscrupulous, or unproductive investigators. Plagiarism, then, is objectionable for the practical reason that it interferes with an effective incentive system by making it difficult to identify and reward those who are doing good work. This might be called the department of human resources objection to plagiarism.
Is there any epistemic reason to credit the individuals who actually did the research and to object when others try to steal their credit? That is, does the misappropriation of other people’s work itself damage the pursuit of science’s epistemic goal? An affirmative answer stems from fallibilism. No conclusion is certain. However scrupulous scientists are, however well tested their methods, and however well-honed their standards, conclusions still might be, and sometimes are, wrong. That being so, science incorporates mechanisms for detecting and correcting errors. The overwhelming majority of such errors are honest mistakes. When a mistake is discovered, one might think, the remedy is simply to excise it. Withdraw the paper and admit that there is currently no good reason to accept its conclusion. But if the research was responsibly done and satisfies current scientific standards, a question remains: what went wrong? Rather than merely deleting the error, it is desirable to track down its source. Was it due to sloppiness, bad luck, a mistaken assumption, or an unreliable method? Are the science’s standards too lax or their applications uneven? To maximally profit from mistakes that have previously passed muster, it is advisable to go back and look at the raw data, the interpretations, and the analyses to find out where the misstep occurred. The aim is not only to excise the error but to correct the assumptions, methods, practices, or standards that led to it. Scientists do not want to keep making the same sort of mistake.
Plagiarism stymies backtracking to the source of an error. The scientist who purports to have done the research did not do it. So he cannot provide information about what was done and what went wrong. Fallibilism gives rise to a demand for corrigibility. Plagiarism unduly limits corrigibility; it undermines the self-correcting character of science.
The discussion so far has shown that the dishonesty embodied in falsification, fabrication, plagiarism, and suppression of important findings is not an accidental feature of practices whose scientific objectionability lies elsewhere. Such dishonesty thwarts science’s epistemic goal. The practices in question are bad science, then, precisely because they are dishonest. And their being dishonest makes them untrustworthy. An obligation to be trustworthy thus lies at the heart of science.
Disciplines differ in the sort of epistemic interdependence they display. Historians often work alone. Their books and scholarly papers are apt to be single authored. Even so, they depend on archivists, cartographers, and archeologists; and they build on what other historians have established. And history, like every discipline, is a realm of epistemic ends, whose commitments are jointly crafted and collectively endorsed because they foster the community’s ends. So historians, like scientists, are beholden to the norms and practices of their intellectual community and can engage in conduct that betrays their obligations to that community.
Plagiarism can occur in any discipline, and regardless of field, can impede backtracking to discover the source of errors. Falsification and fabrication exhibit different profiles in different disciplines. In history and archeology, for example, fabrication often takes the form of forgery. A document or artifact is represented as having a provenance it is known not to have; and on the basis of that provenance, is claimed to be an exemplar of a period, source, or region to which it does not belong. Falsification occurs when, for example, archeologists seed a site, planting authentic artifacts in locations where they were not in fact found. Since archeological understanding involves an appreciation not just of what is found, but also of where it is found, the arrangement of artifacts in the seeded site yields potentially misleading information about the civilization under study.
History faces problems of selection. To generate an understanding of an era, a historian has to decide what events to include and what descriptions of those events to provide. Although there is considerable latitude, such choices are not unbounded. Epistemic irresponsibility can consist in lying about, omitting mention of, or over- or underemphasizing specific aspects of events. Holocaust-denying histories of World War II are irresponsible because they intentionally misrepresent history. Other intellectually irresponsible histories of the period are more circumspect. They issue no denials but simply neglect to mention the Holocaust. Yet other histories distort through their choice of grain or terminology. A history that acknowledged the Holocaust and accurately reported the number of victims would still be misleading if it neglected to mention that the victims were victims of genocide. So would one that acknowledged the genocide but failed to identify its targets. Conceivably, highly specialized histories, for example, histories of the naval war in the Pacific, could justify such an omission on grounds of irrelevance, but any general history, and any history that purports to do justice to the war in Europe, could not justifiably omit mention of or downplay the importance of the ‘final solution’. Because the Holocaust is so important, a history of the period that failed to give it its due would inevitably convey a skewed understanding.
The issue of omissions, like the issue of suppression of results, is tricky. I picked an egregious omission to make the problem vivid. But historians face delicate decisions about what to include and how to describe it. Typically there is a range of acceptable solutions. My point is not to provide a rule for settling such questions, but to emphasize that scholars face obligations not just to tell the truth, but also not to mislead. Omissions and terminological sleights of hand can mislead. How and when they do so is not always obvious. The developing norms that govern a realm of epistemic ends delineate where the lines are drawn.
As in the case of the hard sciences, in the humanities and social sciences, moral and epistemic obligations fuse. What is wrong with forging a document or seeding a site or omitting or sanitizing an important fact is not only that it is in effect a lie, but that it subverts the ends of the discipline, leading its members to accept and purvey unwarranted claims. It thus betrays the trust of the other members of the community. Insofar as the miscreant accepts the norms of the community, it also constitutes a sort of self-sabotage.
Even scrupulously done, properly documented research that exhibits no hint of overt misconduct is suspect if the investigators have a conflict of interest (Lo & Fields, 2009). Where such conflicts occur, agents have an extradisciplinary interest that threatens to deflect them from the discipline’s goal. If, for example, a scientist has a financial stake in the outcome of a study or an ideological commitment to things turning out a certain way, he may be apt to weigh the evidence differently than an unbiased investigator would. Conflicts of interest arise at the institutional level as well. Since the pharmaceutical industry has a strong financial interest in the outcomes of clinical trials, there is a legitimate concern that the trials it conducts or the reports it generates might be biased in favor of finding new drugs effective.8 Commercial research in any area has a higher-order conflict of interest. The epistemic imperative requires that all members of a community of inquiry have free and equal access to the evidence for a claim. Any member of the community should be able to challenge a finding. But commercial enterprises have trade secrets. To protect those secrets, they do not make the details of their research publicly available. Whether the claims they make about their products are epistemically acceptable is open to question. Privatization of inquiry comes at considerable epistemic cost.
A conflict of interest exists whenever there is a danger of undue influence from an extradisciplinary interest. It does not require that anyone actually be unduly influenced. So the existence of a conflict of interest is compatible with complete absence of wrongdoing on anyone’s part. An investigator with a conflict of interest may in fact weigh the evidence exactly as an unbiased investigator would. Even so, the possibility of bias undermines his credibility.
This raises the question of why conflicts of interest are objectionable. One might think that it would be enough to bar the existence of undue influence. Why should the mere risk of undue influence be a problem? Here again the answer turns on trustworthiness. Undue influence by extradisciplinary interests corrupts the discipline and deflects it from its epistemic goal. If a scientist does not know whether another scientist was unduly influenced, she does not know whether his result is trustworthy. And if she harbors doubts about its trustworthiness, she will not and should not trust it. The risk that a scientist’s or institution’s extrascientific interest deflects the research from the goal provides grounds for doubt. Consciously or unconsciously, investigators may be influenced by factors that do not withstand scientific scrutiny. Given that risk, others ought not be confident that their results satisfy the standards of the scientific community. The mere risk of undue influence by extrascientific factors thus undermines confidence. It is enough to disincline others to accept assurances that the work is sound.
Epistemic Responsibility
Scientific misconduct thwarts the epistemic goal because it undermines collaboration or corrigibility. This suggests that besides abiding by the prohibition against specific forms of misconduct, members of a community of inquiry bear additional obligations to one another. Overt misconduct of the sort examined above is not the only threat to collaboration and corrigibility.
Science is subject to epistemic requirements that constrain experimental design, data gathering, analysis and assessment. Hypotheses should be formulated sharply enough to admit of empirical testing. Experiments should be designed to yield data capable of confirming or disconfirming the hypothesis under investigation. Irrelevant and obviously misleading evidence should be discredited. Data should be painstakingly gathered and carefully recorded in a timely fashion. They should be displayed in such a way that their significance and their limitations are manifest. Analyses should be rigorous, relevant, complete, and apt. Assessments should be judicious. Research reports should be grounded in a comprehensive, fair-minded, current understanding of the topic and the methods for investigating it. They should acknowledge the assumptions made, the limitations of the approach being used, and the potential threats to validity. Previously published relevant work should be cited. Such requirements are familiar; their satisfaction yields conclusions there is reason to accept.
So far, however, there is nothing overtly moral about the requirements or the benefits of satisfying them. The items on my list seem to be purely epistemological requirements. The moral element enters when we realize that in publishing a result, a scientist issues an open invitation to the community to rely on it and assures its members that they can do so with confidence. Results that justifiably inspire confidence must be epistemically responsible. In addition to intellectual honesty, competence and conscientiousness are mandatory. In publishing a result, a scientist implicates not only that it was arrived at honestly, but also that it satisfies the epistemic standards of the community. That is, she implicates that the work is competently done. Because scientists must depend on one another, the epistemic requirements are moral requirements; if scientists fail to satisfy these epistemic requirements, they fail in their moral obligations to the community.
Carelessness, fickleness, obtuseness, and bias, then, are not just epistemological failings; in the context of inquiry, they are also moral failings. Results infected by such faults are honest mistakes. Confirmation bias leads an investigator to overlook or underrate evidence that tells against a hypothesis she favors. She is not being dishonest; she genuinely believes that her evidence supports her conclusion. But inasmuch as she can, by being more rigorous, avoid confirmation bias, her mistake is a culpable error. The same holds for carelessly taking or recording data, overstating one’s results or understating their limitations, and so on. By inviting her colleagues to accept a conclusion that by their own lights they ought not accept, a scientist does them wrong.
Fickleness is different. An intellectually fickle investigator flits from one set of commitments to another without due regard for the underlying issues. One day she is a gradualist about evolution; the next day she accepts punctuated equilibrium. Her flaw is not that she changes her mind; it is that she does so cavalierly. She changes her mind too easily. Her pronouncements are unstable. Since they are so easily abandoned, her compatriots would be unwise to accept them on her say-so. The gullible investigator has a similar flaw. He believes whatever he hears, at least so long as the information is remotely credible. Perhaps he would not go so far as to believe that space aliens landed in Detroit, but he might accept the contention that because the gene for blondness is recessive, blondes are going extinct. If he spreads the word about the imminent extinction of blondes, he is intellectually irresponsible. For the information about how recessiveness operates is readily available and ought not in this context be ignored. Insofar as these flaws could be avoided by being more scrupulous in gathering evidence and justifying inferences, the agents are culpable. They are willfully ignorant. If they present their claims as scientifically backed, they do their fellows wrong.
In agreeing to work together, people engender obligations to one another—obligations whose content is determined by the objective they are jointly pursuing. Ceteris paribus, they owe it to one another to discharge those obligations. In inquiry, the goal is understanding. So the obligations are in the first instance epistemic; they pertain to generating and sustaining the understanding that inquiry seeks. But because inquiry is a joint epistemic venture, the epistemic obligations are also moral obligations. Investigators owe it to one another to satisfy the epistemic requirements, and to make it manifest that they do so.
This pattern is not peculiar to science, or even to inquiry. Members of other collectives, such as athletic teams and orchestras, are under similar obligations to one another. In the first instance, the obligations of a quarterback are athletic; those of a bassoonist, aesthetic. But the quarterback owes it to his teammates to discharge his athletic obligations well; he is something worse than a poor athlete if he carelessly or negligently lets down his side. And the bassoonist owes it to her fellow musicians to perform well; she is something worse than a poor musician if she neglects to learn her part or is cavalier about playing in tune. The commitments team members make to one another are akin to implicit promises; because the common knowledge of those commitments creates a basis for legitimate expectations, one does the others wrong if one does not abide by them.
Arguably, in inquiry there is a significant difference in scope. The members of a research team, like the members of a football team or an orchestra, consist of a limited number of identifiable individuals who have well-defined obligations to other people they know. But in publishing findings, a scientist effectively invites the entire scientific community to join her team; since she provides assurance that her findings satisfy their epistemic standards, she is under an obligation to proffer findings that they all can be confident of. Indeed, to the extent that the rest of the population depends on the findings of science, in publishing her results, a scientist offers her assurance that everyone can have confidence in them. The same holds for scholars in other disciplines. To publish a conclusion is to make that conclusion public. It is to invite readers or auditors to accept the conclusion with the assurance that it meets the relevant standards of acceptability.
A scientific community consists of practitioners who share a discipline. They are bound together, and think of themselves as bound together, by a network of commitments that govern their professional lives. They are joint legislating members of a realm of epistemic ends. They justify particular verdicts by reference to the rules and standards of their practices, and justify those practices by reference to the ends of their science. In performing this dual justificatory function, a scientific community is largely self-legislating. As I argued above, it makes the rules, constitutes the norms, and sets the standards that constrain and guide its activities. It determines what counts as an experiment, a confirming instance, a statistically significant result, a representative sample. Its norms and standards are not arbitrary. They are designed, and continually redesigned, with an end in view—namely developing and confirming a systematic, explanatory account of the phenomena, an account that is grounded in empirical evidence and that underwrites nontrivial inferences, arguments, and perhaps actions regarding those phenomena. Acceptability derives from satisfying the interlocking network of standards science sets for itself, standards that are grounded in an evolving, collective understanding of the scientific enterprise and of the difficulties to be encountered in pursuing its goals.
Although the standards apply throughout scientific practice, their general, impersonal character comes to the fore with publication. In publishing, a scientist undertakes an obligation to be worthy of her community’s trust. A critical question is what standards a work has to meet to vindicate the open invitation. Ideally, perhaps, we’d like to demand that the research report’s content be accurate: the claims that purport to be true are in fact true; those that purport to be felicitous falsehoods are in fact felicitous. But this is both too much and too little to ask. Science is fallible; it has no way to guarantee that findings are accurate. Moreover, as we have seen, mere accuracy is not enough. The scientist needs to convey her assurance that her research, whether the result is accurate or not, satisfies current, reflectively endorsed standards. Because those standards embody the best way the discipline has devised to generate and validate understanding in its field, results that satisfy those standards are currently acceptable.9
Assurance
To understand what a scientist proffers in publishing a paper, we need to distinguish between warrant and the assurance of warrant. Warrant is a first-order relation between evidence and conclusion. A conclusion is warranted when the evidence renders it sufficiently probable (as always, measured by current standards). A conclusion could be warranted even if there was reason to doubt that it was. This might, for example, be the case if the connection between the evidence and the conclusion obtains but is not obvious. An assurance of warrant is a second-order assessment of the strength of the connection between the evidence and the conclusion. Warrant, then, is a function of the force of evidence; the assurance of warrant, a function of the weight of evidence. The assurance of warrant is what grounds confidence in first-order assessments (Adler, 2002, 251–252).
To appreciate the difference, consider the following example. Bob takes a quarter from his pocket, looks it over and sees nothing odd about it. Concluding that it is a fair coin, he judges that the probability of its coming up heads when flipped is 0.5. In light of the prior probability that an arbitrarily chosen coin is fair and his cursory inspection of the coin, his conclusion is warranted. Now suppose he flips the coin 1,000 times and records the result of each flip. Not surprisingly, the coin comes up heads just about half the time. So he again concludes that the probability of its coming up heads is 0.5. Should we consider his epistemic situation unchanged? Should we think that he is epistemically no better off for having run the test? The test supplied no reason to alter his original probability assignment. That assignment was already warranted. There is little, if any, evident epistemic gain in his first-order assessment. Still, he is plainly better off. What he gains is not warrant, but a right to be confident. After the test, he is in a position to be more confident in the probability he assigns (Adler, 2002). The test result provides assurance that his first-order assignment is correct.
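The force/weight distinction can be made vivid with a toy calculation. The following sketch is mine, not Adler’s: it models Bob’s situation with a Beta-Binomial posterior, assuming a uniform Beta(1, 1) prior over the coin’s bias and a hypothetical test in which exactly 500 of the 1,000 flips come up heads; the helper `posterior_summary` is introduced purely for illustration.

```python
# A minimal sketch (not from the text) of force vs. weight of evidence,
# modeling Bob's coin with a Beta-Binomial posterior.
from scipy.stats import beta

def posterior_summary(heads, flips, prior_a=1.0, prior_b=1.0):
    """Posterior mean and 95% credible interval for the coin's heads probability."""
    a = prior_a + heads
    b = prior_b + (flips - heads)
    dist = beta(a, b)
    return dist.mean(), dist.interval(0.95)

# Before the test: only the uniform prior (plus a cursory inspection) backs the assignment.
print(posterior_summary(heads=0, flips=0))       # (0.5, (0.025, 0.975)): wide interval, little weight
# After the hypothetical test: 500 heads in 1,000 flips.
print(posterior_summary(heads=500, flips=1000))  # (0.5, (~0.469, ~0.531)): same estimate, far more weight
```

On this way of modeling the case, the first-order assessment (the posterior mean) is 0.5 both before and after the test; what changes is how tightly the evidence constrains it. That narrowing is one natural way to cash out the added weight that entitles Bob to greater confidence.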
It is a principle of jurisprudence that justice not only must be done, it must be seen to be done. Something similar holds in science. Results not only must be warranted, they must be seen to be warranted. That requires that they be shown to be warranted. For scientists to be in a position to confidently rely on an investigator’s work, it must be manifest that the work satisfies community standards of acceptability. So rather than just issuing a press release announcing their findings, scientists publish a peer-reviewed paper that spells out (admittedly, rather tersely) their background assumptions, research design, measurements, analysis, and results. Such papers must be public and transparent. Publicity demands that results be available to the entire community; transparency demands that the reports provide sufficient detail that other members of the community can recognize why they ought to be accepted. They exemplify that the work they report satisfies the relevant standards.
I have been speaking as though the reason for making it manifest that the standards have been met is entirely other-directed. An individual scientist owes it to her colleagues to proffer only results that satisfy the standards they reflectively endorse. This is part of the story. But the satisfaction of those standards also gives the scientist herself reason for confidence in her findings. Because the standards function as a stay against carelessness, bias, and wishful thinking, they provide her with assurance that her work is worthy of her own reflective endorsement.
The scientific community sets standards whose satisfaction not only warrants conclusions, but exemplifies that they are warranted. Some requirements on acceptability—such as those concerning the size and constitution of the evidence class, the standards of statistical significance and power, the proper sort of analysis—bear directly on warrant. These are first-order requirements. Others, by making it manifest that results hold regardless of who generated them or why, pertain to the assurance of warrant. They mandate that findings be intersubjectively observable; experiments, reproducible; data, unambiguous; research reports, peer reviewed. If an item is intersubjectively observable, it does not matter who in particular observed it; any competent scientist could have done so.10 If an experiment is reproducible, the acceptability of its result does not turn on the character or intellectual endowments of the scientist who performed it; had any competent scientist done it, the result would have been the same. And if data are unambiguous, the identity of the interpreter is immaterial; all competent scientists would interpret the data in the same way.
One might think that these requirements bear only on warrant, not on assurances of warrant. If so, the fact that an experiment is reproducible affords only first-order evidence that its result is correct. Granted, if you ran the experiment a second time, you would get more first-order evidence for its conclusion. Rather than 64 data points, you would have 128. So the probability that the conclusion is correct would increase, at least slightly. But if the original experiment yielded a publishable result, it already provided enough evidence to warrant its conclusion. The additional evidence provided by the second run, although not unwelcome, would by no means be mandatory. It would not do much to increase the warrant of the already warranted conclusion. And if the original experiment can yield enough evidence to warrant the result, no more first-order evidence is needed. What does the reproducibility add?
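A rough back-of-the-envelope calculation, under an assumption I am adding for illustration (that the conclusion rests on the mean of n independent measurements with a common standard deviation sigma), shows how modest the first-order gain from a second run is:

```latex
\mathrm{SE}(n) = \frac{\sigma}{\sqrt{n}}, \qquad
\frac{\mathrm{SE}(128)}{\mathrm{SE}(64)} = \sqrt{\tfrac{64}{128}} = \tfrac{1}{\sqrt{2}} \approx 0.71
```

On this model, doubling the data from 64 to 128 points shrinks the standard error by only about 29 percent; if 64 points already sufficed to warrant the conclusion, the second run adds little first-order warrant, and reproducibility’s contribution must lie elsewhere.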
Reproducibility obviates the need to take matters on faith. Doubters can go back and run the experiment for themselves. Similar points hold for the requirements that data be unambiguous and that phenomena be intersubjectively observable. When it makes no difference who generated a result, when any competent scientist could have done so, the personal foibles or intellectual idiosyncrasies of the investigator are irrelevant to the result’s acceptability. This indifference makes science impartial.
One might worry that by rejecting deliverances that are not intersubjectively observable, experiments that are not reproducible, and interpretations that are idiosyncratic, science throws out potentially valuable information. No doubt it does. But even though an irreproducible result, a quirky interpretation, or idiosyncratic observation may in fact be correct, one would be unwise to count on it and irresponsible to give others to count on it. Similarly for findings shrouded in secrecy. If a scientist cannot or will not publicly document what she did to achieve her result, she cannot responsibly invite others to accept her assurance that they can count on it. Without such documentation, they have no reason to believe that it satisfies standards they collectively endorse.
Peer Review
I have claimed that because of their interdependence, members of the scientific community have moral obligations to one another. One might agree that they are accountable to one another but still doubt that there is anything especially moral about that accountability. Given that their work is subject to peer review, it might seem that an individual scientist’s motivation for conforming to community standards is like a student’s motivation for not cheating when a proctor is looking over her shoulder or a driver’s for not speeding when a traffic cop is pointing a radar gun his way. The chances of getting caught are high enough and the disutility of getting caught great enough that enlightened self-interest provides sufficient reason to obey the rules. If so, there is no need to invoke distinctively moral obligations. But science is not Big Brother. It does not monitor every experiment to ensure that its standards are met. Although the scientific community collectively sets standards for how investigations are to be carried out, how evidence should be gathered, presented, and analyzed, and how research should be reported, peer review can do relatively little to enforce the standards.
To be sure, peer review provides a measure of quality control; but its scope and therefore its powers are limited. Its immediate objects are documents—research reports and grant proposals. Because reviewers are knowledgeable about their field, peer review is a good way to assess the originality, importance, and epistemic adequacy of the research those documents describe. It can reveal that, for example, an experiment did not adequately control for factors it ought to have controlled for or measure magnitudes it ought to have measured, since the authors spell out what is, or is to be, controlled for and measured. It can pinpoint gaps in presuppositions, or weaknesses in experimental design, since the authors describe their presuppositions and experimental design. It can reveal that the paper overstates or understates its findings, since the authors state exactly what they take the findings to be. It can tell whether the authors have taken appropriate note of previous relevant work.11 In doing all these things, peer review assumes that the research is accurately reflected in the documents. But peer review is typically ill-equipped to ensure that the research reported was actually done, that the results listed were actually found, or that the alleged authors actually wrote the paper. The most it can do to uncover fabrication or falsification is to assess plausibility, either of a single paper or of a body of work.12 The most it can do to uncover plagiarism is to rely on reviewers who know the literature well enough to recognize the original source. For obvious reasons, peer review is in no position to assess suppressed results. My point is not to disparage peer review. It is to note that the function of peer review is neither to check for nor to preempt the need for scientific integrity; it is rather to assess the quality of scientific work in a context that presupposes scientific integrity.
Fostering Integrity
If the requirements of science were mandates imposed from without, the limitations on peer review would open the door to a free rider problem. A scientist might recognize that although the moral/epistemic requirements should generally be satisfied, science would not totter if a few investigators cut corners, any more than the subway system is jeopardized if a few riders fail to pay their fares. So, she might think, in neither case does the threat to the institution provide a reason why she in particular should conform to the rules, provided that most people do. This would leave room for it to be rational to defect. But the situation is different for members of a realm of epistemic ends. Because they make the rules that bind them, they cannot violate those rules without violating their own principles. The problem is not just that a miscreant scientist could not justify her behavior to other members of the community. It is that, having reflectively endorsed the rules she is flouting, she could not justify her behavior to herself. This does not ensure that there will be no free riders, but it does ensure that free riding by legislating members of a realm of ends is not rational. In a realm of ends, a free rider’s behavior is inconsistent with her own principles.
Publicity and transparency foster scientific integrity. Because it is common knowledge that only work that at least ostensibly satisfies the scientific community’s requirements will be accorded scientific standing, practitioners have a strong incentive to do only work that at least ostensibly satisfies the requirements. Typically the easiest and most straightforward way to at least ostensibly satisfy the requirements is to actually satisfy them. So scientists have an incentive to produce work that satisfies the requirements. This hardly handles every case. But it may go some way toward explaining why scientific misconduct is not rampant.
Another incentive is this: Scientists recognize that the requirements are not arbitrary. They are instrumentally valuable because their satisfaction is conducive to realizing the epistemic goal. Inasmuch as scientists want to realize the goal, they have reason to employ the requisite means. As Williams notes, Watson and Crick wanted to discover the structure of DNA; they did not want merely to be credited with the discovery. No doubt they were personally ambitious. But “their goal [was] fame, above all fame and prestige in the scientific community, and that [would] come from the recognition that they have done good science” (Williams, 2002, 142). To realize their ambition, they had to satisfy biology’s standards.
An even deeper incentive lies in the recognition that satisfying the requirements is not merely conducive to the goal; it is at least partially constitutive of the goal. Being comprehensive, impartial, evidentially grounded, and reflectively endorsed are properties of the sort of understanding that science seeks. Such an understanding is subject to, and holds up under, critical scrutiny, where the grounds for criticism and bases for assessment have been tempered through a succession of increasingly refined and rigorous processes of self-correction. Moreover, since that understanding is open to extension, correction, and refinement, a reflectively endorsed finding is not considered fixed or final, but is an acceptable springboard for further inquiry. Scientists have an incentive to satisfy the requirements, then, because the satisfaction of those requirements is built into scientific understanding. That is, in Kantian terms, they have an incentive to conduct their research in ways that respect, rather than merely conform to, the requirements.
Nevertheless, the mutual trust that I have argued sustains the scientific community may seem fragile, and relying on it may seem naive. Even if, qua scientists, they have reason to uphold the standards, scientists wear other hats as well. They may be entrepreneurs with a financial stake in the outcome of their experiments; they may be parents who need to feed their families; they may be ambitious neurotics whose sense of self depends on the (earned or unearned) respect of their peers. One might wonder, then, whether the incentives to cut corners outweigh incentives to behave with integrity.
If talented, amoral scientists emerged full-grown from the head of Zeus, with all the factual knowledge and skill needed to ply their trade, the incentives I identified might well be too weak. Normally, scientists do not attempt to replicate one another’s results; and when they do, an irreplicable result is apt to be construed as an honest mistake. So although there is enormous disutility in being found guilty of scientific misconduct, if his research reports are plausible, a culpable scientist can reasonably doubt that he will get caught. The worst he need realistically fear is that he will be found to be mistaken. An adroit, amoral scientist might readily recognize this and, being devoid of scruples, consider the risk worth taking.
Even if such a scientist granted that in general the best means to achieving the goal is to satisfy the requirements, this would not obviously provide him with an incentive to do so if he thought highly enough of his own intuitions. He might rationalize his falsifying or fabricating on the ground that although in most cases the requirements should be satisfied, his particular case constituted an exception. He has a good enough feel for the subject that he can just tell how the experiments would come out. I conceded earlier that if truth is the end and the scientific method a mere means, it may well be that a scientist with good intuitions (or luck) could achieve the end without employing the standard means. An amoral scientist may believe that the only disincentive to cutting corners is the possibility that his conclusion will be found to be false. But, he might think, his conclusion probably is true, and if it is found to be false, it is apt to be consigned to the realm of honest mistakes. Again, it might seem, the personal risk he takes is slight.
Science Education
Luckily scientists do not emerge full-grown from the head of Zeus. People become scientists by studying science. And the moral values I have discussed are instilled in the course of their education. Even if a brilliant, amoral senior scientist could readily contrive a plausible, fraudulent result, a novice cannot. For a beginning student, the best way—very likely the only way—to ostensibly satisfy the requirements is to actually satisfy them. A novice is apt to get caught if he fakes, fabricates, or plagiarizes his work. He is also apt to get caught if he is careless, inattentive, or lax. Learning to do scientific research is learning to do honest, truthful, careful research. It is not learning to do research, with honesty, truthfulness, and conscientiousness tacked on as afterthoughts. So, even if he starts out as amoral as his Zeus-born senior counterpart, the novice has a strong, purely self-interested incentive to conform his behavior to the requirements. He is simply unable to cut corners and produce plausible results. What he learns in learning to be a scientist is to satisfy the requirements. He does not learn that it is normally a good idea to conduct the experiments one purports to have conducted. He learns that scientific research consists in conducting those experiments.
As his education progresses he develops the habit of conforming to the requirements; doing so becomes second nature to him. In this respect, his moral education as a scientist is Aristotelian. He develops scientific integrity by behaving with integrity, whatever his initial motives for doing so. As he learns more about the subject he is studying, he gains respect for the requirements. He comes to recognize that their satisfaction is conducive to the epistemic goal, and eventually, that it is in part constitutive of the epistemic goal. That is, he learns what scientific understanding is. So the reason that the trustworthiness required of scientists is not excessively fragile, and that the expectation that scientists will be on the whole trustworthy is not naive, is that the grounds for that trust are so deeply embedded in science that they cannot help but be inculcated in scientific education.
This suggests that the education future scientists standardly receive already fosters scientific integrity. It invites the idea that nothing more need be done. Sadly, that is too optimistic. Some scientists are guilty of misconduct. Some students have no clear sense of where the boundaries are drawn. How can science education foster integrity?
When something has become second nature, the agent automatically and unthinkingly does it. So most scientists probably do not even consider flouting the requirements. This is both good and bad. It is good in that they simply do the right thing; it is bad in that they may fail ever to consider why it is the right thing to do. If they do not know, for example, why data must be recorded in a timely fashion, or what exactly is problematic about conflicts of interest, they may be ill equipped to withstand temptation, to adjudicate competing claims when all the desiderata cannot be simultaneously satisfied, or to recognize what their obligations are when they detect misconduct. By highlighting aspects of a practice that had been tacit, scientists put themselves in a position to articulate the relevant factors, discover their basis and their strength, subject them to scrutiny, and consider whether they are worthy of reflective endorsement. And they put themselves in a position to deepen science education by sensitizing students to often unrecognized aspects of scientific practice that are critical to its success. So, in addition to forestalling explicit scientific misconduct, the NSF requirement may prompt a reorientation to the practice of scientific inquiry that can augment practitioners’ understanding of the discipline and its results.
Conclusion
Throughout this chapter, I have emphasized the obligations that members of the scientific community bear to one another and the ways these obligations infuse the epistemic goals of science. Because scientists need to trust one another, they have devised institutional arrangements to secure the bases of trust. In publishing a paper, a scientist issues an open invitation to the community to accept her findings and use them as a springboard for further investigation. She gives her colleagues her assurance that the standards they collectively endorse have been satisfied.
My discussion may suggest that science is insular. But the fruits of science are used for more than further research, and they are used by nonscientists as well as scientists. We take medicines, drive cars, eat genetically engineered vegetables. We sit on panels and juries where we hear scientists testify as expert witnesses (see Brewer, 1998). In doing so, we rely on the deliverances of science. The open invitation the scientist issues thus is wide open; it is issued to the world at large, not just to the scientific community. In publishing a finding, speaking at a conference, or testifying in court, she conveys her assurance that anyone—layperson or expert—can rely on her results. The point is a delicate one. She does not assure that the results are true, only that they are scientifically well founded.
As I have indicated, these points generalize. Intellectual integrity is required in every discipline. Investigators are expected to be scrupulous and conscientious. They should adhere to the methods, canons of evidence, and norms of acceptability reflectively endorsed by their discipline; or they should publicly articulate and justify the reasons for challenging them. The methods, canons, norms, and epistemically acceptable reasons for challenging them can vary from one discipline to another. But their role in the disciplines is constant. Although the standards to be satisfied are set by the community of inquiry, the moral obligation to behave with integrity is utterly general.
Notes
1. This chapter was made possible in part through the support of a grant from the Spencer Foundation and in part through a grant from the Intellectual Humility Project of St. Louis University and the John Templeton Foundation. The opinions expressed here are those of the author and do not necessarily reflect the views of the Intellectual Humility Project, the John Templeton Foundation, or the Spencer Foundation.
2. As noted, the upper-case letters indicate that, as Williams uses them, ‘Sincerity’ and ‘Competence’ are quasi-technical terms that diverge slightly from their standard meanings. I follow his lead. The main deviation of ‘Well-Intentioned’ as I use the term is that it is role specific. An agent can be a Well-Intentioned auto mechanic or informant, while being Ill-Intentioned when it comes to paying all the taxes he owes.
3. Bernard Williams distinguishes between the ethical and the moral. Ethics is a broader category, concerned with the question of how we should live. Morality is concerned specifically with issues pertaining to rights, duties, and obligations. If we follow Williams, what the act calls ‘the ethical conduct of research’ is more properly called the ‘moral conduct of research’. For my purposes, it does not matter which term we use.
4. Kuhn (1970) denies that science is cumulative. In scientific revolutions, much that has been accepted is called into question. This does not undermine my contention that scientists build on one another’s findings. For even a revolutionary scientist draws on evidence, methods, and instruments that others contrived.
5. Although I doubt that this is possible, my point here is weaker. Science as we know it could not be so generated.
6. There are gray areas here. It may not be obvious whether a failed investigation rises to a level of significance that makes it worth publishing. As a result, journals may be more reluctant to publish failed studies than successful ones. (I thank Jonathan Adler for this point.) But there are also cases where scientists and the institutions they work for suppress failed studies because they have a vested interest in a research program that is undermined by the failure, or because they consider the failure a fluke, or both.
7. It may of course do more. When a pharmaceutical company suppresses results that show a drug to be dangerous or ineffective, it illicitly conveys assurance that the drug is safe and effective or at least not known to be unsafe or ineffective. In emphasizing the epistemological consequences for the scientific community, I do not mean to deny that there are other serious consequences, not only for the scientific community but also for the public at large.
8. This is not just an ‘in principle’ worry. One study showed that pharmaceutical-industry-funded trials were far more likely to report positive outcomes (85.4%) than either government-funded trials (50%) or nonprofit-funded trials (71%). This does not demonstrate that the industry-funded trials were biased. Possibly pharmaceutical companies do not run clinical trials unless they are pretty sure that the drugs being tested are effective. But in light of the industry’s financial interest in the outcome, these statistics give one pause. See Bourgeois, Murthy, and Mandl (2010).
9. It might seem that at least the methodology section of an acceptable research paper needs to be true. Even if scientists cannot guarantee the truth of their results, they can, and therefore should, provide a true description of how the result was arrived at. But a true history of the path to the conclusion is not what the methodology section is supposed to provide. The history of investigation leading to a particular result is typically a circuitous path, with numerous false starts, missteps, and dead ends. How they actually arrived at the result—the precise meandering path they took—is of no interest. The question the methodology section is supposed to answer is not ‘How did we come by our result?’ but ‘Why should you accept our result?’ Much of the actual history of the investigation is irrelevant to the answer to the latter question. So scientists provide a sanitized, streamlined description of a method that is grounded in and justified by the investigation they carried out and that, if followed, would yield their result. Rather than a true description of what they did, it is a rational reconstruction. To vindicate the open invitation requires supplying assurance not that the conclusion is true but that, in light of current standards, the evidence warrants it. The methodology section is written to show how the evidence conveys warrant.
10. Competence is field based. The idea is that anyone with the appropriate knowledge and training in a specific branch of science could have done the work. It is not that a competent geneticist could have performed or interpreted an experiment in plasma physics, or even that the bench scientist could do the statistical analysis provided by another member of his team.
11. I thank Jonathan Adler for this point.
12. For example, Jan Hendrik Schön’s misconduct was discovered when it was noticed that supposedly distinct experiments were all reported to have identical background noise. (See Goodstein, 2010, 98–100.)