CHAPTER 8

Cheating as an Infection

How We Catch the Dishonesty Germ

I spend a lot of my time giving talks around the world about the effects of irrational behavior. So naturally, I’m a very frequent flyer. One typical itinerary included flying from my home in North Carolina to New York City, then on to São Paulo, Brazil; Bogotá, Colombia; Zagreb, Croatia; San Diego, California; and back to North Carolina. A few days later I flew to Austin, Texas; New York City; Istanbul, Turkey; Camden, Maine; and finally (exhausted) back home. In the process of accumulating all those miles, I’ve sustained an endless number of insults and injuries while grinding through security checkpoints and attempting to retrieve lost baggage. But those pains have been nothing compared to the pain of getting sick while traveling, and I am always trying to minimize my chances of falling ill.

On one particular transatlantic flight, while I was preparing a talk to give the next day on conflicts of interest, my neighbor seemed to have a bad cold. Maybe it was his sickness, my fear of catching something in general, sleep deprivation, or just the random and amusing nature of free associations that made me wonder about the similarity between the germs my seatmate and I were passing back and forth and the recent spread of corporate dishonesty.

As I’ve mentioned, the collapse of Enron piqued my interest in the phenomenon of corporate cheating—and my interest continued to grow following the wave of scandals at Kmart, WorldCom, Tyco, Halliburton, Bristol-Myers Squibb, Freddie Mac, Fannie Mae, the financial crisis of 2008, and, of course, Bernard L. Madoff Investment Securities. From the sidelines, it seemed that the frequency of financial scandals was increasing. Was this due to improvements in the detection of dishonest and illegal behavior? Was it due to a deteriorating moral compass and an actual increase in dishonesty? Or was there also an infectious element to dishonesty that was getting a stronger hold on the corporate world?

Meanwhile, as my sniffling neighbor’s pile of used tissues grew, I began wondering whether someone could become infected with an “immorality bug.” If there was a real increase in societal dishonesty, could it be spreading like an infection, virus, or communicable bacteria, transmitted through mere observation or direct contact? Might there be a connection between this notion of infection and the continually unfolding story of deception and dishonesty that we have increasingly seen all around us? And if there were such a connection, would it be possible to detect such a “virus” early and prevent it from wreaking havoc?

To me, this was an intriguing possibility. Once I got home, I started reading up on bacteria, and I learned that we have innumerable bacteria in, on, and around our bodies. I also learned that as long as we have only a limited amount of the harmful bacteria, we manage rather well. But problems tend to arise when the number of bacteria becomes so great that it disturbs our natural balance or when a particularly bad strain of bacteria makes it through our bodies’ defenses.

To be fair, I am hardly the first to think of this connection. In the eighteenth and nineteenth centuries, prison reformers believed that criminals, like the ill, should be kept separated and in well-ventilated places in order to avoid contagion. Of course, I didn’t take the analogy between the spread of dishonesty and diseases as literally as my predecessors. Some sort of airborne miasma probably won’t transform people into criminals. But at the risk of overstretching the metaphor, I thought that the natural balance of social honesty could be upset, too, if we are put into close proximity to someone who is cheating. Perhaps observing dishonesty in people who are close to us might be more “infectious” than observing the same level of dishonesty in people who aren’t so close or influential in our lives. (Consider, for example, the catchphrase “I learned it by watching you” from the antidrug campaign of the 1980s: the ad warned that “Parents who use drugs have children who use drugs.”)

Keeping with the infection metaphor, I wondered about the intensity of exposure to cheating and how much dishonest behavior it might take to tilt the scale of our own actions. If we see a colleague walking out of the office supply room with a handful of pens, for example, do we immediately start thinking that it’s all right to follow in his footsteps and grab some office supplies ourselves? I suspect that this is not the case. Instead, much like our relationship with bacteria, there might be a slower and more subtle process of accretion: perhaps when we see someone cheat, a microscopic impression is left with us and we become ever so slightly more corrupt. Then, the next time we witness unethical behavior, our own morality erodes further, and we become more and more compromised as the number of immoral “germs” to which we are exposed increases.

A FEW YEARS ago I purchased a vending machine, thinking it would be an interesting tool for running experiments related to pricing and discounts. For a few weeks, Nina Mazar and I used it to see what would happen if we gave people a probabilistic discount instead of a fixed discount. Translated, that means that we set up the machine so that some candy slots were marked with a 30 percent discount off the regular price of $1, while other slots gave users a 70 percent chance of paying the full price of $1.00 and a 30 percent chance of getting all their money back (and therefore paying nothing). In case you are interested in the results of this experiment, we almost tripled sales by probabilistically giving people back their money. This probabilistic discounting is a story for another time, but the idea of people getting their money back gave us an idea for testing another path for cheating.
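Before moving on, it is worth working out the arithmetic implied by that setup: on average, the two pricing schemes cost the buyer exactly the same amount, and only the way the discount is experienced differs.

Fixed discount: price = $1.00 × (1 − 0.30) = $0.70 per candy
Probabilistic discount: expected price = 0.70 × $1.00 + 0.30 × $0.00 = $0.70 per candy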

One morning, I had the machine moved near a classroom building at MIT and set the internal price of the machine to zero for each of the candies. On the outside, each candy allegedly cost 75 cents. But the moment students shelled out three quarters and made their selection, the machine served both the candy and the money. We also put a prominent sign on the machine with a number to call if the machine malfunctioned.

A research assistant sat within eyeshot of the machine and pretended to work on her laptop. But instead she recorded what people did when confronted with the surprise of free candy. After doing this for a while, she observed two types of behavior. First, people took approximately three pieces of candy. When they got their first candy together with their payment, most people checked to see whether it would happen again (which, of course, it did). And then many people decided to go for it a third time. But no one tried more often than that. People undoubtedly remembered a time when a vending machine ate their money without dispensing anything, so they probably felt as though this generous machine was evening out their vending-machine karma.

We also found that more than half of the people looked around for a friend, and when they saw someone they knew, they invited their friend over to partake in the sugar-laden boon. Of course, this was just an observational study, but it led me to suspect that when we do something questionable, the act of inviting our friends to join in can help us justify our own questionable behavior. After all, if our friends cross the ethical line with us, won’t that make our action seem more socially acceptable in our own eyes? Going to such lengths to justify our bad behavior might seem over the top, but we often take comfort when our actions fall in line with the social norms of those around us.

Infectious Cheating in Class

After my experience with the vending machine, I started observing the infectious nature of cheating in other places as well—including in my own classes. At the start of the semester a few years ago, I asked the five hundred undergraduate students in my behavioral economics class how many of them believed that they could listen carefully in class while using their computers for non-class-related activities (Facebook, Internet, e-mail, and so on). Thankfully, most of them indicated that they couldn’t really multitask very well (which is true). I then asked how many of them had enough self-control to avoid using their laptop for non-class-related activities if it was open in front of them. Almost no one raised a hand.

At that point I was torn between prohibiting laptops in the classroom (they are, of course, useful for taking notes) and allowing them but adding some intervention to help the students fight their lack of self-control. Being an optimist, I asked the students to raise their right hands and repeat after me, “I will never, never, never use my computer in this course for anything that is not class-related. I will not read or send e-mail; I will not use Facebook or other social networks; and I will not use the Internet to explore any non-class-related material during class time.”

The students repeated these words after me, and I was rather pleased with myself—for a while.

From time to time I show videos in class both to illustrate a point and to give the students a change of pace and a shift in attention. I usually take this time to walk to the back of the class and watch the videos with the students from there. Of course, standing at the back of the class also gives me a direct line of sight to the screens of the students’ laptops. During the first few weeks of the semester their screens shone only with class-related material. But as the semester drew on—like mushrooms after the rain—I noticed that every week more and more of the screens were open to very familiar but non-class-related websites, and that Facebook and e-mail programs were often front and center.

In retrospect, I think that the darkness that accompanied the videos was one of the culprits in the deterioration of the students’ promise. Once the class was in darkness and one student used his laptop for a non-class-related activity, even for just one minute, many of the other students, not just me, could see what he was doing. That most likely led more students to follow the same pattern of misbehavior. As I discovered, the honesty pledge was helpful in the beginning, but ultimately it was no match for the power of the emerging social norm that came from observing the misbehavior of others.*

One Bad Apple

My observations of on-campus cheating and my 30,000-foot musings about social infection were, of course, just speculations. To acquire a more informed view of the infectious nature of cheating, Francesca Gino, Shahar Ayal (a professor at the Interdisciplinary Center in Israel), and I decided to set up a few experiments at Carnegie Mellon University, where Francesca was visiting at the time. We set up the matrix task in the same general way I described earlier (although we used an easier version of the task), but with a few important differences. The first was that along with the worksheets containing the matrices, the experimenter handed out a manila envelope containing $10 worth of cash (eight $1 bills and four half-dollar coins) to each participant. This change in payment procedure meant that at the end of the experiment, the participants paid themselves and left behind their unearned cash.

In the control condition, in which there was no opportunity for cheating, a student who solved seven questions in the allotted time counted how many problems she solved correctly, withdrew the appropriate amount of money from the manila envelope, and placed the money in her wallet. Then the participant handed the worksheet and envelope with the unearned cash back to the experimenter, who checked the worksheet, counted the remaining money in the envelope, and sent the student away with her earnings. So far, so good.

In the shredder condition, the instructions were a bit different. In this condition the experimenter told the participants, “After you count your answers, head over to the shredder at the back of the room, shred your questionnaire, then walk back to your seat and take the amount of money you have earned from the manila envelope. After that, you are free to leave. On your way out, toss the envelope with the unearned money into the box by the door.” Then she told the participants to start on the test and began reading a thick book (to make it clear that no one was watching). After the five minutes were over, the experimenter announced that the time was up. The participants put down their pencils, counted the number of their correct answers, shredded their worksheets, walked back to their seats, paid themselves, and on their way out tossed their envelopes containing the leftover money into the box. Not surprisingly, we found that participants in the shredder condition claimed to have solved more matrices than those in the control condition.

These two conditions created the starting point from which we could test what we really wanted to look at: the social component of cheating. Next, we took the shredder condition (in which cheating was possible) and added a social element to it. What would happen if our participants could observe someone else—a Madoff in the making—cheating egregiously? Would it alter their level of cheating?

Imagine that you are a participant in our so-called Madoff condition. You’re seated at a desk, and the experimenter gives you and your fellow participants the instructions. “You may begin!” she announces. You dive into the problem set, trying to solve as many matrices as possible to maximize your earnings. About sixty seconds pass, and you’re still on the first question. The clock is ticking.

“I’ve finished!” a tall, skinny, blond-haired guy says as he stands up and looks at the experimenter. “What should I do now?”

“Impossible,” you think. “I haven’t even solved the first matrix!” You and everyone else stare at him in disbelief. Obviously, he’s cheated. Nobody could have completed all twenty matrices in less than sixty seconds.

“Go shred your worksheet,” the experimenter tells him. The guy walks to the back of the room, shreds his worksheet, and then says, “I solved everything, so my envelope for the extra money is empty. What should I do with it?”

“If you don’t have money to return,” the experimenter replies, unfazed, “put the empty envelope in the box, and you are free to go.” The student thanks her, waves good-bye to everyone, and leaves the room smiling, having pocketed the entire amount. Having observed this episode, how do you react? Do you become outraged that the guy cheated and got away with it? Do you change your own moral behavior? Do you cheat less? More?

It may make you feel slightly better to know that the fellow who cheated so outrageously was an acting student named David, whom we hired to play this role. We wanted to see if observing David’s outrageous behavior would cause the real participants to follow his example, catching the “immorality virus,” so to speak, and start cheating more themselves.

Here’s what we found. In the Madoff condition, our participants claimed to have solved an average of fifteen out of twenty matrices, an additional eight matrices beyond the control condition, and an additional three matrices beyond the shredder condition. In short, those in the Madoff condition paid themselves for roughly double the number of answers they actually got right.

Here’s a quick summary:

Average number of matrices claimed as solved (out of 20):
Control (no opportunity to cheat): 7
Shredder: 12
Madoff: 15

THOSE RESULTS, THOUGH interesting, still don’t tell us why the participants in the Madoff condition were cheating more. Given David’s performance, participants could have made a quick calculation and said to themselves, “If he can cheat and get away with it, it must mean that I can do the same without any fear of getting caught.” If this were the case, David’s action would have changed participants’ cost-benefit analysis by clearly demonstrating that in this experiment, they could cheat and get away with it. (This is the SMORC perspective that we described in chapter 1, “Testing the Simple Model of Rational Crime.”)

A very different possibility is that David’s actions somehow signaled to the other participants in the room that this type of behavior was socially acceptable, or at least possible, among their peers. In many areas of life, we look to others to learn what behaviors are appropriate and inappropriate. Dishonesty may very well be one of the cases where the social norms that define acceptable behavior are not very clear, and the behavior of others—David, in this case—can shape our ideas about what’s right and wrong. From this perspective, the increased cheating we observed in the Madoff condition could be due not to a rational cost-benefit analysis, but rather to new information and mental revision of what is acceptable within the moral boundaries.

To examine which of the two possibilities better explains the increased cheating in the Madoff condition, we set up another experiment, with a different type of social-moral information. In the new setup, we wanted to see whether removing any concern about being caught, but without providing a live example of cheating, would also cause participants to cheat more. We got David to work for us again, but this time he interjected a question as the experimenter was wrapping up the instructions. “Excuse me,” he said to the experimenter in a loud voice. “Given these instructions, can’t I just say I solved everything and walk away with all the cash? Is this okay?” After pausing for a few seconds, the experimenter answered, “You can do whatever you want.” For obvious reasons, we called this the question condition. Upon hearing this exchange, participants quickly understood that in this experiment they could cheat and get away with it. If you were a participant, would this understanding encourage you to cheat more? Would you conduct a quick cost-benefit analysis and figure that you could walk away with some unearned dough? After all, you heard the experimenter say, “Do whatever you want,” didn’t you?

Now let’s stop and consider how this version of the experiment can help us understand what happened in the Madoff condition. In the Madoff condition the participants witnessed a live example of cheating, which gave them two types of information: from a cost-benefit perspective, watching David walk out with all the money showed them that in this experiment there were no negative consequences to cheating. At the same time, David’s action provided a social cue that people just like them seemed to be cheating in this experiment. Because the Madoff condition included both elements, we couldn’t tell if the increased cheating that followed was due to a reevaluation of the cost-benefit analysis, to the social cue, or to both.

This is where the question condition comes in handy. In this condition, only the first element (cost-benefit perspective) was present. When David asked the question and the experimenter confirmed that cheating was not only possible but also without a consequence, it became clear to the participants that cheating in this setup had no downside. And most important, the question condition changed the participants’ understanding of the consequence without giving them a live example and social cue of someone from their social group who cheated. If the amount of cheating in the question condition were the same as in the Madoff condition, we would conclude that what caused the increased level of cheating in both conditions was most likely the information that there was no consequence to cheating. On the other hand, if the amount of cheating in the question condition were much lower than in the Madoff condition, we would conclude that what caused the extra-high level of cheating in the Madoff condition was the social signal—the realization that people from the same social group find it acceptable to cheat in this situation.

What do you think happened? In the question condition, our participants claimed to have solved an average of ten matrices—about three more than in the control condition (which means they did cheat), but about two fewer than in the shredder condition and five fewer than in the Madoff condition. After the participants heard the experimenter tell David that he could do whatever he wanted, cheating actually decreased relative to the shredder condition. That was the opposite of what would have happened if our participants had engaged solely in a rational cost-benefit analysis. Moreover, this result suggests that when we become aware of the possibility of immoral behavior, we reflect on our own morality (similar to the Ten Commandments and the honor code experiments in chapter 2, “Fun with the Fudge Factor”). And as a consequence, we behave more honestly.

A Fashion Statement

Although those results were promising, we still wanted to get more direct support and evidence for the idea that cheating might be socially contagious. So we decided to go into the fashion business. Well, sort of.

The structure of our next experiment was the same as in the Madoff condition: our actor stood up a few seconds into the experiment and announced that he had solved everything and so forth. But this time there was one fashion-related difference: the actor wore a University of Pittsburgh sweatshirt.

Let me explain. Pittsburgh has two world-class universities, the University of Pittsburgh (UPitt) and Carnegie Mellon University (CMU). Like many institutions of higher learning that are in close proximity, these two have a long-standing rivalry. This competitive spirit was just what we needed to further test our cheating-as-a-social-contagion hypothesis.

We conducted all of these experiments at Carnegie Mellon University, and all our participants were Carnegie Mellon students. In the basic Madoff condition, David had worn just a plain T-shirt and jeans and had therefore been assumed to be a Carnegie Mellon student, just like all the other participants. But in our new condition, which we named the “outsider-Madoff condition,” David wore a blue-and-gold UPitt sweatshirt. This signaled to the other students that he was an outsider—a UPitt student—and not part of their social group; in fact, he belonged to a rival group.

The logic of this condition was similar to the logic of the question condition. We reasoned that if the increased cheating we observed in the Madoff condition was due to the realization that if David could cheat and get away with it, so could the other participants, then it would not matter whether David was dressed as a CMU or a UPitt student. After all, the information that there were no negative consequences to egregious cheating was the same regardless of his outfit. On the other hand, if the increase in cheating in the Madoff condition was due to an emerging social norm that revealed to our participants that cheating was acceptable in their social group, this influence would operate only when our actor was part of their in-group (a Carnegie Mellon student) and not when he was a member of another, rival group (a UPitt student). The crucial element in this design, therefore, was the social link connecting David to the other participants: when he was dressed in a UPitt sweatshirt, would the CMU students continue to play copycat, or would they resist his influence?

To recap the results so far, here’s what we saw: When cheating was possible in the shredder condition but not publicized by David, students claimed to have solved, on average, twelve matrices—five more than they did in the control condition. When David stood up wearing regular CMU attire in the Madoff condition, the participants claimed to have solved about fifteen matrices. When David asked a question about the possibility of cheating and was assured that it was possible, participants claimed to have solved only ten matrices. And finally, in the outsider-Madoff condition (when David wore a UPitt sweatshirt), the students observing him cheat claimed to have solved only nine matrices. They still cheated relative to the control condition (by about two matrices), but they cheated by about six fewer matrices than when David was assumed to be a part of their CMU social group.

Here’s how our results looked:

Average number of matrices claimed as solved (out of 20):
Control (no opportunity to cheat): 7
Shredder: 12
Madoff (David in a plain T-shirt and jeans, assumed to be a CMU student): 15
Question (David asks whether he can cheat and walk away with the cash): 10
Outsider-Madoff (David in a UPitt sweatshirt): 9

Together, these results show not only that cheating is common but that it is infectious and can be increased by observing the bad behavior of others around us. Specifically, it seems that the social forces around us work in two different ways: When the cheater is part of our social group, we identify with that person and, as a consequence, feel that cheating is more socially acceptable. But when the person cheating is an outsider, it is harder to justify our misbehavior, and we become more ethical out of a desire to distance ourselves from that immoral person and from that other (much less moral) out-group.

More generally, these results show how crucial other people are in defining acceptable boundaries for our own behavior, including cheating. As long as we see other members of our own social groups behaving in ways that are outside the acceptable range, it’s likely that we too will recalibrate our internal moral compass and adopt their behavior as a model for our own. And if the member of our in-group happens to be an authority figure—a parent, boss, teacher, or someone else we respect—chances are even higher that we’ll be dragged along.

In with the In-Crowd

It’s one thing to get riled up about a bunch of college students cheating their university out of a few dollars (although even this cheating accumulates quickly); it’s quite another when cheating is institutionalized on a larger scale. When a few insiders deviate from the norm, they infect those around them, who in turn infect those around them, and so on—which is what I suspect occurred at Enron in 2001, on Wall Street leading up to 2008, and in many other cases.

One can easily imagine the following scenario: A well-known banker named Bob at Giantbank engages in shady dealings—overpricing some financial products, delaying reporting losses until the next year, and so on—and in the process he makes boatloads of money. Other bankers at Giantbank hear about what Bob is up to. They go out to lunch and, over their martinis and steaks, discuss what Bob is doing. In the next booth, some folks from Hugebank overhear them. Word gets around.

In a relatively short time, it is clear to many other bankers that Bob isn’t the only person to fudge some numbers. Moreover, they consider him part of their in-group. To them, fudging the numbers now becomes accepted behavior, at least within the realm of “staying competitive” and “maximizing shareholder value.”*

Similarly, consider this scenario: one bank uses its government bailout money to pay out dividends to its shareholders (or maybe the bank just keeps the cash instead of lending it). Soon, the CEOs of other banks start viewing this as appropriate behavior. It is an easy process, a slippery slope. And it’s the kind of thing that happens all around us every day.

BANKING, OF COURSE, is not the only place this unfortunate kind of escalation takes place. You can find it anywhere, including governing bodies such as the U.S. Congress. One example of deteriorating social norms in the U.S. legislative halls involves political action committees (PACs). About thirty years ago, these groups were established as a way for members of Congress to raise money for their party and fellow lawmakers to use during difficult election battles. The money comes primarily from lobbyists, corporations, and special-interest groups, and the amounts they give are not restricted to the same degree as contributions to individual candidates. Aside from the requirements that the money be taxed and reported to the FEC, few restrictions are placed on how PAC funds can be used.

As you might imagine, members of Congress have gotten into the habit of using their PAC funds for a gamut of non-election-related activities—from babysitting bills to bar tabs, Colorado ski trips, and so on. What’s more, less than half of the millions of dollars raised by PACs has gone to politicians actually running in elections; the rest is commonly put toward fund-raising, overhead, staff, and other expenses. As Steve Henn of the public radio show Marketplace put it, “PACs put the fun in fundraising.”1

To deal with the misuse of PAC money, the very first law that Congress passed after the 2006 congressional election was intended to limit the discretionary spending of Congress members, forcing them to publicly disclose how they spent their PAC money. However, and somewhat predictably from our perspective, the legislation seemed to have no effect. Just a few weeks after passing the law, the congressmen were behaving as irresponsibly as they had before; some spent the PAC money going to strip clubs, blowing thousands of dollars on parties, and generally conducting themselves without a semblance of accountability.

How can this be? Very simple. Over time, as congressmen have witnessed fellow politicians using PAC funds in dubious ways, their collective social norm has taken a turn for the worse. Little by little, it’s been established that PACs can be used for all kinds of personal and “professional” activities—and now the misuse of PAC funds is as common as suits and ties in the nation’s capital. As Pete Sessions (a Republican congressman from Texas) responded when he was questioned about dropping several grand at the Forty Deuce in Las Vegas, “It’s hard for me to know what is normal or regular anymore.”2

You might suspect, given the polarization in Congress, that such negative social influences would be contained within parties. You might think that if a Democrat breaks the rules, his behavior would influence only other Democrats and that bad behavior by Republicans would influence only Republicans. But my own (limited) experience in Washington, D.C., suggests that away from the watchful eye of the media, the social practices of Democrats and Republicans (however disparate their ideologies) are much closer than we think. This creates the conditions under which the unethical behavior of any congressman can extend beyond party lines and influence other members, regardless of their affiliation.

ESSAY MILLS

In case you’re unfamiliar with them, essay mills are companies whose sole purpose is to generate essays for high school and college students (in exchange for a fee, of course). Sure, they claim that the papers are intended to help the students write their own original papers, but with names such as eCheat.com, their actual purpose is pretty clear. (By the way, the tagline of eCheat.com was at one point “It’s Not Cheating, It’s Collaborating.”)

Professors, in general, worry about essay mills and their impact on learning. But without any personal experience using essay mills and without any idea about what they really do or how good they are, it is hard to know how worried we should be. So Aline Grüneisen (the lab manager of my research center at Duke University) and I decided to check out some of the most popular essay mills. We ordered a set of typical college term papers from a few of the companies, and the topic of the paper we chose was (surprise!) “Cheating.”

Here is the task that we outsourced to the essay mills:

 

When and why do people cheat? Consider the social circumstances involved in dishonesty, and provide a thoughtful response to the topic of cheating. Address various forms of cheating (personal, at work, etc.) and how each of these can be rationalized by a social culture of cheating.

 

We requested a twelve-page term paper for a university-level social psychology class, using fifteen references, formatted in American Psychological Association (APA) style, to be completed in two weeks. This was, to our minds, a pretty basic and conventional request. The essay mills charged us from $150 to $216 per paper in advance.

Two weeks later, what we received would best be described as gibberish. A few of the papers attempted to mimic APA style, but none achieved it without glaring errors. The citations were sloppy and the reference lists abominable—including outdated and unknown sources, many of which were online news stories, editorial posts, or blogs, and some that were simply broken links. In terms of the quality of the writing itself, the authors of all of the papers seemed to have a tenuous grasp of the English language and the structure of a basic essay. Paragraphs jumped clumsily from one topic to another and often collapsed into list form, counting off various forms of cheating or providing a long stream of examples that were never explained or connected to the thesis of the paper. Of the many literary affronts, we found the following gems:

 

Cheating by healers. Healing is different. There is harmless healing, when healers-cheaters and wizards offer omens, lapels, damage to withdraw, the husband-wife back and stuff. We read in the newspaper and just smile. But these days fewer people believe in wizards.

 

If the large allowance of study undertook on scholar betraying is any suggestion of academia and professors’ powerful yearn to decrease scholar betraying, it appeared expected these mind-set would component into the creation of their school room guidelines.

 

By trusting blindfold only in stable love, loyalty, responsibility and honesty the partners assimilate with the credulous and naïve persons of the past.

 

The future generation must learn for historical mistakes and develop the sense of pride and responsibility for its actions.

 

At that point we were rather relieved, figuring that the day had not yet arrived when students can submit papers from essay mills and get good grades. Moreover, we concluded that if students did try to buy a paper from an essay mill, just like us, they would feel they had wasted their money and wouldn’t try it again.

But the story does not end there. We submitted the essays we purchased to WriteCheck.com, a website that inspects papers for plagiarism, and found that half of the papers we received were largely copied from existing works. We decided to take action and contacted the essay mills to request our money back. Despite the solid proof from WriteCheck.com, the essay mills insisted that they had not plagiarized anything. One company even threatened us with litigation and claimed that it would contact the dean’s office at Duke to alert the dean that I had submitted work that was not mine. Needless to say, we never received that refund . . .

The bottom line? Professors shouldn’t worry too much about essay mills, at least for now. The technological revolution has not yet solved this particular challenge for students, and they still have no other option but to write their own papers (or maybe cheat the old-fashioned way and use a paper from a student who took the class during a previous semester).

But I do worry about the existence of essay mills and the signal that they send to our students—that is, the institutional acceptance of cheating, not only while they are in school but after they graduate.

How to Regain Our Ethical Health?

The idea that dishonesty can be transmitted from person to person via social contagion suggests that we need to take a different approach to curbing dishonesty. In general, we tend to view minor infractions as just that: trivial and inconsequential. Peccadilloes may be relatively insignificant in and of themselves, but when they accumulate within a person, across many people, and in groups, they can send a signal that it’s all right to misbehave on a larger scale. From this perspective, it’s important to realize that the effects of individual transgressions can go beyond a singular dishonest act. Passed from person to person, dishonesty has a slow, creeping, socially erosive effect. As the “virus” mutates and spreads from one person to another, a new, less ethical code of conduct develops. And although it is subtle and gradual, the final outcome can be disastrous. This is the real cost of even minor instances of cheating and the reason we need to be more vigilant in our efforts to curb even small infractions.

So what can we do about it? One hint may lie in the Broken Windows Theory, which George Kelling and James Q. Wilson laid out in a 1982 Atlantic article. Kelling and Wilson argued that keeping order in dangerous neighborhoods takes more than simply putting additional police on the beat. If people in a run-down area of town see a building with a few broken, long-unrepaired windows, they will be tempted to break even more windows and do further damage to the building and its surroundings, creating a blight effect. Based on this logic, they suggested a simple strategy for preventing vandalism: fix problems when they are small. If each broken window (and each other small transgression) is repaired or addressed immediately, potential offenders will be much less likely to misbehave.

Although the Broken Windows Theory has been difficult to prove or refute, its logic is compelling. It suggests that we should not excuse, overlook, or forgive small crimes, because doing so can make matters worse. This is especially important for those in the spotlight: politicians, public servants, celebrities, and CEOs. It might seem unfair to hold them to higher standards, but if we take seriously the idea that publicly observed behavior shapes the behavior of those who see it, then their misbehavior can have far greater downstream consequences for society at large. Despite this, celebrities too often receive lighter punishments for their crimes than the rest of the population, which may suggest to the public that these crimes and misdemeanors are not all that bad.

THE GOOD NEWS is that we can also take advantage of the positive side of moral contagion by publicizing the individuals who stand up to corruption. Sherron Watkins of Enron, Coleen Rowley of the FBI, and Cynthia Cooper of WorldCom are great examples of individuals who stood up to misconduct inside their own organizations, and in 2002 Time magazine named them its Persons of the Year.

Acts of honesty are incredibly important for our sense of social morality. And although they are unlikely to make sensational headlines, if we understand social contagion, we must also recognize the importance of publicly promoting outstanding moral acts. With more salient and vivid examples of commendable behavior, we might be able to shift what society views as acceptable and unacceptable, and ultimately improve our actions.