11
Questioning Authority
I know of no safe depository of the ultimate power of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion.
Thomas Jefferson
When psychologists have explored the relationship between individuals and authority figures, they have found that it can be disturbingly easy for false experts to manipulate the thinking and behavior of others. One of the classic experiments in this regard was conducted in the early 1960s by Stanley Milgram, who wanted to see how far people would go in following orders given by a seemingly authoritative scientist. The subjects of Milgram’s research were taken into a modern laboratory and told that they would be helping conduct an experiment that involved administering electric shocks to see how punishment affected the learning process. The subjects were seated at a machine called a “shock generator,” marked with a series of switches ranging from “slight shock” to “severe shock.” Another person was designated as a “learner” and was hooked up to receive a jolt each time he gave the wrong answer on a test. A third individual, the “scientist,” stood over the experiment giving instructions and supervision.
Unbeknownst to the real subjects of the experiment, both the “learner” and the “scientist” were actors, and no actual electricity was used. As each fake shock was administered, the “learner” would cry out in pain. If the subject administering the shocks hesitated, the “scientist” would say something like, “Although the shocks may be painful, there is no permanent tissue damage, so please go on,” or “It is absolutely essential that you continue.” The result was that many subjects continued to administer shocks, even when the “learner” claimed heart trouble, cried out, or pleaded to be set free. “With numbing regularity,” Milgram observed, “good people were seen to knuckle under the demands of authority and perform actions that were callous and severe. Men who are in everyday life responsible and decent were seduced by the trappings of authority, by the control of their perceptions, and by the uncritical acceptance of the experimenter’s definition of the situation, into performing harsh acts.”[2]
In another famous experiment, known as the “Doctor Fox Lecture,” a distinguished-looking actor was hired to give a meaningless lecture titled “Mathematical Game Theory as Applied to Physical Education.” The talk, deliberately filled with “double talk, neologisms, non sequiturs, and contradictory statements,” was delivered before three audiences composed of psychiatrists, social workers, psychologists, educators, and educational administrators, many of whom held advanced degrees. After each session, audience members received a questionnaire asking them to evaluate the speaker. None of them saw through the lecture as a hoax, and most reported that they were favorably impressed with the speaker’s expertise.[3]
The rich and powerful seem to be no better at seeing through bogus experts than anyone else. In September 1999, the Wall Street Journal announced the arrest of Martin A. Armstrong, charged with bilking Japanese investors out of $950 million. “For decades,” the Journal reported, “Armstrong sold himself to investors as an expert on anything of precious value, from coins minted by the Egyptian pharaohs to turn-of-the-century U.S. stamps, not to mention current-day markets for stocks, bonds, commodities and currencies. Now, Mr. Armstrong . . . stands accused in a federal indictment of using this market-wizard image to conduct one of the most common frauds in the history of finance: making big promises to investors that he couldn’t deliver.”
Armstrong’s “self-confident forecasting style” had made him a hit at conferences where he addressed hundreds of Japanese corporate chieftains. Even as his currency deals were losing hundreds of millions of his investors’ dollars, “Armstrong continued to confidently sell himself as a forecaster of market trends, often in language in which he mocks others’ mistakes,” the Journal noted. “Mr. Armstrong’s reams of investing treatises, many posted on his website, range from the monetary history of Persia to the ‘Panic cycle in global capital flows.’ The historical data maintained by his Princeton Economic Institute has been used by many media outlets. The ‘first and most important rule about investing is to “Know what you are buying and why!” ’ he warned in a July 1997 report. . . . He wasn’t shy about promotion, jumping at the chance to have his picture taken with heavyweights in any market in which he was playing. Princeton Economics’ website, which is filled with Mr. Armstrong’s essays on the market, shows a photo of Mr. Armstrong with former United Kingdom Prime Minister Margaret Thatcher at one of the firm’s conferences in 1996.”[4]
It is tempting to look at these examples and despair. If people are this easily duped, how can anyone hope to expert-proof themselves? The answer, of course, is that no one can, but there are some things we can all do to improve our chances.
Recognizing Propaganda
Between World Wars I and II, the rise of the public relations industry in the United States and the growing use of propaganda by fascist and communist governments prompted a group of social scientists and journalists to found a remarkable organization called the Institute for Propaganda Analysis. The IPA published a periodic newsletter that examined and exposed manipulative practices by advertisers, businesses, governments, and other organizations. Fearlessly eclectic, it hewed to no party lines and focused its energies on studying the ways that propaganda could be used to manipulate emotions. It is best known for identifying several basic types of rhetorical tricks used by propagandists:
1. Name-calling. This technique, in its crudest form, involves the use of insult words. Newt Gingrich, the former Speaker of the U.S. House of Representatives, is reported to have used this technique very deliberately, circulating a list of negative words and phrases that Republicans were instructed to use when speaking about their political opponents—words such as “betray,” “corruption,” “decay,” “failure,” “hypocrisy,” “radical,” “permissive,” and “waste.” The term “junk science,” which we discussed in Chapter 9, is an obvious use of this same strategy. When name-calling is used, the IPA recommended that people should ask themselves the following questions: What does the name mean? Does the idea in question have a legitimate connection with the real meaning of the name? Is an idea that serves my best interests being dismissed through giving it a name I don’t like?
2. Glittering generalities. This technique is a reverse form of name-calling. Instead of insults, it uses words that generate strong positive emotions—words like “democracy,” “patriotism,” “motherhood,” “science,” “progress,” “prosperity.” Politicians love to speak in these terms. Newt Gingrich advised Republicans to use words such as “caring,” “children,” “choice,” “commitment,” “common sense,” “dream,” “duty,” “empowerment,” “freedom,” and “hard work” when talking about themselves and their own programs. Democrats, of course, use the same strategy. Think, for example, of President Clinton’s talk of “the future,” “growing the economy,” or his campaign slogan: “I still believe in a place called Hope.”
3. Euphemism. This technique is another type of word game. Rather than attaching positive or negative connotations, euphemisms merely obscure the meaning of what is being talked about by replacing plain English with deliberately vague jargon. Rutgers University professor William Lutz has written several books about this strategy, most recently Doublespeak Defined. Examples include the use of the term “strategic misrepresentations” as a euphemism for “lies,” or the term “employee transition” as a substitute for “getting fired.” Euphemisms have also transformed ordinary sewage sludge into “regulated organic nutrients” that don’t stink but merely “exceed the odor threshold.”
4. Transfer is described by the IPA as “a device by which the propagandist carries over the authority, sanction, and prestige of something we respect and revere to something he would have us accept. For example, most of us respect and revere our church and our nation. If the propagandist succeeds in getting church or nation to approve a campaign in behalf of some program, he thereby transfers its authority, sanction, and prestige to that program. Thus, we may accept something which otherwise we might reject.” In 1998, the American Council on Science and Health convened what it called a “blue-ribbon committee” of scientists to issue a report on health risks associated with phthalates, a class of chemical additives used in soft vinyl children’s toys. People familiar with ACSH’s record on other issues were not at all surprised when the blue-ribbon committee concluded that phthalates were safe. The committee’s real purpose, after all, was to transfer the prestige of science onto the chemicals that ACSH was defending.
5. Testimonial is a specific type of transfer device in which admired individuals give their endorsement to an idea, product, or cause. Cereal companies put the pictures of famous athletes on their cereal boxes, politicians seek out the support of popular actors, and activist groups invite celebrities to speak at their rallies. Sometimes testimonials are transparently obvious. Whenever they are used, however, the IPA recommends asking questions such as the following: Why should we regard this person (or organization or publication) as a source of trustworthy information on the subject in question? What does the idea amount to on its own merits, without the benefit of the testimonial?
6. Plain folks. This device attempts to prove that the speaker is “of the people.” Even a geeky multibillionaire like Bill Gates tries to convey the impression that he’s just a regular guy who enjoys fast food and popular movies. Politicians also use the “plain folks” device to excess: George Bush insisting he eats pork rinds; Hillary Clinton slipping into a southern accent. Virtually every member of the U.S. Senate is a millionaire, but you wouldn’t know it from the way they present themselves.
7. Bandwagon. This device attempts to persuade you that everyone else supports an idea, so you should support it too. Sometimes opinion polls are contrived for this very purpose, such as the so-called “Pepsi Challenge,” which claimed that most people preferred the taste of Pepsi over Coca-Cola. “The propagandist hires a hall, rents radio stations, fills a great stadium, marches a million or at least a lot of men in a parade,” the IPA observed. “He employs symbols, colors, music, movement, all the dramatic arts. He gets us to write letters, to send telegrams, to contribute to his cause. He appeals to the desire, common to most of us, to follow the crowd.”
8. Fear. This device attempts to reach you at the level of one of your most primitive and compelling emotions. Politicians use it when they talk about crime and claim to be advocates for law and order. Environmentalists use it when they talk about pollution-related cancer, and their opponents use fear when they claim that effective environmental regulations will destroy the economy and eliminate jobs. Fear can lead people to do things they would never otherwise consider. Few people believe that war is a good thing, for example, but most people can be convinced to support a specific war if they believe that they are fighting an enemy who is cruel, inhuman, and bent on destroying all that they hold dear.
The IPA disbanded at the beginning of World War II, and its analysis does not include some of the propaganda devices that came to light in later years, such as the “big lie,” based on Nazi propaganda minister Joseph Goebbels’s observation that “the bigger the lie, the more people will believe it.” Another device, which the IPA did not mention but which is increasingly common today, is the tactic of “information glut”—jamming the public with so many statistics and other information that people simply give up in despair at the idea of trying to sort it all out.
To get an idea of how sophisticated modern propaganda has become, compare the IPA’s list of propaganda techniques with another list—the 12 points that consultant Peter Sandman advises his clients to bear in mind when attempting to minimize public outrage over health risks. Like the IPA, Sandman is primarily interested in the emotional factors that influence the public rather than in what he and his clients consider the “rational, real” issues related to risk and public harm. His points, however, bear little surface similarity to those on the IPA’s list:
1. Voluntary vs. coerced. Sandman observes that people are less likely to become outraged over risks that they voluntarily assume than over risks that are imposed upon them against their will. “Consider,” he suggests, “the difference between getting pushed down a mountain on slippery sticks and deciding to go skiing.”
2. Natural vs. industrial. People tend to trust what can be promoted as natural: organic food or natural means of pest control.
3. Familiar vs. exotic. “Exotic, high-tech facilities provoke more outrage than familiar risks (your home, your car, your jar of peanut butter),” Sandman observes.
4. Not memorable vs. memorable. If you want to minimize outrage, not memorable is preferable. “A memorable accident—Love Canal, Bhopal, Times Beach—makes the risk easier to imagine,” Sandman explains. A memorable symbol or image can do the same thing. This is why evidence of genetically modified crops harming colorful Monarch butterflies prompted more concern than similar evidence of harm to other insects.
5. Not dreaded vs. dreaded. For example, diseases like cancer, AIDS, plague, and tuberculosis create a great deal more public concern than others, such as heart disease.
6. Chronic vs. catastrophic. Thousands of people are killed each year in highway accidents, but rarely in large groups. Plane accidents are much rarer and cause fewer deaths overall, but because a single crash can kill many people at once, air travel is much more widely feared than car travel.
7. Knowable vs. unknowable. People tend to be less apprehensive about risks that are known and measurable than about risks that cannot be measured. The unknowable aspects of some risks make them more upsetting.
8. Individually controlled vs. controlled by others. Individuals can decide whether they smoke cigarettes, exercise, or drive cars. They often can’t decide whether a factory emits pollution in their community.
9. Fair vs. unfair. “People who must endure greater risks than their neighbors, without access to greater benefits, are naturally outraged,” Sandman says, “especially if the rationale for so burdening them looks more like politics than science.”
10. Morally irrelevant vs. morally relevant. Arguing that a risk is small will fall on deaf ears if creating the risk is morally wrong in the first place. “Imagine a police chief insisting that an occasional child molester is an ‘acceptable risk,’ ” Sandman says.
11. Trustworthy sources vs. untrustworthy sources. The “third party technique,” which we have discussed throughout this book, is a PR strategy built around the effort to put industry messages in the mouths of seemingly trustworthy sources.
12. Responsive process vs. unresponsive process. “Does the agency come across as trustworthy or dishonest, concerned or arrogant?” Sandman asks. “Does it tell the community what’s going on before the real decisions are made? Does it listen and respond to community concerns?”
At his best, Sandman is advising companies to listen to the public and respond to its concerns. In practice, however, his advice often lends itself to manipulation. One way that industry and government bodies try to make it appear that their activities are being accepted “voluntarily” rather than “coerced,” for example, is to create so-called community advisory panels (CAPs) to seek the advice of people who live where their facilities are located. One of Sandman’s clients, the U.S. Department of Energy, used this tactic in trying to overcome the objections of Nevada residents to the DOE’s efforts to establish a national dump site for high-level nuclear waste at Yucca Mountain, Nevada. “The Secretary of Energy announced that there would be a ‘citizen advisory panel’ to discuss the Yucca Mountain project,” recall Judy Treichel and Steve Frishman, who have led the state’s campaign to block the project. “However, the real purpose of the panel was to invite opponents of the site such as ourselves to draft standards that would make the Yucca Mountain program acceptable. We were also invited to workshops in which government, industry and public representatives were supposed to ‘prioritize your values.’ Then we were supposed to ‘trade off’ our values in order to reach an acceptable compromise. Our response was to ‘just say no.’ We were then told that we were being ‘unreasonable.’ ”[5]
What both the IPA’s list and Peter Sandman’s 12 points have in common is that they focus on emotional issues rather than the public’s rational concerns. This is indeed a pattern that is common to propagandists in general. The modern-day propagandists who work in advertising and public relations can tell you endless stories that “prove” how easily news and public opinion can be manipulated by irrational appeals. This is just the way people are, they say. This is how the media works. And indeed, only someone who is blind to history would deny that emotional and irrational appeals have frequently succeeded in manipulating the public. This, however, is only a partial truth about human nature. People are complicated creatures with multifaceted personalities. The poet Ezra Pound, for example, was simultaneously a sensitive artist and a vulgar, anti-Semitic shill for Mussolini’s Fascists. A lot of the way we behave depends upon which parts of our personality express themselves. If you appeal to someone’s better nature, you will get a different result than if you appeal to the same person’s worst impulses. In a world full of propaganda, it is hardly surprising that some of the worst appeals succeed. What propagandists can’t tell you, however, is whether and to what degree the public’s irrationality is a self-fulfilling prophecy of their own creation. That is a question that perhaps you can answer better than they can, by learning to tell the difference between communication strategies that treat you like a child and strategies that treat you like an adult.
Growing Up Guided
The difference between the world of a child and the world of an adult can largely be described in terms of control, competence, and responsibility. When you were a child, you had little control over decisions that affected you. You were expected to eat what you were given, go to school at the assigned time, go to sleep at a designated bedtime, and so forth. Adults made the decisions because it was assumed that you lacked the capacity to decide for yourself. Even the decisions you did make were not necessarily binding, and it was your parents, not you, who were responsible for the consequences of your mistakes.
As an adult, you are responsible for all these decisions and more. The responsibilities of adults in fact extend beyond their actual areas of competence, which explains a lot about the way the world works. If you want to build an addition to your home, you hire a contractor. To take care of your health, you hire a physician; for legal matters, an attorney. You buy shoes from a company with expertise in manufacturing footwear. In all of these situations, the fact that you yourself lack expertise is not much of a problem, because you know what you want, and the expert’s job is simply to fulfill your wishes. In the words of the philosopher Georg Hegel, “We do not need to be shoemakers to know if the shoes fit, and just as little have we any need to be professionals to acquire knowledge of matters of universal interest.”
With regard to decisions about public issues, expertise in terms of skill, knowledge, or experience is often less important than basic questions of values. Is abortion wrong? Is it moral to deny medical care to a child whose parents have no health insurance? Should murderers be put to death? Is it acceptable to perform medical experiments on human beings without their consent? There are no scientific answers to these questions, or thousands more like them. They can only be answered by asking ourselves what we believe and what we value. In addressing these questions, finding knowledgeable experts is actually less important than finding experts who share our values. This doesn’t mean that knowledge is unimportant. Knowledge matters, whether you are deciding about abortion or hiring someone to remodel your kitchen. But the contractors who remodel your kitchen don’t get to tell you what color to paint the walls or whether you should have wood versus linoleum floors. Their advice is limited to letting you know how much each option will cost. In a democracy, that’s the kind of deference we should expect from experts on public policy. And a contractor who spends a lot of time studying ways to minimize your outrage is probably not someone you really want to hire.
When hiring a contractor, you can turn to a state licensing board or the Better Business Bureau to see if someone has valid credentials and a reputation for doing honest work. There is no such system for accrediting public policy experts. However, if someone makes claims of a scientific nature you can ask what kind of education, licensing, and other credentials they possess in the field for which they are claiming expertise. It is also worth asking how experts rank among their peers, although you should bear in mind that every profession has its blind spots and tends to “circle the wagons” against outside criticisms. To judge from the literature of the American Medical Association, for example, you would think that malpractice lawsuits are a bigger problem than actual medical malpractice. As a rule of thumb, you should assume that specialists in any field are given to underestimating harm for which their own profession is responsible.
Expertise is justifiably linked in the public’s mind to talent, skill, education, and experience. There are also a number of stereotypical attributes that are unjustifiably linked to expertise, and it is important to avoid relying on them. These stereotypes include age, wealth, maleness, whiteness, self-confidence, credentials, specialization, and techno-elitism. When evaluating a speaker’s message, it is worth asking yourself if you are giving him extra points for having gray hair, a deep voice, an impressive-sounding degree, and a distinguished-looking business suit.
Scientific Uncertainties
Our society’s esteem for science actually tends to encourage the very unscientific notion that science is a source of infallible truths. In fact, all science is uncertain to some degree. Nature is complex, and research is difficult. The most that science can tell us about a given question is that there is a strong probability that such-and-such an answer is true. To understand scientific information, therefore, it helps to understand something about the statistical techniques that scientists use to quantify uncertainty. One of the classic journalistic textbooks on the subject is News and Numbers: A Guide to Reporting Statistical Claims and Controversies in Health and Other Fields, by the late Victor Cohn, a former science editor at the Washington Post.
Scientists live with uncertainty by measuring probability. An accepted numerical expression is the P value, a statistical calculation of the probability that a given result could have occurred just by chance. A P value of .05 or less—the conventionally accepted cutoff for “statistical significance”—means there are only five or fewer chances in 100 that a result reported in a scientific study could have happened by chance alone. When studying health risks, statistical significance is often impossible to achieve. If something kills one in 1,000 people, you would have to study several thousand people in order to achieve a P value of .05 or less, and even then confounding factors might call your result into question. “A condition that affects one person in hundreds of thousands may never be recognized or associated with a particular cause,” Cohn says. “It is probable and perhaps inevitable that a large yet scattered number of environmentally or industrially caused illnesses remain forever undetected as environmental illnesses, because they remain only a fraction of the vastly greater normal case load.”[6]
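Cohn’s arithmetic can be made concrete with a short calculation. What follows is a minimal sketch, in Python, of the standard normal-approximation formula for comparing two event rates; the 1-in-1,000 baseline risk, the assumed doubling of that risk, and the 80 percent power target are illustrative choices of ours, not figures from Cohn’s book.

from math import ceil, sqrt
from statistics import NormalDist

def subjects_per_group(p1, p2, alpha=0.05, power=0.80):
    # Approximate number of subjects needed in EACH group to
    # distinguish event rates p1 and p2 with a two-sided test
    # at significance level alpha and the stated power.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# A hazard that doubles a baseline death rate of 1 in 1,000:
print(subjects_per_group(0.001, 0.002))  # roughly 23,500 people per group

Under these assumptions, detecting even a doubling of a one-in-a-thousand death rate takes tens of thousands of subjects per group, which is exactly Cohn’s point: risks of this size routinely slip beneath the conventional threshold of statistical significance.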
If you find any of these concepts difficult to grasp, you can take comfort in the fact that you are not alone. “Every major study of statistical presentations in the medical literature has found very high error rates, even among the best journals,” says Thomas Lang, medical editing manager at the Cleveland Clinic Foundation and coauthor of How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. “Many of those errors were serious enough to call the authors’ findings into question.”
There are some specific guidelines to consider when evaluating scientific information. Cohn recommends that when someone tells you they’ve done a study you should ask, “What kind? How confident can you be in the results? Were there any possible flaws in the study?” The last question is particularly important, he says, because the answer may tell you whether you are dealing with an honest investigator or a salesperson who is trying to convince you of a particular point of view. “An honest researcher will almost always report flaws,” Cohn says. “A dishonest one may claim perfection.” Other questions to ask include:
• What kind of study protocol was used? Is enough information offered to satisfy you that the research method is sound in its design and that its conclusions are reliable?
• Why was the study performed?
• What is the study’s statistical significance and margin of error?
• Was it submitted to independent peer review? Has it been published in a reputable scientific journal? (Bear in mind, however, that authors can pay to have scientific findings published, even in some peer-reviewed journals.)
• Are the results consistent with the results from other studies performed by other researchers?
• Is there a consensus among people in the same field?
• Who disagrees with you, and why?
Asking some of these questions may seem daunting. Scientific studies are laden with jargon of the trade that makes it difficult for outsiders to understand—words like “chi-square,” “allele,” “epizootic,” and so forth. Don’t let the language put you off. Often you can find a friendly scientist at your local university who is willing to translate things into plain English. University scientists are trained and paid to be educators, and many of them are happy to assist an intelligent, motivated person with questions. Above all, don’t be afraid to ask, and don’t let the incomprehensible stuff intimidate you. If someone wants you to believe something, the burden of proof should be on them to explain it to you in language that you can understand. If something is too complicated to explain, maybe it’s also too complicated to be safe.
The Precautionary Principle
Given the uncertainties inherent to science (and to all human endeavors), we are strong believers in the importance of the precautionary principle, which we discussed in Chapter 6. Throughout this book, we have also stressed the importance of democracy in making decisions about technology and its impact upon people’s lives. The reason that democracy matters in science and scientifically influenced policy is precisely that uncertainty exists and that different people reach different conclusions about important issues. Debate and compromise are the processes through which people resolve these differences. When a new technology is introduced, such as nuclear power or genetic engineering, some people will focus entirely on its potential benefits while ignoring the dangers. Others will focus on the dangers and ignore the potential benefits, while still others fill in the continuum of opinion between these two poles. In an ideal decision-making process, the interplay of these differing views will hold the “reckless innovators” in check but enable beneficial innovations to move forward once the concerns of the “fearmongers” have been thoroughly vetted in scientific and public forums. This process may slow the pace at which new technologies are introduced, which is indeed part of the point of having a democratic decision-making process.
By training and enculturation, most experts in the employ of government and industry are technophiles, skilled and enthusiastic about the deployment of technologies that possess increasingly awesome power. Like the Sorcerer’s Apprentice, they are enchanted with the possibilities of this power, but often lack the wisdom necessary to perceive its dangers. It was a government expert, Atomic Energy Commission chairman Lewis L. Strauss, who promised the National Association of Science Writers in 1954 that atomic energy would bring “electrical energy too cheap to meter” within the space of a single generation.[7]
Turn to the back issues of Popular Science magazine, and you will find other prophecies so bold, so optimistic, and so wrong that you would be better off turning for insight to the Psychic Friends Network. If these prophecies had been correct, we would by now be jet-packing to work, living in bubble-domed cities beneath the ocean, and colonizing the moon and Mars. The cure for cancer, like prosperity, is always said to be just around the corner, yet somehow we never actually turn that corner. Predictions regarding computers are notorious for their rhetorical excess. “In from three to five years, we will have a machine with the general intelligence of an average human being,” MIT computer scientist Marvin Minsky predicted in 1970. “I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months, it will be at a genius level, and a few months after that, its power will be incalculable.”[8] Expert predictions of this sort have been appearing regularly ever since, although the day when computers will be able to grease your car (let alone read Shakespeare) keeps getting pushed back.
The views of these techno-optimists deserve to be part of the decision-making process, but they should not be allowed to crowd out the views and concerns of the skeptics—the people who are likely to experience the harmful effects of new technologies and who deserve to play a role in deciding when and how they should be introduced. Just as war is too important to leave to the generals, science and technology are too important to leave in the hands of the experts.
Opponents of the precautionary principle have caricatured it as a rule that “demands precautionary action even in the absence of evidence that a health or environmental hazard exists” and says “if we don’t know something we mustn’t wait for studies to give answers.” This is not at all its intent. It is a guide for policy decisions in cases where knowledge is incomplete regarding risks that are serious or irreversible and that are unproven but plausible in the light of existing scientific knowledge. No one is suggesting that the precautionary principle should be invoked regarding purely fanciful risks. There are legitimate debates over whether a risk is plausible enough to warrant the precautionary principle. There are also reasonable debates over how to implement the precautionary principle. However, groups that seek to discredit the principle itself as “unscientific” are engaged in propaganda, not science.
Follow the Money
When you hire a contractor or an attorney, they work for you because you are the one who pays for their services. The PR experts who work behind the scenes and the visible experts who appear on the public stage to “educate” you about various issues are not working for you. They answer to a client whose interests and values may even run contrary to your own. Experts don’t appear out of nowhere. They work for someone, and if they are trying to influence the outcome of issues that affect you, then you deserve to know who is paying their bills.
Not everyone agrees with this position. Jeff Stier is the associate director of the American Council on Science and Health (ACSH), which we described in Chapter 9. Stier goes so far as to claim that “today’s conventional wisdom in favor of disclosing corporate funding of research is a ‘new McCarthyism.’ ” Standards of public disclosure, he says, should mirror the standards followed in a court of law, where “evidence is admissible only if the probative value of that evidence exceeds its prejudicial effect.” To disclose funding, he says, can have a “prejudicial effect” if it “unfairly taints studies that are scientifically solid.” Rather than judging a study by its funding source, he says, you should simply ask whether its “hypothesis, methodology and conclusion” measure up to “rigorous scientific standards.”[9] When we asked him for a list of ACSH’s corporate and foundation donors, he used these arguments to justify his refusal. With all due respect, we think Stier’s argument is an excuse to avoid scrutiny. Even in a court of law, expert witnesses are required to disclose what they are being paid for their testimony.
Some people, including the editors of leading scientific journals, raise more subtle questions about funding disclosure. The problem, they say, is knowing where to draw the line. If someone received a small grant 20 years ago from a pharmaceutical company to study a specific drug, should they have to disclose that fact whenever they comment about an entirely different drug manufactured by the same company? And what about nonfinancial factors that create bias? Nonprofit organizations also gain something by publicizing their concerns. They may have an ideological ax to grind, and publicity may even bring indirect financial benefits by helping attract new members and contributions. Elizabeth Whelan of ACSH made these points during a letter exchange with Ned Groth of Consumers Union. “You seem to believe that while commercial agendas are suspect, ideological agendas are not,” Whelan complained. “This is a purely specious distinction. . . . A foundation’s pursuit of an ideological agenda—perhaps one characterized by a desire for social change, redistribution of income, expanded regulatory control over the private sector, and general promotion of a coercive utopia—must be viewed with at least as much skepticism and suspicion as a corporation’s pursuit of legitimate commercial interests.”[10]
There is a certain amount of truth to Whelan’s line of reasoning. Nevertheless, corporate funding is particularly important to track, for the following reasons:
• Corporations are consistently driven by a clear and self-evident bias—namely, the desire to maximize profits, whereas assessing “ideological bias” in nonprofit foundations is itself subjective and ideological.
• Even if money doesn’t always create bias, it is a leading indicator of bias. Some nonprofit groups receive their money from the public at large or from a broad sector of the public. Consumers Union, for example, receives the majority of its funding from consumers who join in order to receive its publication, Consumer Reports. Groups such as ACSH receive a large percentage of their money from major corporations. Elizabeth Whelan may believe every word she says about the safety of pesticides, and perhaps she would have ended up believing the same things even if she had never received a dollar from the chemical and food industries. Nevertheless, the funding differences between Consumers Union and ACSH offer a fairly clear indication of whose interests are served by each organization.
• The money that corporations pour into influencing public policy is huge compared to the expenditures of nonprofit organizations. In 1998, for example, environmental organizations spent a total of $4.7 million on lobbying Congress. The sum total for all single-issue ideological groups combined—pro-choice advocates, anti-abortionists, human rights groups, feminists, consumer organizations, senior citizens, and a variety of other groups—was $76.2 million. By contrast, the agribusiness industry alone spent $119.3 million, and the lobbying expenditures of all industries combined added up to $1.2 billion. (The quick calculation after this list puts these figures side by side.) These numbers are just lobbying money and do not include campaign contributions, “soft money,” or any of the other ways that corporations buy political influence. Of course, no one is truly immune from ideological bias. As a practical matter, however, the biases you need to worry about the most are the biases held by people who have the money and power to influence government policies that affect your life.[11]
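The disproportion is easy to see with a minimal sketch that uses only the 1998 figures quoted in the list above:

# 1998 lobbying expenditures cited above, in millions of dollars.
environmental_groups = 4.7
all_single_issue_groups = 76.2
agribusiness_alone = 119.3
all_industries = 1200.0  # i.e., $1.2 billion

# Agribusiness alone outspent every single-issue group combined:
print(agribusiness_alone / all_single_issue_groups)  # about 1.6 to 1
# Industry as a whole outspent all ideological groups combined:
print(all_industries / all_single_issue_groups)      # about 16 to 1
# And outspent environmental groups:
print(all_industries / environmental_groups)         # about 255 to 1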
The simplest way to find out who is funding an organization is to ask. Request an annual report or a list of institutional donors. Don’t just ask who is paying the bills; ask how much money is involved. Spin doctors have mastered the art of the “nondenial denial.” Remember the strategy that Philip Morris used to conceal its role as the creator and primary funder of The Advancement of Sound Science Coalition: “We will not deny being a corporate member/sponsor, will not specify dollars, and will refer them to the TASSC ‘800-’ number.”[12] The strategy of admitting to being a sponsor while refusing to specify dollar amounts was designed to deflect questions while avoiding outright lies that could embarrass the company if its funding role was later exposed.
Even if an organization itself doesn’t disclose its funding, sometimes the information is available from other sources. Examine the interests and affiliations of the organization’s board of directors. If the organization refuses to make any of this information publicly available or hedges its answers, that in itself is cause for suspicion.
The Devil in the Details
In addition to examining someone’s funding sources, you can also learn a lot about them by asking what positions they have taken in the past on specific issues. Pay attention to nuances. Industry front groups like to portray themselves as moderate and representing the “middle ground.” Watch for words like “sensible,” “responsible,” and “sound” in organization names. Just as the true mission of The Advancement of Sound Science Coalition was to stigmatize science that inconvenienced its sponsors, a group called “Citizens for Sound Environmental Policy” is likely to be in the business of trying to discredit genuine environmentalists. Industry-sponsored organizations frequently adopt misleading names. Examples have included the Foundation for Clean Air Progress, the National Environmental Policy Institute, the National Wilderness Institute, the Science and Environmental Policy Project, the Council for Solid Waste Solutions, Citizens for Sensible Control of Acid Rain, and the Alliance for Responsible CFC Policy.[13]
Be especially skeptical of “think tanks,” which have proliferated in recent years as a way of generating self-serving scholarship to serve the advocacy goals of industry. Rather than centers for research and analysis, many of today’s think tanks are little more than public relations fronts, usually headquartered in state or national seats of government. Washington Post columnist Joel Achenbach says, “We’ve got think tanks the way other towns have firehouses. This is a thoughtful town. A friend of mine worked at a think tank temporarily and the director told him when he entered, ‘We are white men between the ages of 50 and 55, and we have no place else to go.’ ”[14]
Funded by big business and major foundations, think tanks devise and promote policies that shape the lives of everyday Americans: Social Security privatization, tax and investment laws, regulation of everything from oil to the Internet. They supply experts to testify on Capitol Hill, write articles for the op-ed pages of newspapers, and appear as TV commentators. They advise presidential aspirants and lead orientation seminars to train incoming members of Congress.
Think tanks have a decided political leaning. There are twice as many conservative think tanks as liberal ones, and the conservative ones generally have more money. This is no accident, as one of the important functions of think tanks is to provide a backdoor way for wealthy business interests to promote their ideas. “Modern think tanks are nonprofit, tax-exempt, political idea factories where donations can be as big as the donor’s checkbook and are seldom publicized,” notes Tom Brazaitis, writing for the Cleveland Plain Dealer. “Technology companies give to think tanks that promote open access to the internet. Wall Street firms donate to think tanks that espouse private investment of retirement funds.” So much money now flows in that the top 20 conservative think tanks now spend more money than is contributed in “soft money” to the Republican Party.[15]
A think tank’s resident experts carry titles such as “senior fellow” or “adjunct scholar,” but this does not necessarily mean that they even possess an academic degree in their area of claimed expertise. Elsewhere in this book we have criticized the ways that outside funding can corrupt the integrity of academic institutions. The same corrupting influences affect think tanks, only more so. Think tanks are like universities minus the students and minus the systems of peer review and other mechanisms that academia uses to promote diversity of thought. Real academics are expected to conduct their research first and draw their conclusions second, but this process is reversed at most policy-driven think tanks. As economist Jonathan Rowe has observed, the term “think” tanks is a misnomer. His comment was directed at the conservative Heritage Foundation, but it applies equally well to many other think tanks, regardless of ideology: “They don’t think; they justify.”
Demand Accountability
One of the reasons that life in the information age has become such a welter of conflicting claims is that journalists have failed to live up to their responsibilities. Reporters are supposed to be one rung up from the average citizen on the information ladder, and they have a responsibility to verify the credentials and reliability of their sources. When they allow their reportage to be leavened with propaganda, they cheapen and degrade their product just as surely as a baker who adds sawdust to his flour. If you see a news story that fails to identify the background, credentials, and potential bias or conflicts of interest of a cited authority, complain. Send a letter, make a phone call.
The scientific press is expected to meet a higher standard of accountability than the general press. When it fails to meet this standard, the harm is multiplied, because general news reporters often repeat information that appears in scientific journals, using even less fact-checking than they would apply to information from other sources. In December 1999, for example, the British Medical Journal published a “study” claiming that shaken (not stirred) martinis have beneficial antioxidant properties. The so-called study was part of the BMJ’s annual joke issue. It accompanied other similarly humorous papers examining the effects of “too much sax” on jazz musicians, the frequency of swearing by surgeons, and the question of whether young women named Sharon are more likely to contract sexually transmitted diseases. To drive home the point that this was all tongue-in-cheek, the BMJ’s martini study made frequent pointed references to James Bond, commenting that “the well known fictional secret agent . . . not only is astute in matters of clandestine affairs at a personal and international level but may also possess insights of interest to medical science. . . . 007’s profound state of health may be due, at least in part, to compliant bartenders.” Notwithstanding these efforts to clue in the clueless, wire services including Reuters, Knight-Ridder, the Associated Press, UPI, and Scripps Howard all distributed stories on the martini’s newfound power to ward off cancer and heart disease. Reports on the “anti-aging oomph” of shaken martinis appeared as straight-faced news in more than 100 publications, including the New York Times, Houston Chronicle, London Financial Times, Chicago Sun-Times, Milwaukee Journal Sentinel, Seattle Times, Forbes magazine, and, of course, Playboy.[16]
Not only does the media fail to adequately investigate the information it reports; it often fails even to disclose information that is readily available. Take, for example, the thousands of video news releases (VNRs) that are incorporated into television news broadcasts. TV news directors certainly know who supplies their VNRs, and it would be very easy to place small subtitles at the bottom of the screen stating where they came from—for example, “Footage supplied by Pfizer Pharmaceutical.” This is almost never done, mainly because the stations themselves realize that it would be embarrassing if people found out how much of their so-called news is actually canned material supplied by PR firms. It can only be hoped that as the public becomes better educated about the use of VNRs and other public relations tactics, pressure will be brought to bear upon the media to reform itself.
Inviting Public Participation
The slogan “question authority” first arose during the radical movements of the 1960s. It contains a great deal of wisdom, but it is inadequate. We need authorities in our lives—people we can trust to fix our cars and computers, to assist us when we become sick, to help us understand and better manage our world. The question really is what kind of relationship we should have with authorities. Should it be a relationship in which the experts regard the rest of us as “a herd to be led,” in the words of Edward Bernays? Or should it be a relationship in which the experts regard themselves as servants of the public? The issue is not whether authorities should exist, but how to make them accountable.
One approach to addressing this problem has been developed by the Loka Institute, an organization based in Amherst, Massachusetts, that has been working since 1987 to promote ways that grassroots citizens and workers can become involved in the scientific process and technological decision-making. It has been studying a type of citizens’ panel called a “consensus conference.” Sometimes referred to as a “policy jury” or a “citizens’ jury,” a consensus conference is similar in some ways to the randomly selected juries used in U.S. courtrooms, except that instead of judging criminal cases, they attempt to reach verdicts on matters of public policy. To organize a consensus conference around a particular topic, advertisements are published seeking local “lay volunteer participants” who are chosen to reflect the demographic makeup of the community and who lack significant prior knowledge or involvement in the topic at hand. The final panel might consist of about 15 people, including homemakers, office and factory workers, and university-educated professionals. The participants engage in a process of study, discussion, and consultation with technical experts that culminates in a public forum and a written report summarizing the panel’s conclusions.
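As a rough illustration of what selecting panelists to “reflect the demographic makeup of the community” can involve, here is a minimal quota-sampling sketch in Python. The single occupation attribute, its target shares, and the panel size of 15 are illustrative assumptions drawn loosely from the paragraph above, not a description of any actual consensus-conference procedure.

import random

# Illustrative target shares for one attribute; real conferences
# balance several attributes (age, sex, education, occupation) at once.
TARGET_SHARES = {"homemaker": 0.2, "office": 0.3, "factory": 0.3, "professional": 0.2}
PANEL_SIZE = 15

def pick_panel(applicants, seed=0):
    # applicants: a list of (name, occupation) tuples. Greedily picks
    # PANEL_SIZE people so the panel's mix tracks TARGET_SHARES.
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)  # break ties randomly among similar applicants
    panel = []
    counts = {occupation: 0 for occupation in TARGET_SHARES}
    while len(panel) < PANEL_SIZE and pool:
        # Take whichever applicant's occupation is currently most
        # underrepresented relative to its target count.
        best = max(pool, key=lambda p: TARGET_SHARES[p[1]] * PANEL_SIZE - counts[p[1]])
        pool.remove(best)
        panel.append(best)
        counts[best[1]] += 1
    return panel

applicants = [("volunteer%d" % i, occ) for i, occ in enumerate(
    ["homemaker", "office", "factory", "professional"] * 10)]
print(pick_panel(applicants))

Nothing in the sketch is specific to occupations; a real selection procedure would weigh several demographic attributes at once and would start from a much larger pool of volunteers.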
The use of consensus conferences was pioneered in Denmark and is now being widely adopted in Europe as a process for giving ordinary citizens a real chance to make their voices heard in debates on technology policy. “Not only are laypeople elevated to positions of preeminence, but a carefully planned program of reading and discussion culminating in a forum open to the public ensures that they become well-informed prior to rendering judgment,” says Loka Institute director Richard Sclove. “Both the forum and the subsequent judgment, written up in a formal report, become a focus of intense national attention—usually at a time when the issue at hand is due to come before Parliament. Though consensus conferences are hardly meant to dictate public policy, they do give legislators some sense of where the people who elected them might stand on important questions. They can also help industry steer clear of new products or processes that are likely to spark public opposition.”[17]
The Loka Institute also advocates increased funding for “community-based research” that is initiated and often carried out in collaboration with civic, grassroots, and workers’ groups. “This research differs from the bulk of the research and development conducted in the United States, most of which—at a cost of over $200 billion per year—is performed in response to business, military, or government needs or in pursuit of academic interests.”[18] In 1994, Sclove notes, the Pepsi company announced plans to spend $50 million—approximately five times as much as the total annual U.S. investment in community-based research—to reinvent its Doritos-brand tortilla chips, intensifying the flavor on the outer surface, rounding the chip’s corners, and redesigning the package. “A society that can afford $50 million to reinvent the Doritos chip can do better than $10 million for community-based research,” he says.
If “community-based research” sounds like some pie-in-the-sky idea, Sclove points out that it is already a common practice in Holland, where the Dutch have developed a network of “science shops” that respond to some 2,000 research requests each year. Other science shops have been established in Austria, the Czech Republic, Denmark, England, Germany, Malaysia, Northern Ireland, and Romania, as well as in the United States. In Highlander, Tennessee, for example, a local community group worked with university researchers to conduct health surveys and to videotape waste dumping by a local tanning company that was polluting the town’s drinking water. In New York City, high school students collected and analyzed data on diesel exhaust exposure and lung function among their fellow students, coauthoring an article that was published in the July 1999 issue of the peer-reviewed Journal of Public Health.
These are only a couple of examples of how increased democracy and citizen participation could be brought to bear upon the scientific and policymaking process. The obstacles to doing this are not technical or economic; they are social and political. Society’s failure to incorporate citizen participation into the scientific process reflects our assumption that scientific topics are too complex for the average citizen. In 1992, however, a study conducted by John Doble and Amy Richardson of the Public Agenda Foundation, a nonprofit organization founded by opinion pollster Daniel Yankelovich, found that even people who don’t normally pay attention to scientific issues can do a good job of making science-related policy decisions. Doble and Richardson recruited a representative cross-section of 402 people from different parts of the United States to participate. They were given short, balanced presentations about two technically complex issues—global warming and solid waste disposal—and were then asked to discuss and decide what they thought would be the best policy solutions for dealing with those issues. Doble and Richardson also polled 418 leading U.S. scientists regarding the same issues. By and large, they found, the lay participants in the study made the same policy choices as the scientists. With regard to global warming, for example, both groups favored more spending on mass transit, higher fuel-efficiency standards for cars, tax incentives to encourage energy conservation, and programs to plant trees. “Our conclusion from this exercise is that the public as a whole—not just those who are attentive to science—can intelligently assess scientifically complex issues, even when experts are uncertain,” Doble and Richardson stated.[19]
Even when the two groups made different policy choices, Doble added, the differences “seemed to stem not from different scientific understanding but from different value judgments.” For example, scientists “understood very clearly that nuclear power does not contribute to the global warming problem, and felt that the country needs to build more nuclear power plants by a very large margin. Sixty-eight percent of the scientists said that.” By contrast, only 36 percent of the nonscientists favored construction of nuclear power plants, but “in the discussion group, when people talked about the issue, it became clear that their concerns were not technical, they were managerial. . . . They didn’t trust the energy companies, they didn’t trust the utilities, they didn’t trust the government regulators, they didn’t trust the boards that oversee all this stuff, they didn’t trust those groups to manage the technology safely.” They understood the technical issues reasonably well, in other words, but for the public at large, those weren’t the most important issues.[20]
Activate Yourself
In understanding the hold that experts have on our lives, we should consider the role that we ourselves play as consumers of information. Most propaganda is designed to influence people who are not very active or informed about the topic at hand. There is a reason for this strategy. Propagandists know that active, informed people are likely to already hold strong opinions that cannot be easily swayed. The people who are most easily manipulated are those who have not studied a subject much and are therefore susceptible to any argument that sounds plausible.
Of course, there is no way that anyone can be active and informed about every issue under the sun. The world is too complex for that, and our lives are too busy. However, each of us can choose those issues that move us most deeply and devote some time to them. Activism enriches our lives in multiple ways. It brings us into personal contact with other people who are informed, passionate, and altruistic in their commitment to help make the world a better place. These are good friends to have, and often they are better sources of information than the experts whose names appear in the newspapers or on television. Activism, in our opinion, is not just a civic duty. It is a path to enlightenment.
This book has largely been a catalogue of disturbing trends and failures to live up to the promise of an informed, democratic society. It is important to remember that these are not universal trends. We have described failures in the way the news media does its job, but there are also enterprising, committed journalists who take seriously their responsibility to serve as the public’s eyes and ears. In addition to reporters, there are activist congressional aides, government whistle-blowers, public-interest groups, and even trial lawyers who actively investigate and challenge the official doctrines of government and industry. Maude De Victor, for example, was a 23-year counselor with the U.S. Veterans Administration when she noted a pattern of illness among army veterans who had been exposed to Agent Orange. She brought it to the attention of CBS news correspondent Bill Kurtis, whose resulting exposé earned him a coveted Peabody Award and three Emmys. De Victor herself was rewarded by being fired, blacklisted, and banned from full-time government work, but she is a heroine to people who long for government accountability and a world free of chemical toxins.
Activists and whistle-blowers come from all walks of life. Emelda West, a great-grandmother in her 70s, helped campaign against toxic releases in low-income communities in Louisiana. In Pensacola, Florida, Margaret Williams heads Citizens Against Toxic Exposure, a group formed in 1991 to battle the Environmental Protection Agency’s digging on a toxic site near her home. When residents—most of them elderly and not well-off financially—began suffering eye and skin irritations and breathing problems, she quickly learned about the poisonous effects of dioxin. Although her group lost the battle to stop the digging, it recently persuaded the federal government to pay for the relocation of all 358 families.
Terri Swearingen had activism thrust upon her in 1982. “I was pregnant with our one and only child,” she recalls. “That’s when I first learned of plans to build one of the world’s largest toxic waste incinerators in my community. When they began site preparation to begin building the incinerator in 1990, my life changed forever.”[21]
The incinerator, owned by a company called Waste Technologies Industries (WTI), was sited in East Liverpool, Ohio, just across the border from her home in West Virginia. It was situated in a floodplain, with homes nearby and an elementary school just 400 yards away. Worse yet, it was located in a valley that experiences frequent air inversions, which trap the air and prevent the escape of pollution. In short, it was about the worst place you could imagine for building a giant hazardous waste facility that emits dioxins, acid gases like hydrogen chloride, and heavy metals, including mercury, lead, and chromium.
“I’m a registered nurse,” Swearingen says, “so I’ve actually seen the effects of lead poisoning in young children and the types of behavioral or developmental problems that it produces. One of the first things I learned about WTI was that the government was going to let them emit 4.7 tons of lead annually. I thought, how can the government do this? How can they let them emit lead? Lead never breaks down. It never degrades. It just accumulates. When you know a little bit about the effects of lead, the rest is just common sense. That’s all you need to know to realize that they should never even consider building this thing next to a school.”
When Swearingen first began trying to fight the incinerator, she says she was “at ground zero.” She picked up Rush to Burn, a 1989 book about waste incineration by reporters at the newspaper Newsday. “I read the book twice and highlighted sections with the people involved. Then I just started calling them up and asking for help. They said, ‘You’re going to have to deal with this yourself.’ ”
She learned to tap the expertise of people such as Paul Connett, a professor of chemistry at St. Lawrence University whom Swearingen calls “our secret weapon.” Connett helped translate complex scientific data into information that the community could understand. Other advice came from Herbert Needleman, the University of Pittsburgh researcher who has studied the neurotoxicology of lead in children, and David Ozonoff, chairman of Boston University’s School of Public Health. To help challenge a risk assessment from the U.S. Environmental Protection Agency, she called on EPA whistle-blower Hugh Kaufman.
Swearingen herself led more than 20 civil disobedience protests against the incinerator and even testified before the U.S. Congress. She helped make the incinerator such a high-profile issue that in 1992 Al Gore, then a candidate for vice president, promised to stop the project if he were elected. “The very idea of putting WTI in a floodplain, you know it’s just unbelievable to me,” Gore said. “For the safety and health of local residents rightfully concerned about the impact of this incinerator on their families and their future, a thorough investigation is urgently needed.”[22]
Like many politicians’ promises, this one turned out to be worth less than the air in which it vibrated. Once in office, Gore backed down—not surprisingly, since Little Rock investment banker Jackson Stephens, the Clinton-Gore campaign’s biggest financial backer, was involved in financing the incinerator.
But even though the WTI incinerator was not stopped, it became a turning point against the construction of new incinerators. Swearingen’s dogged protests—including her willingness to get arrested for the cause—gained enough attention to prompt Ohio Governor George Voinovich to halt future incinerator construction. The day after she was jailed for a demonstration in front of the White House, the Clinton administration declared a national moratorium on new incinerator construction and revised its rules to require stricter limits on the release of dioxin and heavy metals. In April 1997, she received the prestigious Goldman Environmental Prize in recognition of her leadership.
“I am not a scientist or a Ph.D.,” Swearingen said upon accepting the award. “I am a nurse and a housewife, but my most important credential is that I am a mother. . . . We know what is at stake. We have been forced to educate ourselves, and the final exam represents our children’s future. . . . Because of this, we approach the problem with common sense and with passion. We don’t buy into the notion that all it takes is better regulations and standards, better air pollution control devices and more bells and whistles. We don’t believe that technology will solve all of our problems. We know that we must get to the front end of the problems, and that prevention is what is needed.”[23]
She recalls talking about WTI recently with a 14-year-old girl. Upon learning that the incinerator was located next to a school, the girl blurted out, “But that wouldn’t take any research to know it’s wrong!” Swearingen marvels at a teenager’s ability to grasp in a single sentence the point that eluded the EPA in its four-year, 4,000-page risk assessment.
“We have to reappraise what expertise is and who qualifies as an expert,” Swearingen says. “There are the experts who are working in the corporate interest, who often serve to obscure the obvious and challenge common sense; and there are experts and non-experts who are working in the public interest. From my experience, I am distrusting more and more the professional experts, not because they are not clever, but because they do not ask the right questions. And that’s the difference between being clever and being wise. Einstein said, ‘A clever person solves a problem; a wise person avoids it.’ . . . Citizens who are working in this arena—people who are battling to stop new dump sites or incinerator proposals, people who are risking their lives to prevent the destruction of rain forests or working to ban the industrial uses of chlorine and PVC plastics—are often labeled obstructionists and anti-progress. But we actually represent progress—not technological progress, but social progress. We have become the real experts, not because of our title or the university we attended, but because we have been threatened and we have a different way of seeing the world.”[24]