15

Antipsychotics

Few discoveries in the history of science triggered anything like the welling of euphoria in the Western world that greeted the arrival of antipsychotic medication in 1954. Certainly none reached as deeply into the wells of human terror and desire as psychoactive drugs, with their promise of identity restored and protected, the Self insulated from demonic colonization.

Cold War–fighting, Red-Scared, bomb-shelter-digging, beatnik-averse America hungered for some good news about humanity’s prospects. What better news than that madness was about to be wiped out? Newspapers, magazines, and television wolfed down the great helpings of sugary press releases served up by factotums of the once-staid, newly Delphian drug companies. They raced to trumpet the latest breakthrough, quote the most utopian promises of corporate lab scientists. The Freudian high priests with their dreary old theories of “mind-cure” found themselves blindsided at the height of their hard-won prestige—blindsided by chemicals!

The very notion—Sanity in a bottle! Peace of mind in a popped pill!—so perfectly fit postwar America’s marketing-conditioned faith in E-Z solutions via consumer products that the wonder-drug blitzkrieg was complete almost as soon as it started. The companies’ onslaught of grandiose claims at first paralyzed the usual gatekeepers of the public interest. Who, after all, had the scientific savvy necessary to analyze and challenge them? Not the press, that great consortium of laymen reporting to laymen. Not public officials, mostly science-illiterate and lobbyist-friendly. Not the academy, generally lethargic in shifting its sights from dusty tomes toward civic affairs (unless, as events often proved, certain monetary considerations were proffered). And not consulting psychiatrists, far too many of whom were only too happy to reinvent themselves as pill prescribers as they saw their talk-therapy clientele abandon the couch for the nearest Rexall.

As a preamble to exploring the deeply flawed rise of Big Pharma (a rise built largely upon avarice, profiteering, deceptive and even false marketing, bribery, and, as its profits soared beyond imagination, a willingness to settle multimillion-dollar lawsuits out of court and proceed on to further perfidies), it is fair to acknowledge that the hope invested in the wonder drugs has not been completely misguided. Prescription antipsychotics, the good ones, anyway, have enabled millions of schizophrenic patients to experience quantum improvements in their lives. For them, the medications have restored cognition, suspended hallucinations, including “voices” inside the head, and enabled control over destructive irrational impulses. They have made it possible for mentally ill victims and their loved ones to resume communication, a gift beyond value. They have allowed uncounted sufferers to return to the workplace.

Modifications of the antipsychotic compound, known generally as psychotropics, have been effective in stabilizing the mood swings of bipolar sufferers. A host of antidepressant and antianxiety products, distinct from psychotropics* in their chemical makeup and their lesser potency, treat the complaints of “the worried well,” those one-in-four patients who crowd doctors’ appointment schedules with nothing especially awful to report. Whether the “worried well” actually need such sustenance is a question that has always triggered doubt. Lately, the doubt has been buttressed by new observations of mice. Laboratory mice, that is, not the other variety. Those are supervised by marketing experts.

Psychoactive drugs have impelled doctors and scientists to make their historic break with Freudian orthodoxy, shifting from “the mind” to the brain. As the new drugs were sweeping the world and turning pharmaceutical companies into financial empires, scientists remained ignorant as to what made them work. Lobbied and prodded by drug company salesmen—themselves as clueless as anyone—a great many doctors and psychiatrists dropped their initial skepticism and came to assume that antipsychotics cured chronic mental illness. They did not. They temporarily repressed its symptoms. When the patient stopped taking these meds, the symptoms came crashing back, sometimes with fatal results.

The antipsychotic revolution proved to have arrived with side effects. Within a few years of their appearance in the marketplace, and irrespective of their sustained bonanza of profits and popular prestige, these “miracle drugs” stumbled into quagmire after quagmire.

Deinstitutionalization was only the first. Antipsychotics, exuberantly sold to President Kennedy by his Joint Commission as “moral treatment in pill form,” proved utterly useless—as we have seen—in stemming the catastrophe that ensued. The half-century that followed saw an ongoing morality play starring pharmaceutical corporations, the government, a cadre of outraged watchdog writers and journalists, and a grassroots oppositional movement that hardened into an ideological force. Even as they created a market valued beyond $70 billion, the corporate purveyors of antipsychotic drugs have been sued by class-action groups, found guilty in courts of cover-ups and false claims, denounced by civil libertarians, and castigated as evil by victims of schizophrenia who believe that the use of such drugs made them worse instead of better.

No one really set out to find a chemical solution for irrational human behavior. Most intellectuals of the early twentieth century, including scientists, assumed that the Viennese master and his followers had that area covered. The origins of psychopharmacology lay, as with so many seminal discoveries, in a search for something else—something entirely different. Scientists have a term for this kind of sublime inadvertence: “serendipity.”

In the mid-1930s, certain French scientists were toiling along what they hoped was the path to a workable antihistamine. Something was needed to neutralize the problematic neurotransmitter histamine. This component in the immune system combats pathogens taken into the body via food and breathing. But the chemical can transform itself into part of the problem, a protopathogen that triggers allergies, such as hives, and damages the heart and smooth muscles. In extreme cases, it can cause the often fatal allergic reaction known as anaphylactic shock.

A series of breakthroughs that took antihistamine research in an unforeseen and revolutionary direction commenced in the late 1930s at the Pasteur Institute in Paris. Among the researchers was a brilliant young Swiss-born scientist with the bony features and intense gaze that called to mind Sherlock Holmes. His name was Daniel Bovet. In 1937, the thirty-year-old Bovet and his twenty-three-year-old colleague Anne-Marie Staub, who had been thinking about leaving biochemistry to become a nun and treat lepers, were trying to develop a “selective antagonist” of histamine that could be safely ingested by human beings. They synthesized an antihistamine with the self-explanatory name thymoxyethyldiethylamine.*

Among the substances whose properties aroused Bovet’s curiosity was a powerful chemical distilled from ergot, a fungus that grows on rye. Other scientists were examining it as well, and it was synthesized in the following year as lysergic acid—the first step along the trail toward LSD. Bovet was the first to extract from it an important truth: that even simple molecules can touch off powerful mood changes and perceptions. As a 2007 review of his career noted, his observations “exerted a marked influence in the field of psychopharmacology, and in particular psychedelic drugs… Bovet’s work helped to shape scientific thought regarding psychoactive drugs that are used in therapy today.”1

Bovet and Staub’s discovery of thymoxyethyldiethylamine set the stage for the use of synthetic drugs in brain therapy. It took time. Refinements of this compound, also (and thankfully) known as 929F, would consume nearly fifteen years before the resulting wonder drug stood revealed. Bovet himself conducted more than three hundred thousand experiments in the four years after discovering 929F to find the right formula.2 His efforts earned him recognition as the founder of psychopharmacology and, in 1957, the Nobel Prize in Medicine.

Bovet and Staub’s new synthetic chemical might have pointed the way only toward a good sneeze medicine, had it not caught the attention of a hovering Parisian pharmaceutical company, Rhône-Poulenc. The firm, formed just nine years earlier, was interested in the pharmaceutical uses of synthetic textiles, and it entered a collaborative partnership with the Pasteur Institute. A chemist named Paul Charpentier incorporated Bovet’s findings into his research toward a usable antihistamine. Within a short time, Charpentier perfected the compound promethazine, which went on the market as Phenergan. This drug not only acted against allergies; it produced a sedative effect as well. Thus it clearly was working upon the central nervous system—changing behavior. But its side effects in some patients—convulsions, increased heartbeat, fatigue, fever—made it a risky bet.

The relay baton was passed on to Henri Laborit, a dashing, dark-pompadoured physician, artist, movie actor (as himself), and wartime man of action. The summer of 1949 found Laborit, at age thirty-five, serving as a naval neurosurgeon at the Bizerte Naval Hospital in Tunisia. He was searching for some new medication that would reduce postoperative shock in victims of severe war wounds. He would soon light the spark to the combustible chain of research begun by Bovet and blast the psychotropic era into existence.

Laborit opened a package one day and found samples of Charpentier’s new mixture from Rhône-Poulenc. Laborit took the compound into the lab, tested its properties, and intuited that its sedative powers offered possibilities beyond curing infection and runny noses. It produced hypothermia, or what Laborit called “artificial hibernation”—a kind of drug-induced anesthesia. Returning to Paris, he asked Charpentier for additional compounds that might increase the potency and reduce the side effects. In one of those tests, on December 11, 1950, Charpentier added a chlorine atom to the promazine molecule and produced chlorpromazine (CPZ).

Laborit continued his lab tests before trying it out on a living person. In February 1952, he and others administered some trial doses at the Val-de-Grâce military hospital in Paris. The doses, he reported, did not automatically put his patients to sleep. But they did produce an indifference to what was going on around them, and to their own pain.

In that year, Laborit recommended CPZ for use on patients in emotional or mental pain, though he still saw it mainly as an anesthetic. Rhône-Poulenc made it available by prescription in France as Largactil (“large in action”). In that same year the small, flailing American firm Smith, Kline & French (SKF) took a chance on buying license rights for the substance in the United States.

The thing was, nobody yet knew exactly what to do with it. Smith Kline invited Laborit to come to America and show off its “artificial hibernation” capacities to surgeons. This time, his magic went missing. His experimental dogs kept dying after their doses. Laborit went back home. He turned to writing and produced more than twenty books on science and evolutionary psychology. His ideas about free will and memory attracted the attention of Alain Resnais, a pioneer in the French school of New Wave filmmaking. He played himself in Resnais’s mold-breaking 1980 movie Mon oncle d’Amérique, starring Gérard Depardieu. He faced the camera at intervals in this story of three characters groping for their destiny yet conditioned by their past, their lives implicitly compared to white rats in a lab cage. The film won the Grand Prize at Cannes. Laborit died in 1995.

As for Smith Kline, it began to think it had bought the rights to something like a failed high school chemistry experiment. Before cutting its losses, the little company decided to invite one last French scientist to come over and see what he could do. This was Pierre Deniker, a colleague of Laborit’s and one of the first to understand the true value of chlorpromazine. With his square, white crew-cut head, dark brows, and tight little grin, he called to mind a small-town police chief. Unlike Laborit, who had the appearance of a man who might at any moment don a beret and drag a woman across a cabaret stage, Deniker projected a businesslike demeanor. Like Laborit, he possessed a brilliant mind.

Deniker had already proved his knack for selling the new compound. Early in 1952, Rhône-Poulenc dispatched him to the Sainte-Anne Psychiatric Hospital in Paris, where he wowed the staff with a confident air. As recounted by the psychological historians Steve D. Brown and Paul Stenner, Deniker arrived one morning at Sainte-Anne, “from where the mentally ill, whom the Paris police have picked up the night before, are distributed. ‘How many do you want for the clinic this morning?’ the charge nurse asks… Normally these patients were not ‘particularly welcome’ on the wards… Nonetheless, Deniker… tells her he will take them all. He says to the nurse, ‘We have found a trick that works.’”3

It did work. Not that day, but within a week or so, the “mentally ill” subjects (none of whom, of course, had been clinically diagnosed with schizophrenia; nor, for that matter, been cured of it by Deniker) had improved dramatically in their behavior. Shackles and restraints were no longer necessary. They were recognizably “normal” again. Or nearly so. Or so it seemed.

Soon after that, Deniker took his “trick” on the road to the United States, appearing at hospitals and academies along the eastern seaboard. Again, he coaxed dubious professionals into listening to his claims and then witnessing the remarkable calm that descended upon their patients within several days. Demonstrating that he intuited where the real power lay, Deniker also looked into state mental institutions, dosing selected inmates and persuading their administrators that the wonder drug could radically diminish their overcrowded wards. Again, the results were amazing. The wardens, eager to impress state legislatures, sent the word along.

In 1954 CPZ appeared as an American prescription drug, approved by the FDA and licensed and distributed by Smith, Kline & French (today, GlaxoSmithKline). Its proprietary name was Thorazine.

Nothing like this could have been dreamed, or even conceptualized, by those generations of luckless inmates at Bedlam, or their keepers. The era of “technological solutions to mental disorders,” as it has been described, stood at the threshold of history.

Antipsychotics work as suppressants. Another generic name for them is “neuroleptics,” because they function by producing neurolepsis in the brain through blocking certain transmissions. Neurolepsis is a condition of emotional quiescence, indifference to surroundings, and the tamping down of “psychomotor” function—that is, of the effect of thought impulses on physical movement. These results are quite similar to the negative symptoms of schizophrenia.

The target of most neuroleptics is the neurotransmitter dopamine. More specifically, the target is one of dopamine’s five receptors, D2, “the primary site of action for all antipsychotics,” in the phrasing of one paper.4 Receptors are proteins that bind to neurotransmitters and move chemical information along through neural pathways. Dopamine, through its receptors, influences nearly every function of the body, regulating the flow of information from anatomical areas to the brain. Its signals enable us to pay attention, to learn, and to remember. It regulates bodily movement and augments the immune system. Its release enables people to experience pleasure from food, sex, recreation, and the abstract series of vibrations known as music. Dopamine is the Self’s Dr. Jekyll.

Except when it is transformed into Mr. Hyde. This can occur when its balance is disrupted—when there is either too much or too little of it. Stress is a leading cause of dopamine oversupply. Lack of sleep is another. Drugs, even prescription drugs and certainly “recreational” ones, are common causes. The consequences can be heightened anxiety, paranoia, adrenaline rushes, and hyperactivity—at its extreme, tardive dyskinesia: uncontrollable grimacing, tongue-thrusting, and other movements in the lower face. Given the right genes (that is, the wrong genes), all of this can produce schizophrenia.

Stress, a truly insidious state, can also lower the dopamine supply; as can obesity, a bad diet, and too much alcohol intake. Lower dopamine can result in Parkinson’s disease, depression, too much sleep, a lowered libido, aggressive behavior, and a lack of concentration, among other things. Antipsychotics are aimed at oversupply; they are “antagonists” to it.

Another important neurotransmitter is serotonin, discovered in 1948 by the Italian pharmacologist Vittorio Erspamer. Serotonin is known as “the happy chemical”: “happy” because the molecule is believed to safeguard the balance of mood in much the same way as dopamine. That, and because its receptors are the principal sites of action for LSD.

Unlike dopamine, serotonin is manufactured in two distinct regions of the body, and the chemicals from each do not cross the other’s boundaries. Up to 90 percent of the chemical is located in the gastrointestinal system, where it regulates bowel movements, aids the formation of blood clots that result from wounds, and hastens the expulsion of toxic food and drink via vomiting and diarrhea. Serotonin’s smaller but at least equally vital production point lies deep within the brain, in the raphe nuclei of the brainstem, whose projections supply the limbic system—the delicate collection of many tiny structures that govern human emotion and memory. From this region, serotonin patrols the same territory as dopamine: mood, appetite, memory, sleep, and libido.

In the constantly pruning and regenerating adolescent brain, serotonin is believed to be a critical gatekeeper of thought and behavior. If there is not enough of it, the brain is thought to be less able to inhibit impulses toward anger, aggression, anxiety, panic, fear, and depression. Raising serotonin levels is the goal of many antidepressant medications, which arrived and proliferated on a parallel trajectory with the antipsychotics and with the tranquilizers originally known as “mother’s little helpers.” Miltown, introduced in 1955, was the first of those tranquilizers, working its way into thirty-six million prescriptions in two years. The serotonin-targeting antidepressants that followed (Prozac, Zoloft, Paxil, and the rest) are known collectively as SSRIs: selective serotonin reuptake inhibitors. “Reuptake” refers to the absorption of neurotransmitters by the same nerves that released them. SSRIs block this process and leave more serotonin available for action.

This summary represents the accepted understanding of how antipsychotics work—accepted at least for a while. After all, it was only in 1975 that the Canadian researcher Philip Seeman discovered the D2 receptor. Before that, no one really knew how the new wonder drugs worked. They just worked. More or less.

And it appears that no one knows still.

Sales of Thorazine exploded on impact. Two million patients regarded as mentally disturbed ingested it by prescription in the first eight months. Smith, Kline & French, not a firm to let a single golden egg go uncracked, set an enduring pharmaceutical template by marketing its product as if it were a—product. It launched a barrage of print advertising in 1954 depicting Thorazine as the answer for just about anything that could ail a body short of dishpan hands: Arthritis. “Acute” alcoholism. “Severe” asthma. “Severe” bursitis. Behavior disorders in children. (It would be years before state and federal agencies started cracking down on indiscriminate and ethically reckless claims that psychotherapeutic drugs could help children, whose delicate neurochemistry is far more vulnerable to chemical distortion than adults’.) Menopausal anxiety. Gastrointestinal disorders. Psoriasis. Nausea and vomiting. Senile agitation. And, brushing once more against the border of ethical responsibility, even cancer.

These display advertisements set the stage for decades of charges that Big Pharma manipulated its customers’ expectations via weaselly marketing techniques. Nearly all of them avoided direct claims that Thorazine would cure the complaint in question, including schizophrenia. They insinuated cure, but they were really selling relief—relief from symptoms. The modifying adjectives acute and severe served subtle notice of this, as did the small print that preceded the big scare words. The display ad presenting Thorazine as the enemy of cancer was a case in point.

Among the small handful of ads that did make absolute claims was the one that trumpeted “Another dramatic use of ‘Thorazine’”: in eight out of ten patients, it stopped hiccups.6

By 1955 the drug had rocketed around the Western world, trailing clouds of money: to Switzerland, England, Canada, Germany, Hungary, Latin America, Australia, and the USSR. Within a year Thorazine increased the company’s sales volume by a third. SKF net sales increased from $53 million in 1953 to $347 million in 1970. In 1957, Deniker, Laborit, and their colleague Heinz Lehmann, who had supervised the first CPZ experiments in North America, shared the prestigious Albert Lasker Award for medical research.* The age of “miracle drugs” was under way.

Psychiatrists survived by transitioning from “talk therapy” with their patients to serving as de facto agents of the drug industry. Their classic roles as methodical counselors of troubled people, their time-consuming search to locate the causes of those troubles in traumas suffered earlier in life—these pursuits evaporated, replaced by a new medical expertise: prescribing the proper medications to control symptoms of discontent by altering brain function. “Brain disease” replaced “the unconscious” as a near-consensus truth. (Or, as the pioneering child psychiatrist Leon Eisenberg joked, American psychiatry shifted from brainlessness to mindlessness.) And once that ambiguous truth was settled, or at least agreed upon, commerce’s doors swung open to admit the stampede of new pharma-entrepreneurs.

Haloperidol (Haldol) was synthesized in Belgium in 1958; lithium, an alkali metal with industrial uses ranging from thermonuclear weaponry to airplane-engine grease, ceramics, optics, polyester clothing, and air purification, was adapted and put on the market for use against bipolar disorder in 1970. The list increased steadily: Mellaril, Prolixin, Navane. The number of antipsychotics available today, in widely varying degrees of effectiveness and public awareness, approaches fifty. And then came clozapine, a watershed drug in several ways.

Clozapine appeared a few years after Haldol. It was developed in the early 1960s by the Swiss company Sandoz and promoted as a drug that succeeded with patients who seemed unreachable by Thorazine and the other early meds. Further, its developers declared it to be effective against suicidal tendencies in patients. A serotonin antagonist, clozapine could block both dopamine and serotonin receptors, which multiplied its control of overflow. But not long after its debut, clozapine disappeared. It did not reappear for a decade. It was pulled because of the problem that has bedeviled the “wonder drug” makers virtually from the outset; that has cost them millions of dollars in fines and settlements and the stigma of scandal even as their revenues soared into the billions; that has done damage to many patients, and has never been eradicated. The problem was side effects.

Clozapine’s side effects, often virulent, were detected early by the corporate scientists who discovered it. These included seizures, constipation, weight gain, and, rarely, sudden death. None of those scientists or their employers said a word about these hazards. After all, the percentage of mental patients who experienced these ailments was low. And profits were high (though not as high as they soon would be). Not until researchers began linking it to a dangerous and sometimes fatal white blood cell depletion called agranulocytosis did Sandoz squeeze its corporate eyes shut and pluck it off the market.

Clozapine did not regain a place in the market until 1972, when it was sold in several European countries as Clozaril. Now—finally—it was carefully marketed as a medication to be administered only when patients’ symptoms showed resistance to other antipsychotics, or when the patients began to talk seriously about suicide. It remained, and remains, a dangerous potion for many people. This problem was addressed not via adjusting its components, but via advertising—beneficial advertising, for a change: when it finally appeared in America in 1993 (as clozapine), its packaging carried five serious “black box” warnings, including one for agranulocytosis.7 The FDA pledged that its use would be carefully monitored.

Clozapine, in ways good and bad, was the neuroleptic drug of the future. It was the first of the “atypical” or “second-generation” antipsychotics that began to appear on psychiatrists’ prescription lists in the 1970s. Drugmakers claimed that the new drugs were more versatile and carried less harmful side effects. The latter claim was contrary, of course, to the evidence of clozapine itself. But as we shall shortly see, exaggerated and outright false claims were already fast becoming the lingua franca of the brave new drug-world. The really ugly days lay ahead.

To open the dossier on the behavior of American and European pharmaceutical giants over the past quarter-century is to confront a fortified casino of riches and debauchery. Accounts of documented piratical atrocities gather, thicken, and expand like locusts blotting out the sun: corruption, criminality, contempt for public safety, buy-offs, payoffs, kickbacks, and the overarching pollution of medical integrity. All wrought by what seems a virulent new genetic strain of greed; a strain impervious to public disclosure, to staggering court fines, to continual calls to personal conscience and civic accountability. A sobering artifact of this scandal-beyond-scandal is the liberty that reformers feel to tar Big Pharma with analogies to the underworld, via use of such comparative terms as “organized crime” and “the Mafia.”

These accusers are part of a new generation of watchdogs. They have taken up the work of those who exposed the overcrowding and inhumanity of state psychiatric hospitals more than forty years earlier. They are medical journalists and unspecialized journalists; PhDs, psychiatrists, and doctors repulsed by the malfeasance they have witnessed firsthand; they are current and former psychiatric patients. They share outrage over the mockery of public trust as it has unfolded and enlarged in plain sight—their plain sight, at least.

Their published output keeps growing. From the beginning of the twenty-first century alone, books regularly hit the stores bearing such titles as Selling Sickness; Big Pharma (2006); Big Pharma (2015); The Big Pharma Conspiracy; Bad Pharma; Pharmageddon; Bad Science; The Truth About the Drug Companies; Overdiagnosed; Overdosed; Overdosed America; How We Do Harm; On the Take; Know Your Chances; Taking the Medicine; Death by Medicine; Our Daily Meds; Drugs, Power, and Politics; Pill Pushers; Poison Pills; and others.

The sheer tonnage of all this can leave one feeling that a vital membrane in the social fabric is tearing open under pressure; a membrane that already bears the weight of big banks and finance institutions; a membrane that has held us back from decadence.

The list of companies shamed yet undeterred by stupendous fines reads like an inventory of vial labels on the shelves behind the bathroom mirror: Johnson & Johnson, Pfizer, GlaxoSmithKline, Abbott Laboratories, and several others. Many of their product names are even more familiar: among them Risperdal, Bextra, Geodon, Zyvox, Lyrica, Abilify, Wellbutrin, Paxil, Advair, Zocor, OxyContin. Their perfidies, less so: off-label promotion (marketing a drug for uses not approved by the Food and Drug Administration), kickbacks, failure to disclose safety data, Medicare fraud, making false and misleading claims, and bribery, among others.

No category of medication, or medication user, has escaped the consequences of this collapse of medical and professional ethics. Certainly not the suffering and misinformed consumers of these products, the sick, the depressed, and the “worried well.” These are the ones who have paid the price, or the ransom: paid in dollars and often in the well-being of their minds and bodies. In some cases, they paid with their lives. The most defenseless category of all the victims, as usual, has been the mentally ill.

How did it happen? How could an industry with roots so deep in the venerated healing arts have turned so feckless, so grotesque? Where have you gone, Anne-Marie Staub?

It happened via a chain of causality. The most volatile element in that chain was a December 1980 bipartisan vote in Congress: an amendment to the patent law that went virtually unremarked in the punditry unleashed by the election of Ronald Reagan to the presidency.

Patents are rarely brought up in topical conversation, yet industries and state economies can rise and fall on them. The 1790 Patent Act was designed to financially protect the inventor of “any useful art” from profiteering by imitators. In 1967, the concept of “intellectual property”—roughly, the ideas of inventors that lead to new products—was given legal force by the World Intellectual Property Organization, an agency of the United Nations.* This and later patent-law refinements have led to a rapid product expansion of high technology and biomedicine, and have underscored the point of view that patent protection drives the US economy.

This particular patent act amendment was sponsored by the Democratic senator Birch Bayh and the Republican Bob Dole of Kansas. President Jimmy Carter, freshly unseated by Ronald Reagan, signed the new measure into law. Its intention, in simplest terms, was to stanch the US industrial drain to the Third World by creating new domestic industries almost from scratch. Among the most important of these would be the industries related to biotechnology.

To that end, the Bayh-Dole Act reversed decades of policy regarding ownership of inventions financed by federal funding. In a word, it privatized them. Previously, inventors (typically salaried research scientists from universities and nonprofit institutions) were required to turn over the rights to whatever they produced to the federal government. Now, these innovators, along with those from small businesses, could patent their own discoveries and take them to the marketplace: to the pharmaceutical companies, as a prime example. In 1979, universities secured 264 patents for research discoveries. By 2002, that number had increased to 3,291. In 2014, it stood at 42,584 in the biotechnology industry alone—an increase of almost three thousand from the previous year. In 2012, American universities earned $2.6 billion from patent royalties, according to the Association of University Technology Managers.

But holding a patent and realizing a profit from it are two different things. In recent years, this fact has begun to threaten the momentum and even the viability of the bonanza created on the good intentions of Bayh-Dole—not to mention the companies’ widely known competitive “rush to the market” to be first with a new patent. Litigation costs surrounding patent deals inevitably have skyrocketed, as have so-called “transaction costs”—the various obligations one incurs in putting any patent (or product) into play. High-stakes lawsuits have challenged the legality of so-called “me-too” drugs: copies of established brands, the molecular structures of which are altered just enough to justify a new patent and new riches. As a result, patent-holders began cautiously withholding their products from licensure. The useful scientific knowledge that some of these patents hold is withheld from society—wasted. The unrealized economic value of this waste to the American economy has been estimated at $1 trillion annually, or a 5 percent reduction in potential GDP.8*

This commodification of research had other consequences. It thrust universities and nonprofits into the hard-eyed venture-capital world. No longer would “serendipity” be permitted to work its happenstance magic, as it did with penicillin. No longer would time stand still for “pure” research—unpurposed, intuitive, trial-and-error experimentation that sometimes consumed decades of a scientist’s life. From now on, applied science (in which the result was expected or intended at the outset) would rule.

And in the bargain, the American public would now be required to pay commercial prices for the results of this publicly funded research—antipsychotic and antidepressant drugs included—having already paid, through taxes, the costs accrued by public universities and nonprofits in developing them. In the vernacular, this is known as paying twice.

Bayh-Dole was the second of two little-noticed landmark measures in 1980 that cleared the way for legal entrée into the human brain. The previous June, the Supreme Court had ruled (by only a 5-to-4 majority) that a living microorganism, modified by man into a useful substance, is eligible for a patent. That ruling laid the legal foundation for the transformational biotech industry.*

None of this is to imply that the University and Small Business Patent Procedures Act was intentionally insidious. Its sponsors saw it as a job-creator, a conduit for creative cooperation between industry and academia that would lead to proliferating new products, and in many other ways an unshackling of Yankee know-how for the betterment of society. And to varying degrees, it worked. But like a narcoleptic drug rushed to market without enough testing, it carried destructive side effects.

It worked for a circumscribed sector of the American economy: the once prim-and-dutiful pharmaceutical industry that now cavorted on a permanent gusher of cash; for the university research bio- and neurochemists who suddenly discovered that they could do well by doing good, and do sensationally by doing even a little bit more good; for the tech PhDs on campus who used public funding to conceive and sell increasingly miraculous computer-enabled machines and processes: brain-computer interfaces, high-resolution microscopes, neuron-controlling optogenetics, the CRISPR gene-editor, even DNA versions of their filament-and-molded-plastic selves. It worked for the insane and disease-ridden patients who found relief in a scattering of genuinely “breakout” medications. For others it didn’t work so well, especially consumers of needed medications for ailments mental and physical. They, too, paid twice.

It took a few years, given the public’s obliviousness to Bayh-Dole, for consumers to notice that they were paying twice (and then twicer and twicer and twicer). The consumers didn’t like this. But they paid. And paid. And paid, even as prices for drugs rose and many of the elderly recipients responded by cutting down on or doing without some of the prescribed meds in their regimens. Or all of the prescribed meds in their regimens.

The consumers paid and paid, even as Big Pharma’s rationale for constantly raising its prices—the “cost of research” involved in making the drugs—grew ever more hollow, given that the companies now were harvesting research from academia, or farming it out to other public entities for negotiable prices. (Their own, in-house research, as mentioned, tended increasingly toward the “me-too” micro-altering of patented compounds already on the market.)

And the consumers paid and paid, even as their anger at last rose. Some tried to develop grassroots strategies to combat the rising prices. Americans who lived a reasonable driving distance from the Canadian border, for instance, would cross that border to shop at pharmacies where they could buy their medications at a tenth of the US prices. Often they would make the trip in chartered buses. When the people on the buses were well along in years, as they usually were, a member of their state’s congressional delegation would sometimes ride along as an escort. (Vermont senator Bernie Sanders pioneered this practice in 1999.)

Here is a brief index of how much they paid, and how much the government paid: In the act’s first year, 1980, the government spent $55.5 billion (in 2000 dollars) for research and development. Sales of prescription drugs in that year stood at $11.8 billion.9 In 1997 the figure was $71.8 billion in pharmaceutical sales; federal funding totaled $137 billion, but a large proportion of that went to the Strategic Defense Initiative and other military programs. In 2014, consumer spending rose to $374 billion,10 with federal funding back down to $133.7 billion. That same year, the US Food and Drug Administration approved forty-one new pharmaceuticals, a record; and in 2015, the Wall Street Journal, citing research by the firm Evaluate Pharma, reported that global prescription drug sales were projected to grow by nearly 5 percent annually and reach $987 billion by 2020.11 The reason, in the insouciant words of the Journal reporter, was “largely attributed to a crop of new medicines for hard-to-treat illnesses such as cancer and fewer patents expiring on big-selling drugs” (emphasis added). In November 2015, Intercontinental Marketing Services Health projected the sales number at $1.4 trillion.12

That the rising flow of riches in the pharmaceutical industry might bring trouble in its wake—trouble in the form of litigation—apparently did not trouble the giddy pharma-entrepreneurs of the Reagan years and beyond. (And as the years went on and litigation spread and court settlements and fines seemed to add zero after zero to their totals, it grew clear that the entrepreneurs didn’t really care. Their sales figures were adding even more zeros.) The international catastrophe of thalidomide should have been recognized for the dreadful omen that it was, but for some reason it was not. The German-made drug, introduced in Europe in 1957 as a completely safe antidote to morning sickness and soon a global phenomenon, caused many thousands of birth defects—flipper-like arms, the absence of toes, legs, and ears—in the children of women who trustingly bought it. Roughly half the cases were fatal. Some of the litigation continues today. (Thalidomide was never approved for sale in the United States, yet the makers sent samples to American doctors, who passed them along to their unsuspecting patients, with the inevitable results.)

The first serious hint in America of bad consequences arrived with clozapine, whose makers flirted with legal reprisal before voluntarily withdrawing their new atypical drug. But it was the debut of another “atypical” drug, Risperdal (risperidone), that introduced Big Pharma to Big Lawsuit.

Risperdal went on the market in 1994, a product of Janssen, itself a subsidiary of the pharma giant Johnson & Johnson, the largest marketer of medications in the world. The drug was sold as a treatment for bipolar disorder and schizophrenia. Its makers assured the public of its safety based on three internal trials conducted before submitting it to the FDA for review. Somehow, the trials failed to reveal that the drug could cause fever, muscle stiffening, irregular heartbeat, trembling, fainting—and the dangerous movement disorder tardive dyskinesia.

These complaints caught the attention of consumer protection agencies in thirty-six states between 1993 and 2004.13 The states’ full catalog of complaints would have made a Gilded Age railroad baron recommend prayer and penitence. They included allegations that J&J paid kickbacks to the charmingly named Omnicare Inc., the largest nursing home pharmacy in America, to prescribe Risperdal to generally clueless seniors—many of them suffering from dementia. (Risperdal’s packaging includes a “black box” warning with this language: “Elderly patients with dementia-related psychosis treated with antipsychotic drugs are at an increased risk of death.” Risperdal of course is an antipsychotic. Caveat emptor.) The kickbacks included money, paid vacations for doctors, and “lucrative consulting agreements” in exchange for prescribing Risperdal to more patients.14 Johnson & Johnson decided to contest these charges and lost. Omnicare, for its part, agreed to pay $98 million to resolve claims that it accepted this booty and thus violated the False Claims Act.15

J&J later reached separate settlements with Texas in 2012 and Montana in 2014 for $158 million and $5.9 million, respectively. Settlements with other states added up to $181 million.

The dollar amounts of these state trial costs and settlements might lead a reasonable observer to conclude that they taught Big Pharma a lesson it would not soon forget. The reasonable observer is invited to read on, preferably while seated or holding on to a firm object. Not long after the turn of this century, settlements and verdicts against drug companies began to roll out from federal court trials on a monetary scale that obliterated the state-level penalties and threatened to obliterate the very notion of “scale” itself.

The first and largest of these decisions did not directly involve an antipsychotic drug, though the company in question had one of those (Saphris*) on the market. It is of interest because it highlights the entry of the Department of Justice into the arena, and because it opens a view into the realm of Big Pharma as it balances its growing flood tide of revenue against its ethical responsibility to public safety.

In 2007, the global health-care company Merck, with headquarters in New Jersey, agreed to pay $4.85 billion to settle lawsuits filed in relation to its medication Vioxx. The substance had been introduced in 1999 as an antidote to pain brought on by rheumatoid arthritis, then pulled off the market in 2004. The lawsuits numbered about twenty-seven thousand and covered forty-seven thousand sets of plaintiffs.16 They had been filed by Vioxx users or their relatives who claimed that the medication had resulted in heart attacks, many of them fatal. The Justice Department’s investigations turned up documents showing that Merck researchers had been aware of these risks but that the company had not reported them. In 2011, Merck agreed to pay an additional $426 million to the federal government, $202 million to state Medicaid agencies, and $321 million in criminal fines, to round out the litigation.

Vioxx alone had brought Merck revenue of $2.5 billion in 2003. That was big money in those days.

In September 2009, Pfizer settled with the government for $2.3 billion in a similar deal. This one involved Pfizer’s painkiller Bextra, which is no longer on the market. One of the plaintiffs in the case, a former Pfizer sales representative named John Kopchinski, told the New York Times, “The whole culture of Pfizer is driven by sales, and if you didn’t sell drugs illegally, you were not seen as a team player.”

In 2010, another subsidiary of Johnson & Johnson* agreed to a settlement of more than $81 million in civil and criminal penalties to resolve allegations in a federal suit that involved its drug Topamax. Once again, the charge was “off-label” marketing. Once again, the specific wrongdoings were more impersonally amoral than legal language can communicate, and once again, the prey of the predators were the nation’s mentally ill. The FDA had approved Topamax as an anticonvulsant drug. The Department of Justice charged that the makers illegally went beyond that by promoting Topamax for psychiatric problems.17

May of 2012 saw another stunning outlay. GlaxoSmithKline, spawn of the company that had stumbled onto Thorazine more than half a century earlier, agreed to pay $3 billion in criminal and civil penalties for illegal promotion of its antidepressants Paxil and Wellbutrin and the diabetes drug Avandia. The FDA found that GSK had not warned that children and adolescents taking Paxil showed increased tendencies toward suicide, and that pregnant women taking Paxil were more likely to have autistic babies. Wellbutrin, approved for depressive disorder, was promoted off-label as a remedy for weight loss, sexual dysfunction, substance addictions, and attention-deficit/hyperactivity disorder (ADHD), among other unapproved uses. As for Avandia, FDA clinical studies showed that it increased the risk of heart attack by 43 percent—and double that after a year of treatment.18

That same year, Abbott Laboratories drew $1.5 billion in criminal and civil fines after pleading guilty to misbranding Depakote, approved by the FDA to combat epileptic seizures and bipolar mania. Abbott admitted to recruiting a specialized sales force to peddle the drug to nursing homes as a control for agitation among the demented patients, and as a companion drug with antipsychotics to combat schizophrenia. Neither use had been shown to work in clinical tests; each was accompanied by side effects.19

In November 2013, Johnson & Johnson paid the federal piper for its fraudulent and dangerous marketing of Risperdal (one of Kevin’s medications). The price was $2.2 billion in criminal and civil fines. The charges against J&J replicated the earlier ones raised by the states and added one or two new ones. For instance: in its original review of Risperdal, the FDA had withheld approval for Janssen, the Johnson & Johnson subsidiary, to market the drug for children. Janssen did so anyway. In fact, Johnson & Johnson, it was alleged in court, directed its sales force to promote Risperdal to children’s doctors. Parents of male children who used the drug began reporting cases of gynecomastia, a swelling of breast tissue caused by imbalances of estrogen and testosterone.

These federal court victories over Big Pharma and its depredations might not have been possible without substantial impetus from that figure of dubious reputation in popular opinion, the whistle-blower. These were not malcontented cranks, as the stereotype has it, but educated professionals, men and women who typically had held high positions within the pharmaceutical industry. John Kopchinski had been a sales representative for Pfizer. They took risks—perhaps not life-threatening, yet scary enough for people unused to cloak-and-dagger intrigue. Most of them wore hidden recorders—wires—to company meetings to capture incriminating policy conversations. The resulting transcripts proved devastating. But the potential rewards—ahh, the rewards! The rewards offer another glimpse into the surreal levels of money flowing into Big Pharma from around the world.

John Kopchinski was awarded more than $50 million in whistle-blower fees for his role in exposing Pfizer. The six former employees who testified against Johnson & Johnson and its companies shared $102 million from the Department of Justice settlement.

The whistle-blower rewards, of course, paled before the largesse that poured into federal coffers from these cases. The Department of Justice announced in 2015 that it had realized more than $3.5 billion in settlements and judgments in the fiscal year ending September 30. This marked the fourth consecutive year that the department had exceeded that figure, and it brought recoveries since 2009 to a total of $26.4 billion.20

It does not seem to have mattered. Not a penny. Not a word.

Not even, to give a specific example, the announcement in 2010 by the reformist group Public Citizen that Big Pharma had become the biggest defrauder of the federal government, surpassing the defense industry. This did not generate national media discussion. Certainly not on the level of vitriol aimed at the evils of Obamacare.

None of it seems to matter to Big Pharma’s CEOs. Their names rarely appear in the press or on television, unless they are being honored as humanitarians by some myopic or bought civic organization. Their names appear even more rarely in news of the big settlements: corporate individuals are almost never held liable for even the worst company crimes. And the largest penalties, the ones reaching into the billions, scarcely match the value of a few weeks’ revenues.

“It’s just a cost of doing business,” one pharmaceutical analyst remarked of the cash penalties, and added, “until a pharmaceutical executive does a perp walk.”21

Looking back at it all from the vantage of nearly forty years as a doctor and medical researcher, and, before that, a marketing manager in the pharmaceutical industry, the Danish author Peter Gøtzsche voiced the inevitable analogy. “Much of what the drug industry does fulfills the criteria for organized crime in U.S. law,” Dr. Gøtzsche observed. “And they behave in many ways like the mafia does; they corrupt everyone they can corrupt, they have bought every type of person, even including ministers of health in some countries.”22

The perp walk will not likely happen soon. Nor are drug barons likely to burst into tears of epiphany at being compared to dangerous criminals. They and their companies have long since risen to take their place above the clouds of true accountability, alongside the banks and financial institutions, the firearm manufacturers, the tobacco industry, and the other global denizens of Too Big to Fail, Too Big to Nail.

And why should they not? No one cares about crazy people.