Chapter 4

The Language of Quality

FEBRUARY 25, 2000

New Iberia, Louisiana

Jose Hernandez sniffed the air. The forty-three-year-old FDA investigator stepped deeper into the K&K Seafoods crab-processing plant, which looked none too appetizing. His mind clicked over the regulations enshrined in the U.S. Food, Drug, and Cosmetic Act. Gazing inside the plant, he could practically see the relevant pages: the fish and fishery products “Hazard Analysis Critical Control Point” plan, Title 21, Code of Federal Regulations, Part 123.6(b).

But his nose signaled the most trouble. What was that odor? It reminded him of his Labrador, Livy, after a bath—that sodden, damp-wool smell of a dog soaked to the skin. Not a good sign in a seafood facility that was supposed to be operating under good manufacturing practices. He doubted the plant was safe for consumers.

Hernandez, balding with a dark mustache, glasses, and a runner’s physique, was the FDA’s resident-in-charge of the Lafayette, Louisiana, office, a four-man outpost. His job, as a badge-carrying FDA investigator, was to inspect the seafood manufacturing plants and small medical centers in the area. Hernandez got paid $45,000 a year, a salary on which he was supporting his wife and four children. The agency had no laptops, so Hernandez took handwritten notes while inspecting, then signed up for time to use the single desktop computer at the office to type them up. He wore coveralls and plastic boots while inspecting seafood plants.

It would not be everyone’s choice of jobs, but Hernandez thrived in it, and he’d begun to earn a reputation as one of the FDA’s smarter, more intuitive, and more energetic investigators. He lived in a rambling 5,000-square-foot house, which he was painstakingly renovating. He’d learned expert carpentry skills from his grandfather, who’d helped raise him in Puerto Rico. Hernandez graduated from the Inter American University in San Juan and began at the FDA as a generalist investigator in 1987. Though he didn’t have fancy graduate degrees, he had a mechanical mind. He knew how things were supposed to fit together and could tell when they didn’t. He could readily organize facts and remember them. He also had an uncanny sense for when something was wrong.

To relax, Hernandez worked on his house and, whenever possible, took his kids camping. But his mind was never at rest. It ranged continually over Title 21 of the Code of Federal Regulations of the Federal Food, Drug, and Cosmetic Act. He knew the regulations almost encyclopedically, but also kept them close at hand to reread. They were his scripture. “The guy preaching the Mass for thirty years always goes back [to the Book],” he said. “I never tried to answer from memory. I never tried to guess. You can never charge anybody with anything unless there’s a regulation.”

He mulled over patterns—the visible workflow in a plant versus the invisible machinations, the relationship between what he saw and what the regulations stipulated. During everyday activities, like drinking water from a plastic bottle, he’d recite the regulations to himself: 21 CFR 165.110 was for bottled water. The container holding the water was regulated differently than the water itself (21 CFR 1250.40). To him, the inspections were puzzles, and he was always trying to find the missing pieces.

Under FDA regulations, he only had to show his badge and any manufacturing facility regulated by his agency had to allow him full access to the plant and grounds. He never gave advance notice—none was required. Refusal to admit an FDA investigator could lead to a plant being shut down. He would stay as long as he felt was needed for a thorough review. That could mean one day or two weeks. He began each inspection by driving around the perimeter, taking in the broadest view. He thought of it as first looking through the wide lens of a camera. He could then zero in on the important stuff. When it came to K&K Seafoods, he knew exactly what to do. He needed to return when least expected, and presumably least wanted. Nighttime was when they cooked the crabs. “If you want to build a case,” he said, “you have to be there when things are happening.”

He went home for dinner. Once the kids were asleep, he returned to the plant at 9:00 p.m., and an unhappy-looking employee let him in. This time the wet-dog smell was even stronger. Hernandez followed it to the back of the plant, where he found a small kitchen and a pan on the stove with pieces of meat in it—dog meat. He headed out to the plant floor, where a man cooking the live crabs was chewing as he worked. He’d caught the employee red-handed, though the violation itself, 21 CFR 110.10(b)(8), was rather understated: no eating allowed in areas where food is being processed.

The FDA’s regulations were narrow and specific. It made no difference whether the man was chewing a cracker or a dog haunch. What Hernandez thought about it made no difference either. He couldn’t impose deeper sanctions for something particularly repulsive.

On the face of it, the K&K crabmeat plant was a noisome place that might have raised anyone’s suspicions. But Hernandez had a gift not just for aggressively following up obvious clues but for seeing beneath the surface of even pristine-looking manufacturing plants. He’d proven this during his inspection at the Sherman Pharmaceuticals plant in Abita Springs, Louisiana, which made eye lubricants for contact lenses and prescription eye solutions. In 1994, he’d arrived, with two trainees, seven months after the plant had emerged unblemished from an inspection.

His domain was plant and grounds. So as usual, Hernandez began with the grounds, working his way from the outside in. He wandered into the woods ringing the plant. In the distance he noticed a smoldering pile, as though for a barbecue. He directed one of the trainees to find a stick and poke around in the embers. They uncovered a pile of charred medicine that the company was burning. But why? The investigators were able to spot lot numbers on the partially burned containers. The medicine, as it turned out, had not yet expired. “You’re not going to destroy product that is actually sound, so what happened to the rest?” Hernandez wondered. As it turned out, the company was burning medicine that had been returned because of contamination. Instead of investigating the cause and reporting it to the FDA, as required, the company chose to try to destroy the evidence. Hernandez detailed his findings in an inspection form called a 483.

The FDA’s investigators codify their findings in three ways: No Action Indicated (NAI) means that the plant passes muster; Voluntary Action Indicated (VAI) means that the plant is expected to correct deficiencies; and Official Action Indicated (OAI), the most serious designation, means that the plant has committed major violations and must take corrective actions or face penalties. Under Hernandez’s watchful eye, both K&K and Sherman Pharmaceuticals received an OAI, putting them at risk of even greater sanctions.

In 1995, the FDA imposed on Sherman Pharmaceuticals its most stringent penalty, a so-called Application Integrity Policy (AIP)—one of only about a dozen such restrictions imposed by the FDA. This placed the plant under strict monitoring and required it to prove that it was not committing fraud. Sherman Pharmaceuticals went out of business shortly afterward. Hernandez never felt any sympathy toward the company—or any other, for that matter. It was not his job to take things lightly or to look the other way.

The Food and Drug Administration serves one of the most important functions of any government agency. Its job is to safeguard public health by ensuring that our food, drugs, medical devices, pet food, and veterinary supplies are safe for consumption and use. In doing this, the FDA regulates about one-fifth of the U.S. economy—essentially, most of the products Americans are exposed to and consume. It operates from a sprawling headquarters in Silver Spring, Maryland, and has a workforce of over 17,000 employees, twenty satellite offices around the country, and seven offices overseas.

Whatever one may think of regulators—heroic public servants or pests with clipboards counting the number of times workers wash their hands—there is no doubt that the FDA is viewed around the world as the gold standard. If you hold its regulators up against those from most other countries, it’s like comparing “the latest model Boeing to an old bicycle,” said a senior health specialist for the World Bank.

Part of the FDA’s vaunted reputation comes from its approach. It does not just regulate with a checklist or scrutinize the final product. Instead, it employs a complex, risk-based system and scrutinizes the manufacturing process. Under FDA standards, if the process is compromised, the product is considered compromised too.

The FDA requires companies to investigate themselves under a review system called Corrective Action and Preventive Action (CAPA). The drug company Merck was famous for doing this and discarding drug batches if it had the slightest concern about their quality. “You have to look to know the truth, and you have to have people who know how to look,” a former FDA investigator explained. “[And] unless agencies start looking, companies don’t look.”

Hernandez’s methods might seem simple: sniffing, looking, poking with a stick. But he was armed with concepts and regulations that had evolved over more than a century, related to both drug and food safety, which developed in sync. Today a manufacturing plant must disclose and investigate quality problems, rather than simply burn bad drugs in the woods. Workers can’t eat dog meat (or anything else, for that matter) as they work on canning crab, since a manufacturing plant must control its environment against contaminants. The concepts of control, transparency, and consistency fall under current good manufacturing practices (cGMP), the elaborate architecture of regulations that govern the processing of food and the manufacturing of medicine.

Such regulations did not exist at the dawn of the twentieth century. The phrase “good manufacturing practices,” now ubiquitous in facilities around the world, made its debut in a 1962 amendment to the U.S. Food, Drug, and Cosmetic Act. For today’s drug manufacturers, cGMP is understood to be the minimum requirements a manufacturer must follow to ensure that each dose of a drug is identical, safe, and effective and contains what its packaging says it does. Those requirements evolved after a centuries-long debate over how to best guarantee the safety of food and drugs.

The medicine men of the Middle Ages were among the first to promote the idea that the quality of a drug depends on how it’s made. In 1025, the Persian philosopher Ibn Sina penned an encyclopedia called the Canon of Medicine in which he laid out seven rules for testing new concoctions. He warned experimenters that changing the condition of a substance—heating honey, say, or storing your St. John’s wort next to rat poison—could change the effect of a treatment.

Medieval rulers recognized the perils of inconsistency and the temptation for food and drug sellers to cheat their customers by replacing edible or healing ingredients with poor substitutes. In the mid-thirteenth century, an English law known as the Assize of Bread prohibited bakers from cutting their products with inedible fillers such as sawdust and hemp. In the sixteenth century, cities throughout Europe began publishing standardized recipes for drugs, known as pharmacopoeias. In 1820, eleven American doctors met in Washington, D.C., to write the first national pharmacopoeia, which, according to its preface, was meant to rid the country of the “evil of irregularity and uncertainty in the preparation of medicine.”

The same year, a German chemist named Frederick Accum published a controversial book with a mouthful of a title: A Treatise on Adulterations of Food, and Culinary Poisons. Exhibiting the Fraudulent Sophistications of Bread, Beer, Wine, Spirituous Liquors, Tea, Coffee, Cream, Confectionery, Vinegar, Mustard, Pepper, Cheese, Olive Oil, Pickles, and Other Articles Employed in the Domestic Economy. And Methods of Detecting Them. Accum railed against factories’ use of preservatives and other additives in packaged foods, such as olive oil laced with lead and beer spiked with opium. Widely read throughout Europe and the United States, Accum’s treatise brought the issue of food safety and the need for oversight to the public’s attention. In the United States, it wasn’t until 1862 that a small government office, the Division of Chemistry, began investigating food adulteration, with a staff housed in the basement of the Department of Agriculture—a fledgling effort that would later morph into the FDA.

In 1883, a square-jawed, meticulous doctor from the Indiana frontier, Harvey Wiley, took over the division. At thirty-seven, Wiley was known as the “Crusading Chemist” for his single-minded pursuit of food safety. Throughout the 1880s and ’90s, he lobbied Congress, without success, to pass a series of anti-adulteration bills. By 1902, his patience worn out, Wiley recruited twelve healthy young men and fed them common food preservatives such as borax, formaldehyde, and salicylic, sulfurous, and benzoic acids. The diners clutched their stomachs and retched in their chairs. The extraordinary experiment became a national sensation. Wiley called it the “hygienic table trials,” while the press named it the “Poison Squad.” Outrage fueled the movement for improved food quality.

Meanwhile, officials at the Laboratory of Hygiene of the Marine Hospital Service (later the National Institutes of Health) grappled with a different public health crisis. In 1901, an epidemic of diphtheria—a sometimes fatal bacterial disease—broke out in St. Louis. The disease was cured by injecting victims with an antitoxin serum, which was produced in the blood of horses. That October, a five-year-old patient who had gotten a shot of the antitoxin began to show strange symptoms: her face and throat contorted in painful spasms, and within weeks she was dead. The antitoxin intended to cure her diphtheria had actually given her tetanus. Officials traced the contamination to a retired milk-wagon horse named Jim, who had come down with tetanus some weeks before.

Though the St. Louis Health Department learned that the horse was sick in early October and consequently shot him, department officials had bled Jim twice before his death—in August and in late September. The August blood was clean, but there wasn’t enough blood to fill all of the vials. The officials topped off the remaining vials with the September batch, but failed to update the labels. As a result, some bottles marked “August” contained September’s tetanus-tainted blood, which killed thirteen children.

In response, Congress passed the Biologics Control Act in 1902, also known as the “Virus-Toxin Law.” It required producers to follow strict labeling standards and to hire scientists to supervise their operations. The law also authorized the Laboratory of Hygiene to regulate the biologics industry through inspections.

By then, journalists had begun exposing troubling practices in the food and drug industries. In 1905, an eleven-part series in Collier’s Weekly, “The Great American Fraud,” shocked Americans by exposing “cough remedies,” “soothing syrups,” and “catarrhal powders” as worthless and deadly. In June 1906, Congress finally passed the legislation that the chemist Harvey Wiley had spent decades lobbying for. The Pure Food and Drug Act, or “Wiley Act,” banned dangerous additives in foods and prohibited manufacturers from making “false or misleading” statements and from selling misbranded and adulterated drugs. Additionally, medicines sold under a name listed in the United States Pharmacopoeia had to meet the published standards of strength, quality, and purity. As impressive as the law was for its time, it was marred by loopholes. It allowed harmful substances such as morphine in products, so long as they were disclosed on the label. And although the law made fraudulent claims a crime, it fell to the government to prove that a salesman intended to deceive his customers. Swindlers easily avoided prosecution by insisting that they believed in their fake remedies.

The FDA formally began in 1930. In 1933, its officials created an exhibition of hazardous food and medical products, which they displayed to Congress and at public events. The collection included an eyelash dye that blinded women, a topical hair remover with rat poison in it that caused paralysis, and a radium-based tonic called Radithor that was said to restore one’s sex drive but in reality caused deadly radium poisoning. The press called the exhibit “The American Chamber of Horrors.”

Several years later, Congress proposed a new food and drug act, but it took another tragedy to nudge the bill into law. In 1937, 107 people, many of them children, died from taking a liquid antibiotic called Elixir Sulfanilamide. They died excruciating deaths. One grieving mother wrote to President Franklin D. Roosevelt of her daughter’s painful end: “We can see her little body tossing to and fro and hear that little voice screaming with pain. It is my plea that you will take steps to prevent such sales of drugs that will take little lives and leave such suffering behind and such a bleak outlook on the future as I have tonight.”

Sulfanilamide effectively treated streptococcal infections. Since its discovery in 1932, doctors had administered the drug in tablets and powders. But in 1937, a chief pharmacist at S. E. Massengill Company came up with a formula for a children’s syrup that called for dissolving the drug in diethylene glycol—a sweet and, as it turned out, deadly poison that would be used decades later as an ingredient in antifreeze. When FDA agents investigated Massengill’s plant, they were amazed to find that “the so-called ‘control’ laboratory merely checked the ‘elixir’ for appearance, flavor, and fragrance,” but not for toxicity. As one FDA agent reported, “Apparently, they just throw drugs together, and if they don’t explode they are placed on sale.” Spooked by the disaster, Congress finally passed the Food, Drug, and Cosmetic Act in 1938, which authorized the secretary of agriculture to approve new drugs before they could be marketed. A company hoping to sell its concoction had to submit an application that described the drug’s ingredients and production process, as well as submit safety studies to convince the secretary that its manufacturing methods, facilities, and controls were adequate.

But what could be deemed “adequate”? That question came into sharp relief between December 1940 and March 1941, when nearly three hundred people fell into comas or died from taking antibiotic sulfathiazole tablets, made by the Winthrop Chemical Company in New York. On its FDA application, Winthrop claimed to have “adequate” controls. But a batch had been contaminated with as much as triple the typical dose of Luminal, a barbiturate antiseizure drug. Patients who swallowed the tainted antibiotics unknowingly overdosed on the barbiturate. In its investigation, the FDA learned that the company assembled the antibiotics and the barbiturates in the same room and often swapped the tableting machines. The company couldn’t account for what came out of its tableting machines—because it had little idea what went into them.

In the wake of the crisis, FDA officials met with an industry consultant who told them that most drug makers in the United States lacked adequate controls, in part because there was no agreement on what a good control system should be. The FDA’s drug chief wrote a memo to his division, arguing that, going forward, “the mere perfunctory statement that adequate controls are employed will not be sufficient.”

But it was the specter of an averted tragedy that had the most profound effect. In 1960, the Cincinnati manufacturer William S. Merrell applied to the FDA to sell a drug called Kevadon, widely known as thalidomide. Introduced in Germany in 1956, thalidomide was being marketed to pregnant women throughout Europe, Canada, and South America as a sleeping pill and to treat morning sickness. In the United States, the Merrell company had begun to distribute samples to doctors, but the drug was not yet commercially available. Frances Kelsey, an FDA medical officer, was assigned to review the application. She could have rubber-stamped it, but the company’s limited safety studies gave her pause. She questioned company officials about how the drug worked in the body, but they refused to answer. Instead, they complained to her superiors and pressured her to approve the drug. Kelsey refused.

By the winter of 1961, it was clear that she’d made the right decision. A growing number of physicians abroad were linking thalidomide to babies born with severely deformed limbs, such as shrunken legs and flipper-like arms. More than ten thousand mothers who had taken the drug gave birth to disabled children. Kelsey was hailed as a hero. Because of her refusal to capitulate, American patients were spared the worst, with only seventeen cases of birth defects linked to the samples. The near-miss once again galvanized Congress, and in 1962 it passed an update to the Food, Drug, and Cosmetic Act known as the Kefauver-Harris Amendment, which required applicants to prove that their drugs were not only safe but effective, disclose potential side effects on the package, and report adverse events to the FDA. Most significant, the amendment redefined what it meant for a drug to be adulterated. Products made in plants where the process didn’t conform to “current good manufacturing practices” were deemed tainted.

That was a seismic shift. The manufacturing process became the key to quality, as it is today. This new definition gave the FDA the power to enforce good manufacturing standards. But the question remained: what should they be?

In late 1962, a group of FDA investigators met to hash out a first draft of these practices. The new regulations, published the following year, established new categories for standards in the “processing, packing, and holding of drugs.” Each “critical step” of the manufacturing process had to be “performed by a competent and responsible individual.” Workers were required to keep a detailed “batch-production record” for each drug lot, which included a copy of the master formula and documentation of each manufacturing step. As the new regulations rolled out, manufacturers struggled to comply. Drug recalls rose.

In 1966, the agency undertook a major survey of the most clinically important and popular drugs on the U.S. market. Of 4,600 samples tested, 8 percent were either more or less potent than they were supposed to be. The FDA decided that the best way to get manufacturers up to speed was through rigorous inspections. In 1968, the agency launched an intensive three-year blitz. Investigators showed up unannounced and camped out at scores of companies, sometimes for as long as a year. They badgered. They educated. They collaborated. They bullied. Any manufacturer that couldn’t—or wouldn’t—comply with investigators’ demands was put out of business. The effort effectively launched the FDA’s modern-day inspection program.

In the decades-long journey to improve quality, the pivotal shift was from product to process. No longer could drug makers simply wait until after a drug was made to test for passing results, a hallmark of bad manufacturing. Perhaps at the end, you could test a few pills from a batch, but a million pills? That was impossible. Quality had to be built into the process, by documenting and testing each result along the way.

This practice, known as “process validation,” gained currency in the late 1980s. The data from each manufacturing step became the essential road map. The acronym ALCOA stipulated that data had to be “attributable, legible, contemporaneously recorded, original or a true copy, and accurate.”

As Kevin Kolar, formerly Mylan’s vice president of technical support, explained, a finished drug cannot be separated from the data created in the process of making it. “One without the other is not a product. . . . If it’s not documented, it didn’t happen. Meticulous attention to detail. That’s your business, your entire business.”

As the years passed, it became evident that Jose Hernandez, in Louisiana, was destined for something more complex than dog-meat detection. By 2000, drug manufacturing had begun to move offshore. Over the next eight years, the number of drug products made overseas for the U.S. market doubled. By 2005, for the first time, the number of foreign manufacturing sites regulated by the FDA exceeded the number in the United States.

The FDA was growing desperate for investigators willing to travel overseas. Hernandez volunteered and began doing inspections in Japan, Austria, Germany, India, and China, in quick succession. By 2003, he had joined the overseas inspectorate, a small cadre based in the United States but dedicated to visiting foreign plants. The work was exhausting and difficult. He kept his green government-issued notebook by his bedside, jotting down observations even as they came to him in half-sleep. He had little respect for his supervisors, whom he found to be more concerned with office politics than public health. His sense of commitment to the American consumer kept him energized.

The FDA knew that the best way to keep drug plants compliant was for investigators to show up unannounced, when they were least expected and wanted. So long as a drug plant remained fearful of a surprise visit, it would be more likely to follow good manufacturing practices. But the dynamic of the inspections in the international realm was completely different. No longer could Hernandez simply walk in, show his badge, and conduct an inspection. Instead, the FDA notified foreign plants of upcoming inspections months in advance. The plants then issued a formal invitation, which the FDA’s investigators used to secure travel visas. This system of advance notice was not legally required, nor was it the best way to run an inspection. But as the FDA scrambled to deal with a growing backlog of foreign inspections, advance notice became the jury-rigged solution to a host of challenges. It helped to ensure that the right employees from the plant were available during an inspection and served as a diplomatic gesture to foreign governments. Under this system, however, the foreign inspections were not a candid assessment of a plant’s true condition, but more of a staged event.

The plants coordinated the investigators’ trips and arranged their local travel. “The element of surprise is out the door,” said Hernandez. This made him even more reliant on his instincts and on everything he’d learned over the years. As he found himself in remote countries with languages he did not understand, he kept returning to the idea of “plant and grounds.” It became a kind of mantra. To him, it meant a “broad scope of thinking.”

In this way, he came to believe that—despite the different languages, the different cultures, the different time zones—quality was a language all its own. And he was certainly fluent in that. Facilities either had controls or they didn’t, and he could look, sniff, or poke to figure it out. Often he couldn’t read what was written in the records themselves, so he studied the way the records looked instead. Were they smudge-free or were there fingerprints on the copier? Was one equivalent batch of records smaller than the other? Were records creased or frayed, and if not, why not? In this way, he discovered things that some of his colleagues missed. In one instance, an overseas drug company printed records on a heavy fiber paper, and he discovered that the director of quality was having staff alter data by scraping words off the page with a sharp blade. In another instance, at a Chinese factory, managers asked him to wash his hands with soap and don a double set of gloves before entering the sterile manufacturing area, as everyone there was required to do. I am watching the show, he thought as they went inside. He then asked the manager, “If everyone has to put on double gloves, how come there are fingerprints on the doorknob inside?”

These clues were all pieces in the massive jigsaw puzzle of a manufacturing plant. But the puzzle now stretched across continents.

Globalization cast a shadow over a process that required transparency, making distance the biggest challenge to everything the FDA had learned about safety over the last 170 years. Dr. Patrick Lukulay, former vice president of global health impact programs at the United States Pharmacopeia (USP), explained: “The issue of globalization, that’s the issue of countries where you are not [there]. . . . You almost have to be on your toes, doing unannounced inspections, listening to whistleblowers. Regulation,” he asserted, “is a cat-and-mouse business.”