2

I’ll dispense with any lingering suspense over my well-being (thanks for your concern) and tell you now: The cheese did not make me sick. Nor did the venison, which Edward and I butchered on his kitchen table the next day, as Parliament-Funkadelic thumped from a pair of high-end stereo speakers he’d plucked from the trash a few years prior. They buzzed only a little. The strawberries turned out to be moldy beyond salvation (hey, a guy’s gotta have some standards); the salad dressing, I left to Edward. Last time I saw him, he exhibited his usual good cheer, so I’m assuming the dressing went down just fine and, perhaps more important, stayed there.

Was I a bit anxious when, a few days later, I placed a chèvre-crusted venison loin roast atop our dinner table before the expectant forks of my incredibly trusting family? Why yes, I was, although I’d already sampled each ingredient myself, emboldened by the recent acquisition of a health insurance policy with a modest deductible and the apparent good health of my dumpstering mentor. Following each sampling, I experienced a period that is most politely described as “heightened intestinal awareness,” during which I felt an almost supersensory connection to my digestive tract. This is not an awareness I intend to cultivate, but I’d argue that a certain degree of anxiety over consuming garbage is entirely appropriate. Heck, it could probably be counted as an evolutionary advantage, like opposable thumbs or the urge to procreate.

But lately, anxiety over whether our food is fit to eat hasn’t been confined to neophyte dumpster divers. The evidence of this is not difficult to find: It is splashed across the front pages of our newspapers, is wrapped into prime-time television specials, and in particular thrives in every nook and cranny of the Internet. Google “food safety” and you’ll get 150 million hits; that’s roughly three and a half times what you’ll get for “terrorism” (43 million), and nearly eighteen times what you’ll get if you search “Mick Jagger” (8.5 million).

This type of collective anxiety is too often based in anecdotes and assumptions, but in the case of foodborne illness, the numbers seem to justify our national unease. According to the Centers for Disease Control and Prevention (CDC), more than 200,000 Americans are sickened by food every day, and each year 325,000 of us will be hospitalized because we ate contaminated food. Most tragically, over the next 52 weeks, 5,194 of us will die from a foodborne condition. That means more Americans die every year from eating contaminated food than have been killed in Iraq since the outset of the war. The victims will be disproportionately young, because the young (and in particular, children under age 4) have immature immune systems that are poorly equipped to defend against invading bacteria.

There is no shortage of acute afflictions that can be caught by eating; the CDC counts more than 200 known diseases that are readily transmitted via food. Having become accustomed to a steady diet of headlines recounting the dangers of foodborne pathogenic bacteria, I’d long assumed that the vast majority of foodborne illness is bacterial in nature. But according to the CDC, which has built its assumptions almost entirely on a 1999 study authored by Paul Mead, a medical epidemiologist with the Foodborne and Diarrheal Diseases Branch of the CDC (wow, sounds like an uplifting job, eh?), bacterial agents account for only 30.2 percent of total foodborne illnesses, while viruses account for 67.2 percent, and parasites the remaining 2.6 percent. Still, when it comes to death by foodborne illness, bacteria reign supreme, accounting for 71.7 percent of all mortalities by food contamination (viruses come second, with 21.2 percent, and parasites third, with 7.1 percent).

Now, it’s important to note that the Mead study is not without significant faults. Indeed, the oft-repeated assertion that foodborne diseases cause 76 million illnesses, 325,000 hospitalizations, and 5,194 deaths in the United States annually is no better than an educated guess; the actual tally may be worse. Most damning is the line near the end of the study that reads: “Unknown agents account for approximately 81 percent of food-borne illnesses and hospitalizations and 64 percent of deaths.” In other words, a significant majority of assumed illnesses, hospitalizations, and deaths are just that: assumed. Their numbers are merely extrapolated from estimates of all deaths by gastroenteritis of unknown cause. Indeed, the extrapolation accounts for 3,400 of the total study estimate of 5,194 deaths annually.

Confused yet? If not, wait ‘til you hear this: The category of estimated deaths from gastroenteritis of unknown cause is assumed to include all deaths from unknown foodborne agents. But, of course, some foodborne agents do not cause gastroenteritis. And at least some deaths attributed to gastroenteritis of unknown cause were caused by known agents that simply weren’t detected. Misdiagnosis, if you will.

What does this all mean regarding the widespread assumption of 76 million illnesses, 325,000 hospitalizations, and 5,194 deaths annually by foodborne agents? Are we talking even more suffering, or less? Actually, it’s probably both, because Mead’s study omits potential deaths from unknown foodborne agents that do not cause gastroenteritis, while it almost certainly overstates the number of deaths from unknown foodborne agents that do cause gastroenteritis. (I would be remiss not to acknowledge the work of Paul D. Frenzen, a U.S. Department of Agriculture demographer who came to these conclusions in 2004 after careful review of the Mead study.)

In any event, while we can debate endlessly how many people are actually getting sick and dying from foodborne illness, one fact is inescapable: The outbreaks are getting bigger. Reading outbreak reports spanning only a few decades is almost enough to make one nostalgic for the not-so-long-ago days when victims could be counted in the dozens or, sometimes, single digits. In a major 1963 outbreak, two women died from botulism in canned tuna; in 1974, salmonella in unpasteurized cider sickened 200 in New Jersey. Even as recently as 1996, the big foodborne illness news was an outbreak of E. coli linked to apple juice; one toddler died and another 60 or so people were sickened.

Compare these tragic but relatively isolated outbreaks to the 21st-century outbreaks that have dominated headlines in recent years.

I could go on, but you get the picture. Over the past few decades, foodborne illness has shifted from being a fairly regionalized threat with the potential to sicken a handful of people in a single outbreak to a national hazard capable of felling thousands, if not tens of thousands, of consumers from a single point of contamination.

Now that I’ve got you good and scared, please allow me to make you downright terrified. Concurrent with the vastly increasing scale and scope of our nation’s foodborne-illness outbreaks, we have seen the rapid emergence of a number of particularly virulent pathogens.

I’m speaking primarily of Escherichia coli O157:H7, Listeria monocytogenes, Campylobacter jejuni, the parasite Cyclospora cayetanensis, and the increasing threat from the dirty half-dozen strains of non-O157 pathogenic E. coli, which have begun showing up—mostly in ground beef—with growing frequency.

It should be said that it’s entirely possible that these strains have existed for years, if not centuries, and that the reason we’re starting to find them isn’t so much because they’re new but because we’ve never before looked. But it is also possible that at least some of these are new threats, the inevitable result of bacterial evolution. Because that is what bacteria do: They change. They shift. They evolve. And, like humans, they do this primarily to increase the likelihood of their survival. All of which can only make one wonder: What’s going to happen over the next 30 years?

Of course, the future has a sticky tendency to be unknowable, leaving us with the inexact sciences of extrapolation, prediction, and outright guesswork when it comes to considering how the issue of food safety will evolve. But like most views forward, the view of food safety’s future can be made clearer by something we can know with certainty: its history.

In the case of food safety, that’s a relatively recent history, because while it is safe to say that there has always been foodborne illness, it has also always been difficult to quantify. The gathering and dissemination of foodborne-illness data are relatively new phenomena, sparked in large part by an acute period of public unease following the 1906 release of Upton Sinclair’s seminal book The Jungle, which famously depicted the deplorable conditions in Chicago’s stockyards and meatpacking facilities.

As one of America’s original muckrakers, Sinclair had intended his tale to serve as an ode to the plight of these laborers, who worked 12-hour days and were frequently injured in an antiquated version of what is still the most dangerous factory job in our nation. But what really caught the public’s attention were the book’s passages regarding tuberculosis in beef and tales of men tumbling into industrial grinders and being packaged along with four-legged flesh: “As for the other men, who worked in tank-rooms full of steam, and in some of which there were open vats near the level of the floor, their peculiar trouble was that they fell into the vats; and when they were fished out, there was never enough of them left to be worth exhibiting—sometimes they would be overlooked for days, till all but the bones of them had gone out to the world as Durham’s Pure Leaf Lard!” The tale was fiction, but like most good fiction, it held an element of truth, and the public seemed to understand this. Still, Sinclair was famously but perhaps naively surprised by the reaction: “I aimed at the public’s heart, and by accident I hit it in the stomach,” he said at the time.

Accidental or not, it was an important hit, in no small part because it reached the height of the American political landscape. President Theodore Roosevelt was reportedly physically sickened after reading a copy of the book; on his recovery, he immediately called on Congress to pass the Pure Food and Drugs Act of 1906 and its companion Meat Inspection Act, which together laid the foundation for the Food and Drug Administration (FDA) and instituted the United States’ first comprehensive federal inspection standards for meat. In truth, the FDA had already existed as a science-based entity for nearly a half century; it was known as the Division of Chemistry and, at its outset, consisted of a single chemist operating under the purview of the US Department of Agriculture.

Prior to 1906, the Division of Chemistry had no regulatory powers, but in 1883 it came to be headed by Harvey Washington Wiley, a chemist and an MD possessing an almost obsessive fascination with the health effects of common food additives. Wiley applied this fascination to research into the adulteration and misbranding of food and drugs on the American market, and in 1887, he began publishing a 10-part series called Foods and Food Adulterants. This was arguably the beginning of America’s public awareness of food safety. Wiley received a significant boost in 1902, when Congress appropriated $5,000 for research into the effects of preservatives on human volunteers. The “poison squad” studies, as they became known, drew widespread attention to the issue and to the fact that the food and drug industries operated with complete autonomy.

The passage of the Pure Food and Drugs Act was the beginning of widespread food-based regulation in America, although it’s interesting to note that the focus wasn’t so much on pathogenic bacteria as it was on adulterants that were added intentionally. One early upshot was a legal fracas known as United States v. Forty Barrels and Twenty Kegs of Coca-Cola, in which the government tried to rein in the drink over its excessive caffeine content, describing it as a beverage that produced serious mental and motor deficits. The initial ruling went in Coca-Cola’s favor, but the publicity was enough to convince the manufacturer that perhaps a little less caffeine was a good idea after all.

Still, these were the early days of food-based regulation and the study of foodborne illness, and there wasn’t a huge quantity of data floating around. But it’s hard to imagine that just because we didn’t have the data, we didn’t have bacteria and illness. After all, the refrigerator didn’t come on the scene until the 1920s, and it was many years before it rose to ubiquity, in no small part because, as a nascent technology, electric refrigeration commanded quite the premium: One early commercial model, a 9-cubic-foot box that boasted a water-cooled compressor and wooden case, sold in 1922 for $714. To put that in context, consider that the very same year, a Model T Ford cost approximately $450. These days, the average price of a new car is $28,000, while a new fridge can be had for less than 500 bucks. This is bad if you want to drive somewhere, but very, very good if you like cold beer.

There’s little question that life before refrigeration held certain risks pertaining to the growth and dissemination of foodborne illness, but since the numbers don’t exist, we’ll have to be content with merely visualizing the woes wrought by rancid meats and week-old milk. Actually, given that in 1920 only 20 percent of American homes featured a flush toilet, perhaps it’s better that we don’t.

Beginning in 1925, just as electric refrigerators were redefining food storage (for the affluent, at least), the US Public Health Service began publishing summaries of outbreaks of gastrointestinal illness attributed to milk; in 1938, the PHS added summaries of outbreaks from all foods to its misery list. But the level of detail and thoroughness of reporting were skimpy, and the annual summaries were primarily reactive. The most important result of these reports was the creation of the Standard Milk Ordinance (now known as the Pasteurized Milk Ordinance, or PMO), a voluntary regulatory model for the production, processing, packaging, and sale of milk and its ancillary products. Forty-six US states have adopted the PMO; California, Pennsylvania, New York, and Maryland have not, although they have passed laws that are similar in their stipulations and oversight. The PMO maintains that “only Grade A pasteurized, ultra-pasteurized, or aseptically processed milk and milk products shall be sold to the final consumer, to restaurants, soda fountains, grocery stores, or similar establishments.” (Soda fountains? Do those even exist anymore?)

In any event, the monitoring and dissemination of data relating to foodborne illness continued pretty much in this vein for a number of decades, though it slowly became a more collaborative venture, with states and disparate governmental agencies joining forces to deliver a more complete and nuanced accounting. But there was little sense of urgency among the citizenry; for the most part, foodborne illness still felt like something that happened to other people, and as such, there was little pressure exerted on our political leaders and thus on our nation’s regulatory bodies.

Part of this was due to the relatively isolated nature of foodborne-illness outbreaks in the still-developing interstate food system; in the first half of the 20th century, food production and distribution remained relatively localized affairs. Indeed, arguably the greatest foodborne-illness scare of this era was botulism; between 1899 and 1969, there were 1,696 cases attributed to 659 botulism outbreaks, 60 percent of which were transmitted by home-canned vegetables (botulism spores can survive hours of boiling at 212°F, making them impervious to boiling-water-bath canning; killing them requires the 240°F-plus temperatures of a pressure canner). This works out to about 2.6 cases per outbreak, which seems almost laughable when compared to recent outbreaks.

Which is not to say that foodborne botulism is something to be taken lightly. Left untreated, its symptoms generally progress from relatively mild afflictions like double vision and drooping eyelids (heck, this happens to me anytime I drink more than two beers at a sitting), to weakness in the limbs, then to outright respiratory failure, resulting in death. Botulism is still around, to the tune of about 150 US cases annually, but only a handful of these are food related (you can also contract botulism through an open wound or by ingesting spores rather than preformed toxin, which is how most infant cases arise). And while the bacterium is still likely to kill you if left to its own devices, advances in treatment have dropped the mortality rate from as high as 90 percent to about 4 percent.

In 2007, eight people contracted botulism poisoning from canned foods produced by Castleberry’s Food Company. It was the first instance of botulism poisoning from commercially canned foods in more than three decades. All of the victims recovered.

In part because of the isolated nature of botulism outbreaks, in part because of the regional nature of the era’s media coverage of all foodborne-illness outbreaks (imagine: a world before Twitter and Facebook), and in part because the outbreaks never sickened that many people concurrently, getting acutely, seriously sick from food still felt like something that happened to other people.

It’s not hard to define the precise moment when the vast majority of Americans stopped viewing foodborne illness as someone else’s problem: January 13, 1993. That’s the day the Washington Department of Health (WDOH) was notified that an unusually high number of children had been admitted to Seattle-area hospitals with confirmed cases of hemolytic uremic syndrome (HUS). HUS is secondary to E. coli infection; it is often (but not always) an indicator of poisoning by a strain known as E. coli O157:H7. Contracting HUS is like winning the bad-luck lottery of foodborne illness; it carries the potential to completely shut down the kidneys. While 90 percent of those affected survive the condition’s acute phase, many are stricken with lifelong complications that can include high blood pressure, blindness, paralysis, and end-stage renal failure. The other 10 percent—children and the elderly, mostly—do not survive the first week or two. Death by HUS is a particularly unpleasant affair, being preceded by a period of intense vomiting, bloody diarrhea, and fluid accumulation in the tissues. Because patients with HUS cannot rid themselves of excess fluid and waste, they are not allowed to drink anything, even water, and suffer terrible thirst as their organs slowly shut down. I don’t know about you, but I can think of about 100 ways I’d rather die.

Apparently, the staff at WDOH felt the same and took this report very seriously, launching a full-blown epidemiologic investigation that quickly turned up one striking similarity: The patients had all visited Jack in the Box restaurants in the days prior to becoming ill. And what do people go to Jack in the Box restaurants for? I’ll give you a hint: It ain’t tofu smoothies and bean sprouts. No sir, they go for meat. Ground beef. Hamburger. In fact, each of the patients suffering from HUS had ordered a hamburger during their sojourn to Jack in the Box.

Ultimately, the outbreak strain of E. coli O157:H7 was traced to 11 lots of hamburger patties produced on November 29 and 30, 1992, by a California beef processing outfit called Vons Companies. Jack in the Box issued a recall, but the horse (or in this case, cow) had most definitely left the stable: The beef had already been distributed to franchises in California, Idaho, Nevada, and Washington, and only 20 percent of the implicated meat was recoverable. The rest had already been sold and, presumably, consumed. By the end of February 1993, 171 people who’d eaten at the 73 affected Jack in the Box restaurants had been hospitalized. Four had died; all were children. The origin of the offending E. coli O157:H7 was never definitively found.

The Jack in the Box incident was a major turning point for the gathering of foodborne-illness data in the United States. Shortly thereafter, the CDC launched two programs designed to bolster its ability to track and understand outbreaks. Being a government agency, the CDC felt compelled to saddle these new programs with odd, futuristic-sounding titles that do little to explain their agendas: FoodNet and PulseNet (note the lack of proper spacing between words and never again doubt the CDC’s hipness). I won’t bore you with the gritty details; suffice it to say that FoodNet was designed to bolster the agency’s grasp of the epidemiology of foodborne illness, while PulseNet focuses on the DNA fingerprinting of disease-causing bacteria. Shortly after the advent of FoodNet, reported outbreaks suddenly doubled; this is almost certainly a reflection of the program’s relative muscularity rather than a true increase in contamination.

The advent of PulseNet marked another crucial step forward in the monitoring of foodborne illness. The program is based on a technology known as pulsed-field gel electrophoresis (let’s just call it PFGE from here on out, shall we?), a method of DNA analysis that allows epidemiologists to “fingerprint” a specific outbreak’s pathogen. With PFGE, researchers can know whether a rash of E. coli O157:H7 cases in, say, Vermont is connected to outbreaks in Kansas and Florida. PFGE works because bacteria replicate by splitting in two, producing genetically identical copies: isolates descended from a common contaminated source share essentially the same DNA fingerprint, while unrelated strains do not. These identifying markers and the technology to interpret them don’t do much for anyone who’s already eaten the wrong hamburger or spinach salad, but they do provide a degree of traceability—and thus, accountability—long lacking in our complex food-supply chain.

Today, the safety of our food, insofar as it relates to pathogens, is overseen primarily by a trio of governmental agencies: the United States Department of Agriculture, the Food and Drug Administration, and the CDC. Each plays a role (or a few roles), often through internal departments that, despite being fractional components of a larger organization, are themselves large and bureaucratically unwieldy.

That the US government has committed so many resources to combat foodborne illness should come as some relief. I say “should” because the truth is, our food system is so big and so thoroughly in the hands of for-profit, limited-liability corporations that what appears to be a robust regulatory environment is actually severely lacking in the ability and, some would argue, even the will to enforce existing law. A perfect example is the USDA meat inspection program called the Hazard Analysis and Critical Control Point (HACCP) system, which was launched in 1996 with the intent of modernizing meat inspection and the testing protocol for pathogenic bacteria.

This is all well and good, but the funny thing is, HACCP actually allows many inspection processes to be conducted by the meat companies themselves. Now, I don’t know about you, but this has a certain “fox guarding the henhouse” feel that doesn’t do a heck of a lot for my confidence. Sure, one could argue that for-profit food producers have a built-in motivation to keep their product safe for human consumption, if not because they actually care about the well-being of their customers (crazy talk, I know), then because they’d prefer to avoid the lawsuits and related bad press that outbreaks can impart.

But here’s the thing: The scale, scope, and complexity of the modern-day food system make it very difficult to prove accountability beyond a reasonable doubt, and the producers know this. Even when they are held accountable, the repercussions are often not dire enough to outweigh the added cost of keeping the bacteria out of their products in the first place. In other words, foodborne illness becomes an equation: Spend this much keeping the bacteria out of the food, or spend this much cleaning up the fallout from any potential outbreaks.

Sometimes, it seems as if our regulatory agencies take the same view. Indeed, reports have shown that under HACCP, USDA employees have been discouraged from halting production lines, even when they strongly suspect contamination. A leaked USDA memo made it clear that inspectors would be held responsible for stopping production unless there was absolute, irrefutable evidence of contamination. That’s a tall order, given the tremendous speed and commotion inherent to the modern slaughterhouse. And speaking strictly as someone who would really rather not spend the rest of his days on dialysis, waiting for a kidney transplant, I’d much prefer that the production line be stopped even when there’s something less than irrefutable evidence of contamination.

Let’s say a USDA or an FDA inspector does find contamination, no ifs, ands, or buts. Given the ease with which these bacteria spread in all the commingling of animal bits inside these facilities, surely he or she can order the shutdown of the entire operation, right? Um, no, not quite. Oh, you mean she can demand that only the affected production line be halted? Well, not exactly. But not to fear, it’s not as if she’s powerless. In fact, upon discovering irrefutable evidence of contamination by pathogenic bacteria in meat headed for the national food supply, the inspector is directed to consult with the company and advise it on how best to remedy the situation. Comforted now?

Meanwhile, the product still flows, and the bacteria still spread along the food industry’s complex and far-reaching supply chains. If you think this all sounds too ridiculous, too utterly, impossibly irresponsible to be true, consider that in 2002, Public Citizen and the Government Accountability Project discovered that even after repeatedly testing positive for salmonella contamination, several (not one, but several) ground beef processing facilities were allowed to continue operations for several (again, not one, but several) months before action was taken to clean things up.

In any event, through the USDA’s Food Safety and Inspection Service (FSIS), the agency oversees all domestic and imported meat, poultry, and processed egg products (that is, eggs not sold in the shell). In the far corner, responsible for the safety of our domestic and imported fruits, vegetables, seafood, shell eggs, milk and other dairy products (cheese, etc.), and grains, as well as processed foods, is the FDA. If you’re worried about the FDA being overworked in its tireless quest to keep you safe from pathogenic bacteria, well, don’t be. In fact, the FDA inspects a given facility only once every 7 years. But surely they’re keeping a careful eye on the food arriving in our harbors, from places such as China, the country from which we now import 60 percent of our apple juice and the country that recently sickened 300,000 of its own infants with baby formula contaminated by melamine. Surely, they’re keeping tabs on that stuff, right? Oh, they are, they are: a whole 1 percent of it. My suggestion: Just be sure that any imported food you eat is part of that 1 percent that gets inspected by the FDA. Oh, wait a second: That’s totally impossible. Might as well cross your fingers and keep your health insurance up to date.

To understand why safety-related oversight of our nation’s food system can sometimes seem as if it favors the producers rather than the public, it’s helpful to examine who, precisely, is tasked with running these agencies. Most recently, consider the July 2009 appointment of Michael Taylor as the Obama administration’s food-safety czar. As such, Taylor will be in charge of implementing any food-safety legislation that makes it through Congress. This makes him the single most powerful individual in the realm of food safety cashing US government paychecks today.

Which in and of itself is no big deal; after all, someone needs to be in charge, and it could be argued that the lack of a figurehead at the top of the food-safety hierarchy has been a detriment to us all. It could also be argued that the appointment of Taylor is indicative of our government’s priorities as they pertain to food safety. To wit: Taylor’s professional career has been conducted almost entirely through a revolving door between the FDA and Monsanto. For the latter, he served as an attorney with the District of Columbia law firm King & Spalding before becoming policy chief at the FDA, at a time when the agency was constructing its policy around genetically modified organisms (GMOs). Taylor also oversaw the approval of Monsanto’s genetically engineered bovine growth hormone (rBGH/rBST), which was banned in Australia, Canada, Europe, Japan, and New Zealand but has become a cornerstone of the US milk industry, with no labeling required.

Now, the subject of GMOs could fill a book (actually, it already has), but suffice it to say that despite FDA scientists’ contending that “the processes of genetic engineering and traditional breeding are different, and according to the technical experts in the agency, they lead to different risks,” the agency approved genetically modified (GM) foods without any required safety studies. In short, any decision making regarding the safety of these products was left to the corporations producing them. It is important to note that some studies have reported associations between GM feed and afflictions such as infertility, asthma, allergies, and dysfunctional insulin regulation (to name but a few) in animals, though these findings remain contested.

In any case, Taylor didn’t hang around the FDA for long; in 1994, he slid over to the USDA, where he served as administrator of FSIS and acting undersecretary for food safety. Shortly after, Monsanto came knocking again, and Taylor moved to the company, where he became vice president for public policy. Which is really just a fancy way of saying he was a lobbyist.

It would be one thing if Taylor’s career path were an anomaly, but it is not. At both the FDA and the USDA, high-level staff members move seamlessly between agricultural corporations, industry trade groups, and the regulatory agencies charged with keeping our nation’s food supply safe. Proponents of the revolving door will say that these people have the experience necessary to navigate the complexities of our nation’s food system and the related issues. Maybe so, maybe so. But even the neophyte cynic can’t help but wonder whether this cozy relationship threatens the impartiality needed to crack down on an industry run amok, and whether a culture of corporate favoritism might arise in such an environment.

While the USDA/FSIS and the FDA go about their business of sniffing out pathogenic bacteria before and after they have entered the supply chain, the CDC tracks the spread and, lately, the emergence of the bacteria themselves. Remember FoodNet and PulseNet? The CDC doesn’t have anything to do with the food itself, but once an outbreak happens, the agency begins the difficult—and often fruitless—task of tracing it to its origin. Think of it this way: If the FSIS and the FDA caught every nasty bug before it slipped into the fast-moving river of our food system, there wouldn’t be much need to get the CDC involved at all. Of course, as we know all too well, the FSIS and the FDA aren’t exactly batting a thousand.

And so we are faced with a conundrum: Despite PFGE, despite refrigeration and countless other food-related technologies, despite the regulatory oversight of three of our government’s largest agencies, despite our vastly improved general understanding of foodborne illness and our evolving ability to monitor and trace it, the issue of food safety has never felt more urgent and real. The outbreaks are getting bigger and more deadly. The bacteria are evolving, and new strains are emerging. In short, these threats to our public health seem perfectly capable of shrugging off our every attempt to thwart their spread. Which makes it rather hard not to wonder: What are we doing wrong? How can we make our food safe?

To answer this question, I believe we need to look beyond the immediate threat of pathogenic bacteria to the larger trends that have accompanied the rise of foodborne illness in the mainstream of public consciousness. Because at the same time this has happened, the techniques by which we grow, process, and distribute food in this country have evolved, too, in ways that would have been unimaginable only a few decades ago. Consider that we now eat hamburgers made from the fleshy bits of hundreds of cows and adulterated with an ammoniated slurry intended to protect us from the real possibility that any one of those cows—which may have come from different continents—was contaminated with E. coli O157:H7. We now have a single peanut butter facility so large that its reach extends to 46 states and the salmonella it disseminates can infect more than 22,000 people. We now are fighting pathogens that only a few years ago were practically unheard of, pathogens that many people believe have arisen not in spite of our modernized food system but because of it. At the same time, new research is showing that we now are physically more vulnerable to foodborne illness than at any time in human history, and that this vulnerability is only exacerbated by attempts to scrub the food system of pathogenic bacteria. Attempts that, if the current profit-driven trend toward ever more consolidation holds, will surely fail.

If there is a lesson to be drawn from our collective failure to make our food “safer,” perhaps it is this: Despite our impressive, ever-improving technologies and increasing vigilance, maybe we are focusing on the wrong things. Perhaps foodborne illness isn’t the disease; maybe it’s a symptom of a larger, more systemic malaise. And until we begin to address that malaise, we won’t know what truly safe food really is.