6


DATA, INTUITION, AND OTHER WEAPONS OF WAR

In the emergency room, we aren’t focused on the many things we don’t know about the flu virus. We are simply too busy handling the cases of flu lying on the gurneys in front of us. The questions that an ER doctor focuses on are these: Does the patient coughing, aching, and sneezing in front of me have influenza? Do I need to treat with medications? Do I need to admit the patient to the hospital?

Most ER doctors, including myself, don’t usually bother to get a rapid flu test for a patient but rely instead on the patient’s story and symptoms. If a patient has chills and a runny nose, if she is tired and has a fever and night sweats, if she aches as if hit by a truck, and if it is late fall and her roommate had the same symptoms just a week ago, then she most likely has a viral illness that is influenza, or something very similar. And that’s a good enough diagnosis, because most doctors remember what they were taught in medical school: if the outcome of a laboratory test will make no difference as to how you will treat the patient, don’t bother to order it.

Nearly all the influenza patients I see in the emergency room are well enough to be sent home, with advice to take some over-the-counter medicines to control the fever and body aches, rest, and drink plenty of fluids. This treatment plan will not change in the slightest if I have a laboratory test confirming my clinical diagnosis. And if it isn’t actually the influenza virus at work but one of a dozen others that produce an influenza-like illness (ILI), the result will be the same. The patient is still going to be discharged, is still going to take Tylenol or Motrin for the fevers and body aches, and is still going to rest at home and drink plenty of fluids. That is why I almost never order a rapid influenza test: its result will have no bearing on how I treat the patient. Skipping the test is often good clinical practice.

Even if your doctor gets a flu swab, that tells her only about you. It doesn’t tell the local health department about the number of influenza cases that are out there. And this is important information because it allows for community-wide planning and, if needed, for special measures to be taken, like closing schools. For that to happen, the data from each patient needs to be reported, and this is its own major challenge. Data collection relies on the goodwill of hospitals and their staff, who—in addition to their already heavy workload—must review their day’s work and fill out forms about the number of flu cases they’ve treated. Will the reporting always happen, and will it happen both promptly and completely? There will also be double counting: if a patient with influenza is seen in the emergency room and then admitted to the hospital, should she be counted among the ER statistics or the inpatient statistics (or both)? Also, who should fill out the forms? Nurses? Doctors? Physician assistants? As with all tests, someone needs to pay for the swab and the laboratory materials and for the technician to put the results into the computer. Performing this test for surveillance purposes on the tens of thousands of people across the country with a flu-like illness can cost many millions of dollars.

Many states request—but do not require—that their primary care doctors, pediatricians, internists, and urgent care clinics track the number of patients with flu-like symptoms. In California about 150 providers do this. But in sunny Florida only 43 providers take part in their state’s influenza tracking system. That’s 43 providers for a state of almost 20 million people. Since the CDC recommends that one provider submit data for every 250,000 people, Florida needs about eighty; it has barely half that, too few to yield useful statistics. Relying on the goodwill and volunteer spirit of health care providers—people who likely already have difficulty keeping up with the demands of patient care—means that the flu data they deliver may not always be timely or even complete. We’re left with a patchwork of information that is, in a sense, useless for big-picture planning.

In an average eight-hour ER shift I would see between thirty and fifty new patients, and in the fall and winter perhaps ten or more would have symptoms of the flu: cough, fevers, body aches and chills, fatigue and sweats, a runny nose, and a sore throat. Some patients might have only one of these, while others may also come in with vomiting—or perhaps only with vomiting. Now I am asked to estimate how many of these patients had influenza. Do I report all the patients with aches or chills or a fever, or only the ones who had aches and chills and a fever? What if they had a fever and vomiting, but no body aches? Without a lab test to diagnose the precise virus that is making my patient miserable, I cannot be sure whether I’m seeing true influenza or one of the many ILIs that mimic its symptoms. My clinical judgment can take me only as far as a rather imprecise final diagnosis: “fever” or “viral syndrome.” If I diagnose viral influenza, I may be right only part of the time. Which is fine, in terms of patient care. But it’s not fine if you’re trying to collect data on the severity of the influenza season.

There are a lot of viruses out there to make you sick. There is the rhinovirus, which causes the common cold, and the family of rotaviruses, which cause nausea, vomiting, and diarrhea. There is the adenovirus, which causes conjunctivitis, coughs, a runny nose, and body aches. The human respiratory syncytial virus usually infects young children, giving them a fever, a nasty cough, and a runny nose. But none of these is the influenza virus, which is the only one we want to track. Without a test, your primary care physician cannot tell you which is causing your symptoms. And as we’ve seen, it doesn’t matter to the patient. The treatment is the same for all, so there is no need to get a costly lab test. However, if you are an epidemiologist who wants to predict where and when the next influenza epidemic might erupt, you must track influenza proper. You cannot rely on these clinical diagnoses of flu. Using lab tests to distinguish between ILIs and true cases of influenza becomes unavoidable. In the emergency room I would almost never obtain a flu swab, for all the reasons we’ve just outlined, but every now and again our faculty wanted to know if “the flu” was really influenza, and for a while I’d find myself swabbing patients—using a small tool in an attempt to answer a big data question.

Except sometimes data collection can backfire. Over the summer of 1992, the public health laboratory in Fairbanks, Alaska, received nine positive flu swabs from the office of a single primary care doctor. They were all from children younger than nine years old, but since children are often the first to become ill during the flu season, that in itself was not unusual. What was unusual was that the cases had been detected in the summer, outside the usual flu season. The uptick came to the attention of the CDC, which sent an agent to Fairbanks to determine whether the cases might represent the beginning of a fresh flu outbreak. The agent was Ali Khan, now dean of the College of Public Health at the University of Nebraska Medical Center, but back then a medical epidemiologist working for the CDC. Khan was concerned that the influenza might be a pandemic strain. After all, the 1918 pandemic came in two waves, the first of which appeared unseasonably, over the spring and summer.

The reason you haven’t heard of the Fairbanks influenza epidemic of 1992 is that there wasn’t one. Khan was dispatched to Alaska because one pediatrician was a little too meticulous and sent in a flu swab for any patient with a runny nose. Influenza naturally circulates at low levels in the summer, and the pediatrician’s thorough testing merely reflected the usual amount of flu; it was just being reported from a single office. Overall, there wasn’t any more influenza around than usual. The false alarm was caused by data.

*  *  *

We live in an era when the answers to so many questions are just a Google search away. Where should I go for dinner? How much is a flight to Santa Fe? What’s the French translation of “Do you have cold medication?”

Do I have the flu?

Go ahead. Google that. More than 1.5 million results will pop into your browser in less than a second. You may see a hit—sponsored by Tylenol—that says “Feeling under the weather? Flu activity is high in your area.” You may see a link to WebMD: “Flu or cold? Know the difference.” In an earlier era—say the winter of 2008—Google would have scooped up your flu-related queries. Your question would have become Google’s answer to when and where the flu was spreading.

Google’s foray into flu prediction began in 2008 with a new service: Google Flu Trends. First, Google went back and looked at billions of searches that had been performed over the previous five years. Every year in the United States at least 90 million adults search Google for medical information, and Google hunted for flu-related queries (“cough,” say, or “chills”) and matched them with the CDC’s historical flu data. Google then used those matched queries to predict future flu activity. On January 28, 2008, for example, flu queries spiked on Google Flu Trends. Two weeks later, the CDC itself reported an increase in infections.
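
Google never published every detail of its model, but the core idea is simple enough to sketch in a few lines of code. The sketch below is a minimal illustration, not Google’s actual algorithm, and every number in it is invented: it fits historical query volumes to the CDC’s reported flu rates, then uses a fresh week’s searches to estimate flu activity before the CDC’s own report arrives.

```python
# A minimal sketch of the idea behind Google Flu Trends, not the real
# model. All numbers are invented for illustration.
import numpy as np

# Weekly fraction of searches that were flu-related (hypothetical)
query_fraction = np.array([0.010, 0.012, 0.018, 0.025, 0.031, 0.040])
# CDC-reported ILI rate (% of doctor visits) for the same weeks
cdc_ili_rate = np.array([1.1, 1.3, 1.9, 2.6, 3.2, 4.1])

# Fit a simple linear model: ILI rate ~ slope * query_fraction + intercept
slope, intercept = np.polyfit(query_fraction, cdc_ili_rate, 1)

# A new week's query volume arrives long before the CDC's report,
# so the fitted model can "nowcast" flu activity from searches alone.
new_week_queries = 0.045
estimated_ili = slope * new_week_queries + intercept
print(f"Estimated ILI rate this week: {estimated_ili:.1f}%")
```

The real system drew on dozens of query terms and years of history, but its vulnerability is already visible in the sketch: the model knows only how often people search, not why.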

Silicon Valley had produced real-time flu data that was beyond the reach of slower-moving hospitals, scientists, and the medical bureaucracy. If its algorithms were accurate, Google Flu Trends could help the government and medical industry prepare properly and respond early during a flu season.

A major task in trying to prevent or curb a flu outbreak is figuring out who exactly has the flu. As we have seen, this is much trickier than it sounds. And Google Flu Trends appeared to be a solution, or at least a powerful big data tool, one that seemed to provide a depth, breadth, and complexity of information that nurses and doctors were incapable of collecting, let alone processing. Computers had leveraged a simple Google search to produce something quite sophisticated: an estimate of the influenza burden. Or so it seemed.

For a while, it looked like Google Flu Trends was a smash hit. It seemed to predict flu in Canada, Australia, and several European countries, and it was corroborated by data from the sale of antiviral medications. However, there was a hiccup in 2009, when Google Flu Trends underestimated an outbreak of influenza A in the United States. The algorithm was updated to include more search terms directly related to the flu and fewer that had to do with its complications. That hiccup was a harbinger of things to come. The winter of 2012 proved fatal for the program.

That year the U.S. flu season involved a rather virulent strain, and people became sick and died at higher rates than were typically seen. But when it was all over, it turned out that Google had overestimated the already high amount of influenza by as much as 50 percent.

What went wrong? Perhaps the algorithm was clunky. It had to be recalibrated each year, and was therefore never as refined as it should have been. Or perhaps the problem was more fundamental, and had to do with Google itself. The company’s core mission, after all, is not to provide the best data it can on the prevalence of flu. Google is a corporation with profit as its highest priority, and its central business model is to generate advertising revenue from its powerful search engine. A paper in the influential journal Science suggested that some of the tweaks Google made to the algorithm were designed to improve its business model, and that this came at the expense of predictive accuracy.

Perhaps many people searched for key flu terms on Google not because they were ill but because they were afraid of getting ill. These “worried well” internet users never became sick, but their searches remained part of Google’s data set. Remember that the 2012 flu was particularly nasty. The media reported its severity and New York declared a public health emergency. Perhaps this drove up the number of people Googling “the flu,” which of course did not equal the number of people getting the flu. In the end, Google Flu Trends was quite accurate in at least one respect: quantifying the peaks and valleys of its users’ interest in the flu.

Underneath all this was perhaps a bit of hubris. Big data does indeed allow an unprecedented look at millions of data points, but those points don’t always reflect an accurate picture of the ground level. Recall the 2016 presidential election, in which nearly every data point signaled a victory for Hillary Clinton. Or consider Boston’s Street Bump app, which uses the accelerometer in your smartphone to detect potholes in the road. The city would learn where the potholes were from crowdsourcing; a citizens’ army would automatically send data on their location. But the app only produced a map of where young, affluent car owners—those who would typically download an app like Street Bump—were driving over potholes. And while that specific set of data was very complete, it did not reflect the location of all the potholes in Boston, just as political polling did not reflect what was going to happen to Hillary Clinton in Michigan, Wisconsin, and Pennsylvania. The circumstances were broader and more complex on the ground.

Did this mean that vanguard technology was worthless without the steadying force of more traditional and time-tested methods? There we were, ninety-five years after the great flu epidemic, and yet the dazzling evolution in technology was still not enough for us to get a firm grasp on who had the virus. Perhaps a mix of traditional and innovative tools—throat swabs and algorithms—would give us our best shot at identifying the infected and curbing epidemics.

“It is hard to think today that one can provide disease surveillance without existing systems,” said Alain-Jacques Valleron, the French epidemiologist who in 1984 founded the first computerized flu tracking program. “The new systems depend too much on old existing ones to be able to live without them.”

Some predicted that Google would once again update the program and refine its algorithm, but in August 2015 its flu team wrote a goodbye note of sorts. They discontinued their own website and instead began to “empower institutions”—like schools of public health and the CDC itself—to use data to construct their own models.

We need to monitor influenza numbers for several reasons. Without an accurate count, it becomes impossible to track the advance and retreat of influenza. Health departments need accurate numbers to prepare, whether by stocking up on vaccines or advising the public about the flu risk. Vaccine producers look back at the influenza count to determine if that year’s vaccine had been the correct one. And knowing the precise strain that is circulating one year is crucial to predicting what strain will be in circulation the next. So if the power of big data and the tentacles of technology can’t count the cases of influenza, what can?

*  *  *

Around the same time that Google Flu Trends was getting started, a group of economists led by Forrest Nelson at the University of Iowa tried a different approach to estimating the burden of influenza. Nelson had spent years at the boundary where prediction and economics overlap: the stock market. When we buy a company’s stock, we believe the company will grow, outperform the competition, and produce profits. The more people believe in the future success of the company, the higher its stock price will be. Conversely, if we judge that a company’s economic future is bleak, that it will not succeed, its stock price will fall as shareholders scramble to sell.

Nelson applied the prediction market to politics, forecasting the outcomes of elections, and then turned to influenza. Could he harness all kinds of disparate expert knowledge to predict how much flu there would be? That question gave birth to the Iowa Flu Prediction Market. Nelson wanted to draw from a wide range of flu-savvy people in the state: nurses, school principals, pharmacists, physicians, and microbiologists. Their aggregate information would, he hoped, provide insight into how much flu there was, and allow him to predict how much there might be in the future.

Nelson started modestly in January 2004 by inviting fifty-two health care workers with a variety of backgrounds to play. He had secured some grant funding and gave each of his traders fifty dollars to use. With this they bought and sold contracts based on a map the CDC would later release to visualize influenza activity. For example, in January you could buy a contract for the first week of February that called for widespread activity in Iowa, which would be shown in red on the CDC’s influenza map. Or perhaps you believed, based on all the information you had at hand, that the influenza activity was only going to be sporadic (shown in green on the map). Then you’d buy a contract for that color. A contract matching the activity level the CDC eventually reported was worth one dollar; all the others expired worthless.
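
The elegance of the design is that prices become probabilities. Since each contract pays one dollar if its color matches the CDC’s eventual map and nothing otherwise, the price traders are willing to pay can be read as their collective estimate of that outcome’s likelihood. A minimal sketch, with entirely hypothetical prices:

```python
# How a winner-take-all prediction market encodes beliefs. Each contract
# pays $1 if the CDC's map shows that activity level, $0 otherwise, so a
# contract's price approximates the market's probability estimate.
# All prices below are hypothetical.
last_trade_prices = {
    "no activity": 0.03,
    "sporadic":    0.12,
    "local":       0.20,
    "regional":    0.40,
    "widespread":  0.25,
}

# Prices for a full set of mutually exclusive outcomes should sum to
# about $1; normalizing turns them into a probability distribution.
total = sum(last_trade_prices.values())
for level, price in last_trade_prices.items():
    print(f"{level:>12}: {price / total:.0%} chance, per the traders")
```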

The Iowa Flu Prediction Market continued over several flu seasons, and had some early success. It correctly forecast the CDC’s official influenza count as often as 90 percent of the time, though that accuracy fell the further in advance the predictions were made. But there were also problems. It was challenging to get enough physicians interested; most told Nelson they simply did not have the time to trade. He ran out of funding and could no longer provide real cash. So he switched to fake money, in the form of “flu dollars,” and continued the project. But those playing the market seemed to tire of using pretend money, and participation dropped off. One of Nelson’s co-researchers passed away, another moved on to other research areas, and in 2012 the Iowa Flu Prediction Market ceased trading.

When I spoke with Nelson, he had retired and was enjoying the warmer climes of Austin, Texas. “It was a baby of mine since 1988,” he told me, referring to the year his presidential prediction markets had first opened. He acknowledged that running the flu market was costly in terms of both time and money, and he was frustrated that there hadn’t been a greater buy-in from the medical profession. But he had never expected the prediction market to replace traditional influenza surveillance. Instead it could be used as a supplement, offering another data point for public health officials to use. And, like any parent, he remained very proud of his baby.

*  *  *

Google searches and doctors’ reports all flow toward one agency inside the CDC: the National Center for Immunization and Respiratory Diseases, in Atlanta. Within that center sits the influenza division, whose staff of three hundred must predict, track, and recommend the treatment for influenza using the data at hand—some of it helpful, some of it flawed, and some a mix of both.

The division relies on the work of clinical laboratories, like the one at my hospital in Washington, D.C., and of public health laboratories, like the one in Fairbanks. Every week across the United States about two thousand health providers—nurses, physicians, and their assistants—fill out a form that tells the CDC how many patients they’ve seen with an influenza-like illness. It’s a time-consuming but valuable report from the front line of the battle against influenza, and it has obvious limitations in the quality of data it produces. Remember that one doctor might report “influenza” while another doctor seeing similar symptoms might report “fever,” “gastroenteritis,” or “viral syndrome,” all of them ILIs. When it comes time to aggregate the numbers and report ILI activity to the CDC, the electronic medical record might include some, all, or none of these diagnoses.

The CDC also relies on hospital labs to report how many tests for influenza they run, and how many of these are positive. You might think that these data would be more accurate than reviewing the electronic medical record, but here, too, the reported incidence of influenza might vary, depending on which patients were swabbed and on the locations of the clinics and hospitals. You might have a doctor who swabs only when she is treating a patient who is very ill, or one who has cancer, HIV, or another complicating condition. In this case, the pool of all patients swabbed is limited but the number of positive cases is high. Or you could have the inverse: another doctor—even in the same hospital—whose practice is to swab many patients, not just those with a chronic illness. In this case, the sample size would be quite large and the number of positive cases of influenza comparatively small. And these numbers, in either situation, include only those who choose to see a doctor, and those doctors who choose to swab their patients. This imperfect, sometimes contradictory information is what the CDC has to work with.
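
The distortion produced by these two swabbing habits is easy to simulate. In the toy model below, every number is invented, and the same amount of flu circulates past both doctors; only their swabbing policies differ, yet the positivity rates they report diverge sharply.

```python
# A toy simulation (all numbers invented) of how swabbing policy skews
# reported flu positivity even when the underlying illness is the same.
import random

random.seed(1)
# Assumed probability that an ILI patient truly has influenza,
# depending on how sick they appear.
FLU_PROB = {"severe": 0.6, "mild": 0.3}

def run_clinic(n_patients, swab_policy):
    swabbed = positive = 0
    for _ in range(n_patients):
        severity = random.choice(["severe", "mild"])
        has_flu = random.random() < FLU_PROB[severity]
        if swab_policy(severity):
            swabbed += 1
            positive += has_flu
    return swabbed, positive

# Doctor A swabs only the very ill; Doctor B swabs everyone with symptoms.
for name, policy in [("Dr. A", lambda s: s == "severe"),
                     ("Dr. B", lambda s: True)]:
    swabbed, positive = run_clinic(1000, policy)
    print(f"{name}: {swabbed} swabs, {positive / swabbed:.0%} positive")
```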

And it’s purely reactive. The numbers show only what has already happened. The lag between collecting data and reporting it to the public can be days, or weeks, or longer. The data might indicate the burden of flu (the impact in a given place) but it lags behind the prevalence of flu—that is, how much influenza is actually circulating. It tells us what was, not what is or what will be. For example, if I see three patients with influenza in the first week of November, nine in the second, and thirty in the third week, I could reasonably estimate that in the last week of November I may see as many as seventy new cases. With this knowledge, I would prepare my clinic for an outbreak. But this data may not predict an increase at all. Perhaps the flu epidemic peaked in the third week, and the number of new cases will begin to fall. If that is the true scenario, then I prepared my clinic for a rush that never came.
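
The estimate in that example is simple trend extrapolation, sketched below. Exactly how many cases you project depends on the model you choose (a straight growth-factor projection lands even higher than seventy), and every such model shares the same blind spot: it assumes the growth continues.

```python
# Extrapolating next week's flu cases from a three-week trend.
# The figure rests entirely on the assumption that growth continues.
cases = [3, 9, 30]  # confirmed cases in weeks 1-3 of November

# Average week-over-week growth factor (geometric mean of the ratios)
ratios = [b / a for a, b in zip(cases, cases[1:])]
growth = (ratios[0] * ratios[1]) ** 0.5

projection = cases[-1] * growth
print(f"Projected week-4 cases if growth continues: ~{projection:.0f}")
# If week 3 was in fact the peak, the true count will fall far short,
# and the clinic has prepared for a rush that never comes.
```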

This is the situation we are in as I write these words. The first weeks of January 2018 saw a sudden and dramatic increase in the number of confirmed cases of influenza. Had we reached a peak, or would the numbers continue to climb? Nobody knew. The press, meanwhile, continued to portray the outbreak as an influenza pandemic, forgetting the lesson of the 2009 swine flu outbreak. When that earlier outbreak was all over, the actual number of deaths from influenza was lower than during a regular flu season.

*  *  *

Counting and predicting influenza activity is really hard. Google Flu Trends tried and failed, and the now-defunct flu prediction markets yielded no lasting insights. Data coming from clinics and labs is incomplete and sometimes misleading. So what else might work?

One approach is to skip the data from hospitals and doctors altogether and focus on the patients themselves. Only a minority of patients with flu-like symptoms pull themselves out of bed and schlep over to their doctor or local emergency room, so you’d have to find a way to reach the majority who stay home or emerge only to get over-the-counter medication. National pharmacy chains like Rite Aid or CVS have data on how many units of flu medicine they sold yesterday, or in the prior week. This data is available in real time with near-perfect accuracy; it does not rely on the subjective entry of a diagnosis or on the decision to obtain a flu swab, but instead links the register scanning your purchase with a database of products that are bought when the flu is in the air. It does not distinguish between real influenza and ILIs, but the two usually rise and fall in harmony.

In fact, the New York City Department of Health has employed exactly this strategy to rapidly detect outbreaks of influenza. The department’s efforts began in 1996, when it focused on surveillance for the nasty waterborne diseases that cause gastroenteritis. The program started by receiving weekly reports of the sales of antidiarrheal medicines, and soon expanded to track medications for ILIs. The department had quite a task, since it estimated there were at least 400 different drugs sold in the “cold department.” Fortunately, it was able to narrow this down to the most popular medications, the 50 or so drugs that had the terms “flu” or “tussin” in their descriptions. The program also received data in real time; nearly all pharmacy sales were reported to the health department by the next day.
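
That winnowing step is, at heart, a keyword filter over product descriptions. Here is a minimal sketch with a hypothetical product catalog; the real system matched roughly 50 of some 400 cold-department items.

```python
# Filtering a pharmacy catalog down to flu-relevant products by keyword.
# The catalog entries below are hypothetical.
catalog = [
    "THERAFLU SEVERE COLD NIGHT POWDER",
    "ROBITUSSIN DM COUGH SYRUP 8OZ",
    "VITAMIN C CHEWABLE 500MG",
    "TUSSIN CF LIQUID GENERIC 4OZ",
    "NASAL SALINE SPRAY 1.5OZ",
    "FLU RELIEF DAYTIME CAPLETS",
]

KEYWORDS = ("flu", "tussin")

tracked = [item for item in catalog
           if any(key in item.lower() for key in KEYWORDS)]

# Daily sales of just these products feed the surveillance signal.
for item in tracked:
    print("track:", item)
```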

But when the department reviewed its performance in detecting early influenza over a three-year period, it was disappointed. Although the medication monitoring system mirrored the natural rise and fall of flu cases over the autumn and winter months, it could not detect any early signal of flu. Just why is unclear: Perhaps people bought medications early, just in case, before the onset of a flu that never came. Perhaps the same medication was used by multiple members of a family, so that the purchase of one unit did not represent the illness of just one individual. Whatever the reason, once again this approach—an early attempt to use big data to detect flu outbreaks sooner than conventional methods could—didn’t deliver. Despite this, the city health department has recently stepped up its pharmacy surveillance program, and now monitors both over-the-counter and prescription medications for colds and flu. It has also expanded to include pharmacies outside Manhattan. The Department of Health now knows how many residents of Queens and Brooklyn are buying cough syrup or cold remedies.

The state of Maryland had another idea involving the public. In 2008 it enlisted a people’s army of influenza trackers. As part of the Maryland Resident Influenza Tracking Survey (MRITS), citizens voluntarily sign up on a website hosted by the state’s Department of Health and Mental Hygiene. Once a week they answer a couple of simple questions about whether they or any members of their household have flu-like symptoms. This data comes straight from the source and relies only on the presence of symptoms, so there’s no need to analyze how many bottles of flu medicine were sold or how many positive flu swabs were received in a lab. In its first year, more than 500 residents of Maryland signed up to participate, and nearly half responded to a reminder email each week. Since then, the program has grown to over 2,600 participants.

I am one of them. Once a week there is an email waiting for me in my inbox. If no one in my household has a cough, fever, or sore throat, there’s a simple link to click on, and I am done. It takes about two seconds. It takes only a little longer to report a family member who has flu-like symptoms. Then MRITS asks me if that person sought medical treatment for their symptoms, traveled in the week before they became ill, or missed their usual daily activities as a result.

Although not everyone remembers to fill out the weekly review, the data produced is pretty close to the data from other surveillance methods. For the 2014–2015 Maryland influenza season, for example, the incidence of ILI symptoms reported voluntarily by medical professionals was 1.6 percent; emergency departments in the state reported an incidence of 2.3 percent; and the citizen-driven MRITS incidence was, like Goldilocks, right in the middle of these: 1.9 percent.

The MRITS network suffers from the same limitations we’ve already noted. Residents report symptoms that are not caused only by influenza. And it is driven by people who are keen to help—so keen that they somehow found out about the survey, signed up for it online, and reported on their household symptoms weekly. How typical of the general population is this self-selected group of flu watchers? Are they like the users of the Boston Street Bump app? Does this group get flu-like symptoms at a lower or higher rate than do their fellow residents of Maryland?

We don’t know the answers to these questions. But we do know that some citizens go way beyond self-reporting symptoms. They are so enthusiastic about the flu that they turn it into a vocation, an avenue for amateur sleuthing and study. Grassroots flu groups are all over the internet. Some are blogs maintained by one person, often with a very specific agenda, and others seem to be more objective, providing flu details without any editorializing. Could their size and agility accomplish certain things that major tech companies and bulky bureaucracies could not?

From her home in Winter Park, Florida, Sharon Sanders is the editor in chief of FluTrackers.com. Her website is rudimentary but sprawling, with dozens of old-school chat forums devoted to influenza and other infectious diseases. She conceived it in 2005, around the time that President George W. Bush was reading a history of the flu during his summer vacation, and the look and design of the site hasn’t changed much since. Sanders has no medical background but became fascinated with the flu when, many years ago, she saw a TV segment by Sanjay Gupta, the medical correspondent for CNN. He had just visited the CDC in Atlanta, and explained how influenza epidemics are cyclical. Sanders had not heard of repeating influenza epidemics before, so she did what inquisitive people do to find out more. She Googled it. (If she’d done so a few years later, Google Flu Trends would have scooped up her queries as false alarms.)

She found two (now defunct) discussion sites, Flu Wiki and CurEvents, on which there were rigorous conversations about every aspect of pandemic influenza: preparations, health care workers, 1918 data, medical considerations, and traditional medicine. Sanders recalled one especially animated discussion thread about whether migratory wild birds could spread flu. Sides were drawn depending on whether you were a wildlife supporter or had a scientific background. It got ugly. After a while the discussions devolved into debates about guns for personal protection and other diversions not related to influenza. Sanders had had enough, but by now she was hooked on the subject.

“It became clear that starting a new site would be the only way to have a more serious online environment,” she says. “So we did.” She had made online friendships with a few people she had met through CurEvents. Together with two fellow enthusiasts—one a software engineer and the other a botanist—she launched FluTrackers in February 2006. “We were just actually regular people, concerned citizens without any medical background,” she adds.

Now, I’m all in favor of citizen efforts to keep us informed, but for a project as big as Sanders’s, wasn’t some kind of quality control advisable? She saw things differently, and believed there was a great benefit to unleashing what she calls “previously untapped talent” among the general population. Sanders loads the site with the latest influenza information from the CDC and the World Health Organization. “Our rules were simple,” she says. “No bashing, no violent talk, no politics, no religion debates. Respect for others. We were a boutique site for the small amount of people online who wanted to explore disease spread, particularly the flu. . . . Our discussions were serious. And it was fun.”

Within weeks of the site’s founding, several scientists joined, most of them anonymously. Sanders could vouch for their credentials from their email addresses. Some posted news and scientific papers they thought others might enjoy. Journalists who covered the flu also joined, though nearly all used pseudonyms. Then, as now, anonymity was really important for those who visited this and many other sites. The defunct Flu Wiki’s founders remained anonymous for years. Today, almost all of the professionals and new members who sign into FluTrackers do so anonymously.

Over time the web traffic grew and FluTrackers expanded its focus to include other infectious diseases. Tracking influenza outbreaks is only one of the missions of the site. It also reports on new academic papers, conference notes, and expert speeches. But one of the most useful features is its global reach: the site logged almost 18 million page views in the first ten months of 2017. Eighteen million! Who knew influenza was so popular? FluTrackers not only gathers information but also helps figure out how to use it to inform the general public. Since it has such a large readership, it was even invited to take part in tabletop exercises run by the U.S. Department of Health and Human Services. These exercises evaluated how online media could help disseminate information to the public during an influenza pandemic. For a homegrown clearinghouse on all things flu, this was quite an achievement. Sanders thought so too. “I know it seems improbable that an international group of hobbyist volunteers who have never met in person could rise to such prominence,” she admits to me. “But over the years this is what happened.”

FluTrackers translates foreign news articles and is used by the CDC, the WHO, and a host of other organizations. In an email, Sanders told me that “an alphabet soup of U.S. government agencies view FluTrackers daily to see what we have found.” Because its member base is international and on the ground, the site is often able to report disease outbreaks before they come to the attention of larger, less nimble organizations. Sanders “specializes” in Chinese and Arabic sources, which she deciphers with the help of machine translators. She has also learned to look for indicators of flu activity, which are specific to each country. This is especially needed in countries in which the media is tightly controlled. For example, Sanders once learned that in a particular province in Egypt, health care workers were distributing pamphlets on H5N1 flu in a door-to-door campaign, which likely indicated an uptick of flu cases there. Sanders was also tracking the “amazingly frequent” reports of poultry farms being destroyed by electrical fires in Egypt. Since the government did not compensate farmers for the loss of chickens from bird flu, she suspected—though she had no firm evidence—that some of them were staging these fires to obtain insurance payouts and protect themselves from financial ruin. In one province alone, three poultry farm fires were recorded on the same day. So the more poultry farm fires reported in the Egyptian media, the greater the likelihood of an uptick in bird influenza.

Indicators such as these are of special interest to FluTrackers because they may provide a clue as to when new outbreaks of influenza will occur. Sanders compared this research with the CDC or WHO influenza reports, which are indicators of where flu was, not where it might be going. Of course, there is the possibility of error, but that is something she has come to accept. “We could be wrong,” she says, “but we are wrong in a very earnest way.”

While FluTrackers has no fact-checkers, anyone who posts is required to provide a link to the original news source, unless it would be dangerous for them to do so. This danger is a reality for some who upload information to the web in countries like Egypt and China, where the news media is tightly controlled.

Sanders told me that unlike many blog sites that focus on infectious diseases, her site is apolitical, has no agenda, and is just in the business of getting information out to the public. And the cost of getting this information out is a mere fifty dollars each month in internet service fees. She takes great pride in the fact that the site accepts no money from business, government, or anyone with an agenda.

Sanders hasn’t yet formed an opinion as to whether there will be another 1918-like pandemic soon. She notes that there are more novel flu strains in humans than ever before, but whether this portends a new flu epidemic is unclear. She is surprised that there have not been pandemics from the new strains of bird flu found in Southeast Asia, some of which had a nerve-racking case fatality rate of more than 50 percent. Closer to home, she is critical of the lack of attention paid to pandemic preparation over the last decade, and saddened by how many flu experts have retired from public service. This has left a knowledge gap that she fears will severely weaken any future federal pandemic response.

FluTrackers is impressive, but has limitations as well as oversight issues. The site reports on where influenza has been suspected but not always where it has been confirmed. The challenge for those in public health is to know what to do with the enormous amount of information the site gathers. Do reports of pneumonia indicate an uptick in complications from an influenza infection? If the Egyptian media is reporting an increase in poultry farm fires, how are we to act on this information? Should we make more vaccine against the last known avian flu that infected us? Or should we instead get more data before ramping up vaccine production? There is a limit to what data points can really tell us. Often, they yield only more questions.

Organizations like the CDC or the WHO are still the best sources of data on the year-over-year rise or fall of the flu. This information, together with the number of vaccinations, also gives us a measure of the success of preventive efforts. Depending on how quickly the counts are made, a state or town might use that data to help health officials target their public messaging.

In spite of all this, we still don’t have an accurate way to measure how many cases of flu there are each season. We can’t rely solely on huge data-driven companies like Google to figure it out for us, nor on citizen-led efforts; even the CDC’s data is limited. The influenza virus, a most primitive organism, seems to run circles around our advanced technology. We don’t even know the answer to one of the most important questions about influenza: Why does it rise and fall with the changing seasons?