CHAPTER SEVEN
FOR THE RECORD
I have participated in scores of codes like Jay’s. Mostly, like Jay’s, they were not successful. Nearly every patient who codes dies because of the simple tautology that it is crushing illness that brings them to the doorstep of the code. In a strange way, death is actually one of the steps of the code. It isn’t listed in the algorithm, of course, but it’s there. The first step. Everyone knows it, but no one will say it. Even though the patient has already died from the devastation of disease, the code presses on until someone “calls it.” Then, and only then, can death be acknowledged. It is a wrenching combination of human grief and quotidian bureaucracy.
As a resident, I was always struck by the odd concept of “time of death,” especially when I became the one in charge of the code. On the one hand, I would announce it with scientific precision (“4:17 a.m.”). On the other hand, it was entirely arbitrary. If I’d decided to continue the code for another minute, the time of death would be 4:18 a.m. If I’d faced up to reality a bit sooner, the time of death might be 4:14 a.m. In all cases, life had already ceased for the patient. In fact, life had ceased before the code started. That was the time when the patient had stopped breathing or the heart had stopped beating. That was when the patient had really died. Yet we officially record the time of death as the moment when we adjourn our battle, not the moment the cells have adjourned theirs.
I suspect this relates to the way the medical record dominates healthcare. Everything that transpires during the course of a patient’s contact with the healthcare system must—for good reason—be documented. The chart (also called the medical record) is the chronicle of a patient’s medical odyssey. Every medication given, every lab test, every X-ray is part of the chart. Doctors and nurses write progress notes, documenting the patient’s current condition and the plan of care. Even during the chaos of a code, there will always be one nurse who stands calmly in the corner, fastidiously documenting each incremental step of the resuscitation effort. The final entry in that running document, of course, is the time of death.
To me, this reflects how the medical chart often ends up dictating medical practice, rather than the reverse. As documentation demands grow, our practices change in order to accommodate. For generations, the medical record consisted of the standard paper chart, with various team members scribbling their observations in one unified physical location. The chronicle was an actual chronicle you could leaf through to read the patient’s entire story. Of course, you could also accidentally spill your coffee on that lofty chronicle. Or your Thai red curry. The chart could get knocked to the floor—52-pickup style—by an orderly bustling by and end up with its pages hopelessly out of order. It could be buried under a pile of journals on the desk of an endocrinologist who left for a week’s vacation. The discharge summary could have been written by a surgeon with Neolithic penmanship skills. Three key pages could have been “borrowed” by a medical student for a 7 a.m. conference.
So there are a host of compelling reasons for the medical chart to be digitized. The electronic medical record, known as the EMR (sometimes called the EHR, for electronic health record), obviates most of those shortcomings—it’s always legible and it can’t be stranded in someone’s office. If you spill your coffee and your Thai red curry on it, you might short-circuit your computer terminal, but the actual EMR will survive.
The EMR may be less tangible than the old paper chart, but if anything it exerts an even stronger influence over how medical care is delivered. For reasons both intentional and unintentional, the EMR has fundamentally changed how health professionals process medical information. In the paper-chart days, each time I saw a patient, I was given a blank piece of paper—a blank tableau, you might say. (In Bellevue, for some inexplicable reason, the progress-note paper was flamingo pink. For my entire medical training I felt as though I were floundering in a sea of Pepto-Bismol.)
The beauty of the blank sheet of paper was that I could write down my thoughts in exactly the order in which I processed them. I would start with the patient’s main reason for the visit (“the Chief Complaint”) and the HPI (“History of Present Illness”), and then follow with the past medical history. After I examined the patient, I would note the physical exam findings, followed by the pertinent lab or radiology results.
Here I’d stop and think, trying to pull it together. I’d run through my differential diagnosis. If I wasn’t overly rushed, I’d flesh out a detailed assessment, explicating my clinical reasoning as to why I might favor one diagnosis over another. Lastly, I’d pen my explicit plan of action. My goal was that I—or anyone else—could come back to this note at some later time and immediately grasp the entirety of what I was thinking and understand why I was thinking it.
In contrast, when I open up the EMR, the computer forces me to document in its order, which has no relationship to the arc of my thoughts. This reflects the fact that EMRs were initially developed as billing systems. Only later did they start to incorporate clinical information, and even the best of the EMRs do not think the way clinicians think. We humans must reroute our thinking to fit the EMR’s requirements.
Not only does the EMR interfere with the train of thought, it also forces its users to compartmentalize their thinking. Each aspect of the patient is contained in a different field, and these fields aren’t logically connected. On the old paper note, I could group the blood test results and the X-ray results together because they logically formed supporting data to prove or disprove a diagnosis. I could jot down the upshot of a cardiology consultation within my assessment, if the result was relevant to my clinical reasoning. But in the EMR, the lab results are in one place, and the radiology results in another, and the consultations are in a third place.
This fragmentation of thinking is particularly dangerous when it comes to diagnosis, a process that, as we’ve seen, requires integration of information. The EMR conspires against integration by forcing information, as well as your flow of thought, into a rigid structure that is convenient for computer programmers and the billing department but not necessarily logical for anyone taking care of patients.
There’s no going back from the EMR, and I don’t think we should go back. The advantages of centralized medical information are substantial. But the consequences of the EMR—however unintended they may be—are equally substantial and have potent ramifications for medical care as well as medical error.
Robert Wachter, an internist from the University of California, San Francisco, has written extensively about medicine and technology. Although an admitted techno-optimist, Wachter writes evenhandedly about the advances and the drawbacks of medical technology in his book The Digital Doctor.1 The case that prompted him to write the book raises the hairs on the neck of any doctor, nurse, or patient who has ever been party to an EMR.
On a balmy July day, a pediatrics resident ordered Bactrim for a teenage patient named Pablo Garcia, hospitalized on the wards of UCSF’s Children’s Hospital. Bactrim is one of those antibiotics that has been around for so long that no one even remembers what the milligram dosing is. The prescription is always “one tab, twice daily”—though with lower doses for patients with renal insufficiency and for young children. (Bactrim is actually a combination of two antibiotics—160 milligrams of trimethoprim and 800 milligrams of sulfamethoxazole.)
Pablo’s weight was three pounds under the cutoff for the standard adult dosing (one tab, twice daily) so the EMR sent the resident down the pediatric pathway for weight-based dosing—5 milligrams per kilogram. Weight-based dosing is obviously critical in pediatrics, since children can range in size from 6 pounds to 85 pounds.
The weight calculation in this case led to a dose of 193 milligrams of trimethoprim, which is a bit larger than the tablet size of 160 milligrams. The resident correctly rounded down to the 160-milligram tablet. Such a rounding, though, triggers an automatic alert in the EMR to a pharmacist to double-check the dosing.
The pharmacist contacted the resident to clarify that 160 milligrams was the dose she indeed wanted. This was the EMR’s way of trying to catch calculation errors—having a human intervene to double-check. The resident confirmed the dose with the pharmacist and then reentered “160.”
You can enter the dose of Bactrim in the EMR either by the standard milligram dosing (160 mg is one tab) or by weight-based dosing (milligrams per kilogram). Unfortunately, the EMR “defaulted” to the mg/kg unit that the resident had been using earlier. So instead of entering 160 mg, she inadvertently entered 160 mg per kg. That’s 160 milligrams for every one of Pablo Garcia’s thirty-eight kilograms.
You don’t need to do the math to feel faint, but Wachter does it for us. The dose comes to 6,160 milligrams, or 38½ tabs of Bactrim.
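The failure mode is easy to render in code. Below is a minimal sketch in Python, with invented names that have nothing to do with the actual EMR’s software, of how a lingering unit default silently rescales a dose:

```python
# A minimal sketch of how a defaulted dose unit can rescale an order.
# Invented names; illustrative only, not the actual EMR's logic.

WEIGHT_KG = 38.5   # Pablo's approximate weight
TABLET_MG = 160    # trimethoprim content of one Bactrim tablet

def total_dose_mg(value, unit, weight_kg):
    """Convert an entered dose into total milligrams."""
    if unit == "mg":
        return value
    if unit == "mg/kg":
        return value * weight_kg
    raise ValueError(f"unknown unit: {unit}")

# What the resident meant: a fixed 160 mg dose, i.e., one tablet.
print(total_dose_mg(160, "mg", WEIGHT_KG) / TABLET_MG)     # 1.0

# What the screen's lingering mg/kg default produced instead:
print(total_dose_mg(160, "mg/kg", WEIGHT_KG) / TABLET_MG)  # 38.5
```

Same keystrokes, same number on the screen; only the invisible unit differs.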
Nearly every medical professional with a detectable pulse knows that Bactrim is a “one tab, twice daily” medication. We know it in the same way we know that the normal number of fingers on each hand is five. We would be as likely to dump 38½ packets of sugar in our morning coffee as to prescribe 38½ tabs of Bactrim. But once the error was embedded in the EMR, it took on a bizarrely vigorous life of its own. It was like watching the lead-up to a horror movie climax in excruciating slow motion. I found it almost unbearable to turn the pages.
The Bactrim order was now routed to the medication supply room. To avoid medical error, the hospital had installed a state-of-the-art pharmacy robot. This machine could make none of the errors that humans might make—miscounting, misreading, yawning. As the order had already been labeled as “approved” by the human pharmacist, the robot duly dispensed 38½ tablets into the medication bin with 100% accuracy.
The nurse who received the medication bin on the ward was taken aback because she recognized that this was a very unusual number of pills. But she could see that the order had been double-checked by both doctor and pharmacist. This was reassuring. Plus, the bar-coding system that matched the pills to the patient (to prevent medication mix-up errors) assured her that this was the right dose for the right patient. And working in an academic medical center that handled more than its share of oddball diseases and experimental treatments, it wasn’t unusual to see atypical dosing schedules.
If the nurse stopped her rounds to ask another nurse for advice or to page the doctor about the medication, an ominous red alert would pop up on her screen informing her—and her supervisor—that she was late in administering the meds. (Delivering medications on time is one of the many “quality measures” that hospitals emphasize.) Additionally, the EMR wouldn’t allow an unsure nurse to decide, “I’ll finish with my other patients first and then get back to this one when I can give it some extra thought,” because her current task wouldn’t be marked “complete” until she’d scanned and administered all 38½ pills to her patient. So there was no way to get help unless she pushed back against every disincentive and ground the whole ward to a halt. Not an easy thing to do on a busy ward under pressure to “be efficient.”
And so, backed up by the assurance of the bar codes, the precision of the pharmacy robot, the documentation of human rechecking by both pharmacist and doctor—all measures instituted to decrease medical error—the nurse doled out the pills as instructed by the EMR. Thirty-eight horse-pill-size tablets of Bactrim. And, of course, that extra half tab.
At first Pablo noticed only some odd numbness and tingling. Then he became anxious and a little confused. Then suddenly his body erupted into full-blown seizures. He stopped breathing and a code was called. Miraculously, Pablo Garcia survived, and seemingly without permanent damage. But he easily could have ended up on dialysis or with permanent brain damage or dead. (It seemed like the dosage of luck on that day was quite a bit higher than normal—maybe even 38½ times higher.) Though his body recovered fully, no doubt his trust in the medical system suffered permanent damage.
The case is so rattling because it highlights how an EMR that is trying to improve patient safety through its various alerts and warnings can instead end up harming a patient gravely. It’s all the more ironic because this is an error that would have been caught in a flash if the doctor had written the order by hand instead of on a computer, or if the medication had been dispensed by a pharmacist instead of a robot. The very technology we are counting on to decrease our medical-error rate can actually increase it, or create new kinds of errors.
Wachter uses this case to highlight, among other things, the issue of alerts, the basic tool the EMR brings to the table when it comes to preventing medical errors. For doctors and nurses, these computerized alerts constitute one continuous, communal migraine. Handling prescriptions and medication orders is what we do all day, and the alert system has become an octopus of misery, swatting unceasingly from all directions. Just when you think you may have cleared the gauntlet of alerts, another seven bulbous legs come whipping at you with more alerts to navigate.
Now, perhaps you think I’m being histrionic here, but that is certainly what it feels like. Every time I have to certify that a vitamin D pill is not a controlled substance, I want to scream. Every time I have to tend to the alert that warns me that drug interactions are “not available” for the walker I’m prescribing, I want to take a scalpel to the screen. When the alert informing me that a medication should be “prescribed with caution” for someone over sixty-five pops up for every single medication for every single patient over sixty-five, I’m ready to dismember the keyboard.
I’m angrier still because I know that buried within these useless alerts are some important ones. I want to be reminded if a medication is contraindicated in liver disease or needs to be dosed differently because of impaired renal function. I know that I can’t possibly remember all the crucial drug interactions, so I want the EMR to catch me when I inadvertently prescribe two meds that can’t go together. Thus I’m furious at the EMR for inundating me with so many banal alerts that I—like most doctors and nurses who are honest enough to admit it—end up ignoring them all.
After reading about the Bactrim disaster in Bob Wachter’s book, I decided to rouse myself from complacency. After all, when the doctor ordered 38½ times the normal dose of Bactrim, the EMR did pop an alert in her face. But she had just finished reviewing the order with the pharmacist, so she dismissed the alert like she and I and everyone else do with the hundreds of alerts every hour that seem designed to prevent us from getting any work done.
I therefore committed myself to reading all the alerts before I dismissed them. There might be a crucial one buried in there, and I didn’t want to miss it. Besides, the EMR is a legal document. Clicking “okay” to an alert indicates that I’ve read it, evaluated its contents, considered its impact, and then made a decision. That’s certainly what a lawyer would say in court.
I set to work the very next morning, feeling like a boxer newly motivated in the ring, bobbing confidently, flexing my newly invigorated patient-safety muscles. I could almost feel the satiny robe glittering around my shoulders instead of my saggy white coat with ink stains from a leaky pen. I was ready for battle!
Let’s just say I didn’t even make it through the first round. I was defeated with my very first patient of the day. He needed thirteen prescriptions, and there were several alerts for each medication, which added up to dozens of alerts. Nearly all were useless. Things like “weight-based dosing not available for this medication”—for a medication that does not need to be adjusted for weight. Or something equally unhelpful, like “drug interactions not available for this medication” when I’m prescribing alcohol swabs. Occasionally I’d get something of possible importance but couched in murkiness, such as “Drug X may increase bioavailability of drug Y. Quality of data uncertain.” What was I supposed to do with that?
When I prescribed him a blood pressure cuff to check his pressure at home, that prescription set off a host of alerts. How could an eight-inch piece of vinyl interact with seven different medications? And, naturally, weight-based dosing was not available. My patient also committed the cardinal sin of being over sixty-five, so every single prescription—even his blood pressure cuff—was accompanied by the warning that I needed to “prescribe with caution.”
But I persevered, reading and registering every single medication alert, no matter how inane. What ultimately felled me, what pushed me over the ropes completely, was that this patient was taking warfarin. Warfarin is an anticoagulant, a so-called blood thinner, which he was taking because his atrial fibrillation put him at risk for blood clots and thus strokes.
Warfarin notoriously interacts with nearly every food, drug, and chemical in the universe, so the number of alerts it generates is distinctly epic. But warfarin is even more arduous because it’s dosed according to its level in the blood, which changes constantly based on whether the patient overindulged in spinach, or took allergy pills after visiting a friend with a cat, or forgot to take cholesterol meds for a few days, or glanced more skeptically at Mars. The ongoing tinkering with warfarin dosing has a Rube Goldberg feel to it, and patients often end up on elaborate dosing combinations that change monthly, sometimes weekly.
Prescriptions for warfarin have to be rewritten more frequently than those for any other medication, and have the most intricate dosing combinations and the most drug interactions. The stakes are also much higher than with nearly any other medication, because if the dose is a smidge too low, you can cause clots and strokes. If the dose is a smidge too high, you can cause hemorrhage. Erring in either direction can severely harm or even kill a patient. You can see why warfarin is easily the most dreaded medication to prescribe.
My patient was on a not-atypical schedule of 8 milligrams on Tuesdays, Thursdays, Fridays, and Sundays, but 9 milligrams on Mondays, Wednesdays, and Saturdays. Warfarin does not come in 8- or 9-milligram tabs, so I had to rely on the 4- and 5-milligram tablets and write separate prescriptions for each, plus a footnoted dissertation explaining how to take the pills correctly.
I had to limber up on advanced polynomials just to calculate how many tablets of what size on which day would be needed and how many of each added up to a one-month supply. And that was before facing the forty-six individual alerts that each of the warfarin prescriptions elicited. (I believe I hold the Bellevue Hospital record with a set of warfarin prescriptions that elicited 241 individual alerts for an elderly patient who was taking a warehouse of interacting medications. I eventually dug out my old prescription pad and wrote the prescriptions by hand. It took seven seconds.)
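For the record, the arithmetic itself is simple; it’s the bookkeeping that wears you down. Here is a small, purely illustrative Python sketch of the tablet math, using the schedule above and the 4- and 5-milligram strengths:

```python
# An illustrative sketch of the warfarin tablet bookkeeping.
# 8 mg days use two 4 mg tablets; 9 mg days use one 4 mg plus one 5 mg.

schedule_mg = {"Mon": 9, "Tue": 8, "Wed": 9, "Thu": 8,
               "Fri": 8, "Sat": 9, "Sun": 8}

def tablets_for(dose_mg):
    """Decompose a daily dose into 4 mg and 5 mg tablets."""
    if dose_mg == 8:
        return {"4mg": 2, "5mg": 0}  # 4 + 4
    if dose_mg == 9:
        return {"4mg": 1, "5mg": 1}  # 4 + 5
    raise ValueError("dose not representable with these strengths")

weekly = {"4mg": 0, "5mg": 0}
for day, dose in schedule_mg.items():
    for strength, count in tablets_for(dose).items():
        weekly[strength] += count

print(weekly)  # one week: 11 of the 4 mg tablets, 3 of the 5 mg
print({k: v * 30 // 7 for k, v in weekly.items()})  # rough 30-day supply
```

Two prescriptions, two tablet strengths, alternating days: seconds of work on a prescription pad, forty-six alerts in the EMR.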
Suffice it to say that my adventure in reading every single medication alert didn’t improve the quality of the medical care for this patient. In fact, it didn’t leave any room for medical care, since it consumed just about the entire visit. As his physician, I certainly didn’t gain anything from the process other than an ulcer and a waiting room full of annoyed patients whose appointments were delayed. I threw a skeptical look at Mars and then spent the rest of the day doing what most doctors do—blindly clicking “okay” to alerts that bloom by the dozens, not reading a single one, hoping and praying that we’re not missing something that really counts.
What burns my colleagues and me the most, though, is the motivation behind these alerts. It’s so clear to us that the first priority is attending to liability rather than to patient care. If they’ve posted every possible warning, no matter how lame, then they—the hospital, the EMR, the greater universe—cannot be held at fault if something goes wrong. It’s the doctor who clicked “okay” to the warnings who is at fault.
The whole warning system feels like a transfer of blame—not to mention workload—onto the medical staff. Doctors and nurses, of course, have no other option but to plug through the sea of alerts because we have to get the medications to our patients. It’s estimated that primary care doctors spend a full hour per day just responding to alerts.2 While EMRs have decreased3 classic medication errors (by having standardized dosing and eliminating the penmanship problem), it is not clear that overall harm to patients has decreased, since new kinds of errors can be introduced, as we are seeing.
Wachter broadens the issue of alert fatigue from the EMR to all the alarms and bells that go off in the hospital, all of which exist to prevent medical error. In the five ICUs of his own hospital (which care for about sixty-six patients, on average) there are more than 2.5 million alarms each month—bells and beeps from all the monitors affixed to the patients. The vast majority of these are false alarms,4 which leads the nurses and doctors to reflexively discount and silence most of them.
It was just such discounting and silencing that led to the cardiac arrest and death of a patient at Massachusetts General Hospital.5 An elderly patient was in the cardiac unit because he needed a pacemaker. After eating breakfast on a January morning in 2010, he chatted with his visiting family. He then took a walk around the hospital floor and returned to his room. At 9:53 a.m., his heart rate began to slow. An alarm went off, but apparently none of the ten nurses on duty noticed it, or if they did, it didn’t register as anything urgent. At some point, the alarm was manually silenced by a staff member. This could have been accidental, or it could have been that someone felt it was a false alarm. In either case, the alarm was disabled. So when the patient’s heart rate continued its descent, there was no further alarm. When the heart rate hit zero, there was no sound at all. Not from the patient and not from any of the million-dollar technology affixed to his body. At 10:16 a.m., a nurse entered the room for a routine task and found the patient dead.
The Boston Globe issued a scathing investigative report that uncovered more than two hundred deaths over five years related to alarm fatigue. It found that nurses were bombarded with alarms, the vast majority of which were false. It didn’t seem as though there was an epidemic of ineptitude among the nurses. Rather, there were just so many alarms that they were losing the ability to alarm anyone. They were simply background noise.
The stated goal of these alarms, like the medication alerts in the EMR, is to enhance patient safety, but, as this investigation highlighted, they can inadvertently cause harm. The alarms are designed to cast the net as widely as possible, because even one bad outcome could incur liability costs in the hundreds of millions of dollars to the manufacturers of these devices, the hospitals, and the EMRs. It is therefore in their interest to have the alarms go off at the slightest hint of abnormality. That the nurses are stuck in a hive of alarms is less of a concern to them.
What the EMRs and medical devices do not do is think the way nurses and doctors think.
Doctors and nurses are always prioritizing incoming signals. We can’t possibly treat every signal as a critical emergency, so we relegate certain ones to the top of our concern and others to the bottom. Our EMR does try to color-code the alerts according to severity, but the middle level is still so voluminous that this doesn’t do much to lessen the jungle of alerts that a doctor must traverse to complete a medication order. Alarms on cardiac monitors try to do the same thing, with differing pitches and frequency, but it hardly makes a dent in the sonic jungle that the average cardiac nurse has to function in.
Getting machines and EMRs to think a bit more like humans (while still retaining the encyclopedic, fatigueless abilities that humans lack) is clearly the goal. The various alerts and alarms would have to work together in a physiologic manner. For example, if a cardiac monitor shows no pulse, this typically sets off a code-red type of alarm. But in a smarter world, that alarm wouldn’t go off if the blood pressure monitor is still recording a healthy blood pressure. (If your heart truly stops beating, you quickly lose any semblance of blood pressure.) “No pulse” would therefore be interpreted as the cardiac monitor being dislodged rather than the patient being close to death. A low-level alert could go to the nurse that the machinery needs to be adjusted. In this smarter world that Wachter imagines, alarms would be activated only if the various bits of data made clinical sense.
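None of this requires exotic computer science. A sketch of that cross-check, with invented thresholds and field names, might look something like this:

```python
# A sketch of the physiologic cross-check described above.
# Thresholds and names are invented for illustration.

def triage_no_pulse(pulse, systolic_bp):
    """Decide how loudly to alarm when the cardiac monitor reads no pulse."""
    if pulse is not None and pulse > 0:
        return "no alarm"
    # A healthy blood pressure is physiologically impossible without a
    # heartbeat, so the likelier culprit is a dislodged monitor lead.
    if systolic_bp is not None and systolic_bp > 90:
        return "low-level alert: check the monitor leads"
    return "CODE ALARM: possible cardiac arrest"

print(triage_no_pulse(pulse=0, systolic_bp=118))  # lead probably fell off
print(triage_no_pulse(pulse=0, systolic_bp=40))   # genuinely ominous
```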
Similarly, the EMR could stand to acquire some clinical common sense that would adapt its alert system in a more logical way. For example, if a patient who’s older than sixty-five has been taking lisinopril for twelve years, there’s no utility in sending a “prescribe with caution” alert, because the patient has clearly tolerated the medication for longer than half the staff has been in practice. The EMR ought to be able to synthesize the warnings with a semblance of clinical relevance (and flush away the 50% that are utter fluff).
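Again, the filtering logic is almost embarrassingly simple. One hypothetical rule, invented here for illustration and not any vendor’s actual code, might be:

```python
# A hypothetical common-sense filter for the geriatric boilerplate alert.

def should_warn_geriatric(age, years_on_drug):
    """Fire the 'prescribe with caution' alert only when it adds information."""
    if age <= 65:
        return False
    # A medication taken uneventfully for years is, by definition, tolerated.
    return years_on_drug < 1

print(should_warn_geriatric(age=78, years_on_drug=12))  # False: suppress it
print(should_warn_geriatric(age=78, years_on_drug=0))   # True: new drug, warn
```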
Wachter seems confident that this is possible, though it would require a major refocusing by the manufacturers. They’d have to spend far more time in the trenches with the medical staff to see how their products play out in the real world and to acquire a more realistic understanding of how medicine is practiced. They’d also have to work together to make their products compatible—something that might require putting patient safety above profits.
I can’t put all the blame on the manufacturers—although it would feel intensely satisfying for ten solidly self-righteous minutes—because they didn’t create the litigious environment that we all inhabit, something I’ll discuss later in the book. Manufacturers very reasonably want to do everything to avoid lawsuits, even if it ends up depositing more work and additional misery on doctors and nurses.
The litigious environment, however, is one of the ways that EMR-related errors come to light. A survey of malpractice cases demonstrated the variety (and severity) of harms that can befall patients due to the EMR. Mark Graber and his colleagues analyzed 248 malpractice cases in which the EMR was somehow implicated, either because of the system itself or how a staff member used the system.6 An example of a systems-related error was a “chief complaint” field that accepted only a limited number of characters. In this case, the patient had complained of “sudden onset of chest pains with burning epigastric pain, some relief with antacid.” Because of the field-size limitations, the chief complaint came through only as “epigastric pain.” No one did an EKG, and the patient suffered a major cardiac event a few days later.
An example of a user-related error was a case where somebody copied and pasted a previous note. The previous note had neglected to mention that the patient was taking the potent anti-arrhythmic drug amiodarone. That oversight was thus repeated in the current note. The patient—who needed the medication for his arrhythmia—was then given a new prescription for amiodarone, ended up getting double the dose, and experienced toxic side effects.
A number of cases centered on delayed or missed diagnoses, especially of cancer but also of many other serious illnesses. In some cases there was a delay in getting results of tests into the computer system or the test results didn’t get routed to the right person. In other cases the results were sitting in the doctor’s queue and simply hadn’t been noticed.
Much of this comes down to basic usability. Even if an EMR is perfectly designed to avoid error, it won’t succeed if it’s so clunky to use that nurses end up improvising shortcuts just to survive the day. Even if the system appropriately alerts doctors to every possible drug interaction, it won’t succeed if the doctors feel drowned by the blizzard of alerts and blindly okay them all just to get a prescription done.
Although these EMR-related cases were only a small fraction of the total number of malpractice cases, they highlight the unique vulnerability that exists at the nexus of humans and technology. Minor flaws in technology can cross-pollinate with minor human flaws, with the potential to multiply to a devastating end.
On September 20, 2014, Thomas Eric Duncan flew to Dallas, Texas, from his home in Monrovia, Liberia. Three days later he started to feel rotten—his stomach hurt, he was nauseated, he felt feverish. The next night, on September 25, he went to the ER of Texas Health Presbyterian Hospital. On any given day hundreds of people show up in ERs with symptoms that sound like a stomach flu. But this wasn’t any given day. West Africa was in the throes of an Ebola outbreak that would ultimately infect almost 30,000 (and kill more than 11,000) in the three countries most affected: Guinea, Sierra Leone, and Liberia.7
The rest of the world was bracing for the Ebola epidemic to spread internationally, given how easily the disease was spread person to person. Hospitals were ramping up their protocols to crisis levels. Bellevue Hospital, where I work, serves a diverse and well-traveled clientele, so we were racing at full speed to prepare. Everyone from the clerical staff to the upper administrative echelons was drilled to be on the lookout for two key factors: recent travel to the endemic area and presence of a fever. The immediate first step was to isolate the patient. (Our medical clinic had to designate one room to be the Ebola isolation room, and my office—being closest to the entrance—drew the short straw. The stacks of literary journals on my file cabinet had to go, replaced by masks, gowns, and gloves, plus folders full of emergency response plans. Maintenance workers sawed a hole in my door to create a glass window so that healthcare workers could communicate with potentially infected patients from a distance, without risking close contact. Post-Ebola, my literary journals have regained their spot, but I still have the window, which now has to be covered with sheets of copy paper to protect patient privacy.)
Thomas Eric Duncan had both red-alert signs: a fever and recent travel from Liberia. Yet he was sent home, along with a prescription for antibiotics. The nurse, in her triage note, indicated that the patient had recently been in West Africa. But the doctor didn’t see the nurse’s note, so the fever and the travel history weren’t connected. Consideration of Ebola therefore did not enter his diagnostic thought process.
Forty-eight hours later, on September 28, Duncan called 911 and was brought back to the ER by ambulance, now severely ill—dehydrated, vomiting profusely, with shaking chills, diarrhea, and bloodshot eyes. On this presentation to the ER, the travel history was elicited and the patient was isolated (though some nurses reported that isolation wasn’t immediate and that there was resistance from higher-ups).8 On September 30, blood tests sent to the Centers for Disease Control confirmed Ebola virus. A week later, on October 8, Thomas Eric Duncan died—the first Ebola case and the first fatality in the United States. Within a week of Duncan’s death, two of his nurses became cases two and three of Ebola in the US. (Case four—an American physician who’d worked in Guinea—arrived ten days later at Bellevue Hospital, though the patient went straight to the isolation ward and didn’t have the chance to use my office and its lovely sawed-in window with views of the bathroom across the hall.)
Luckily, both nurses in Dallas (as well as the physician who came to Bellevue) were treated early and survived. In the Dallas case, though, scores of people were unnecessarily exposed to Ebola because of the initial misdiagnosis—everyone in the ER, anyone who came in contact with Duncan after he left the hospital, the ambulance workers who brought him back to the ER, any other patients transported in that ambulance, and anyone Duncan’s nurses came in contact with. Close to two hundred people had to be monitored for weeks due to the missteps in handling Duncan’s case.
As with all medical errors, there was not one single mistake here but many overlapping ones, any of which—if corrected—could have changed the outcome of the case. The initial error—not connecting the travel history with the feverish illness—rightly got the most attention. If those two dots had been connected from the get-go, the patient would have been isolated and treated at a much earlier stage. He might have survived, and his nurses might not have been infected, and two hundred others would not have been pulled into the Ebola net.
The hospital put the blame on the EMR.9 The nurses’ triage template had a field that prompted them to ask about travel history, in order to trigger reminders about necessary vaccinations. Because vaccination is considered a nursing issue, the travel history field was not designed to populate into the screen that the doctor works from. So the physician who was evaluating Duncan didn’t know about the travel history the nurse had entered. Score one for error due to the EMR. But of course, he could have—and should have—asked for that bit of information himself, given the well-publicized Ebola epidemic. Score one for diagnostic error on the part of the physician.
Pre-EMR days, before doctors and nurses were chained to their respective and isolated computer terminals, they were squashed into the same physical space and did things like talk to each other. In this antediluvian scenario, the nurse might have turned to the doctor and mentioned the tip-off about the travel history. Score one for error due to poor communication.
As with most medical errors, however, the mistakes and responsibilities in this case radiate in multiple directions. The patient didn’t tell the airline—or the hospital—that a week prior he’d helped out a woman in Liberia who turned out to have Ebola. The Dallas 911 system didn’t screen calls for Ebola symptoms, as the New York City 911 system did. The ambulance that transported the patient didn’t get decontaminated for two full days, allowing more patients to get exposed. The second nurse who came down with Ebola was given permission by the CDC to travel to Ohio to visit her family. These many overlapping errors are frustrating, because each offered a missed opportunity to mitigate or even prevent the error that ended in the death of a patient and the infection of two medical personnel who’d cared for him.
While one can’t say that the EMR was the inciting cause of the chain of errors, the fragmenting of information (in this case the travel history) created a fateful fork in the road in this patient’s care. The hospital fixed this flaw in its EMR after the event, but the unwieldiness of information in the EMR remains a potent source of future errors.
Exactly one day after I wrote this section on the Dallas Ebola case, I was supervising our walk-in clinic at Bellevue. Late in the afternoon, a kerfuffle broke out at the front desk. A patient was demanding to see a doctor, but the clinic had reached its capacity, and so he was—per policy—referred to the emergency room downstairs. He wasn’t happy about that, and it turned out that he was a patient of the medical director, so the top brass got involved. Policies and rank were slung about, triage protocols were debated in the hallway, but at length the administrative issues were ironed out, and the patient finally had his intake with the nurse. A few minutes later the nurse approached me and said, “This patient reports a cough and a fever, and he just came back from Saudi Arabia two days ago.”
I’m not a superstitious person, but I couldn’t help wondering whether immersing myself in the case of a patient with fever and recent travel to an endemic country had somehow conjured up the same real-life scenario not twenty-four hours later. In this case, the disease of concern was Middle East respiratory syndrome, or MERS.
We’d already avoided the first error of the Dallas Ebola case, as the nurse connected the fever and the travel history and appropriately conveyed that to the doctor (using that old-fashioned technology of talking face-to-face). She’d already given the patient a mask and isolated him in a room by himself, so we’d avoided the second error of the Dallas case.
Now we could take a few minutes to think. My colleague hunted down our infection-control protocol while I scoured the CDC website for the particulars of MERS. In addition to the obvious things like asking a patient about contact with other people who might have MERS, you also have to screen for contact with dromedaries—the single-humped variety—which are the reservoir of the MERS virus. (The two-humped camels appear to be immune.) It’s important to ask, for example, if the patient had milked or slaughtered a camel (direct contact) or was merely visiting a camel market or attending camel races (indirect contact). Score one for online databases having better memories for details than humans.
We were just getting our dromedary questions coordinated (as well as our protective gowns and masks) when an investigative team descended on the scene and the bluff was called. The whole thing had been a test. Our hospital was checking to see if our clinic was prepared to deal with “emerging pathogens” that could turn up anywhere, anytime. We fell short on our administrative handling—shunting the patient to the ER could have led to an infected patient unnecessarily exposing others to the virus. Instead, we should have triaged the patient, even if the clinic didn’t have the capacity to do a full evaluation. Triage would have determined whether isolation was needed or if the patient could safely go to the ER.
But we were given passing grades on the clinical end—eliciting the travel history, promptly isolating the patient, and then taking the time to access the infection-control protocols and CDC information before beginning the examination of the patient.
It was an effective exercise. All of us had been fully convinced that it was a real case. (The patient had sprinkled in some cinema verité by repeatedly pulling off his mask, arguing with the staff, and pulling rank about knowing the medical director.) We learned about our shortcomings—and about camels—but mainly it emphasized to me just how daunting the task is. A small error, as happened in the Dallas Ebola case, could easily explode into magnified consequences. We were only a hair’s breadth from making an analogous error by letting the patient saunter over to the ER, potentially coughing his way past scores of unsuspecting people. It also drove home the point that time to think is one of our most important error-prevention tools, but one that our current state of healthcare seems to conspire to eradicate.
One morning, I was sorting through my in-basket in the EMR. This is the collection of any test that I’ve ever ordered for a patient, any order from a nurse, any note from a social worker, any request for a medication refill, any message from a patient, any note from an intern needing to be signed, any notification that a patient of mine has visited an ER, emails from staff, consultations from specialists—pretty much anything connected to any of my patients or to me.
On one level, the in-basket is an enormous step forward in patient safety. Before the electronic medical record, a doctor had to actively seek out the results of any test he or she ordered. Following up on test results was therefore entirely dependent on the memory or to-do list of each individual doctor. Throw in a late night on call—or a late night partying—plus a few competing phone calls from other departments, three meetings to attend, and a waiting room or ward full of patients, and you can see how things were missed with regularity. And it doesn’t take missing much to cause a disaster—a missed mammogram that showed an early cancer, an elevated potassium that could cause a fatal arrhythmia, or a gonorrhea infection that could be passed on to someone else.
The EMR allows test results to be tied to an individual physician. Every single test I order for a patient is automatically routed to my in-basket when it’s ready. Nothing can be archived until I’ve signed off on it. In theory it’s a great system, but in practice the in-basket is an unwieldy beast. I try to be judicious when I order tests, but even when I’m at my conscientious best there’s a constant stream of results to sort through. Clearing the in-basket is a holy grail for my colleagues and me, but it can never happen because there are always more tests rolling in. The in-basket can only go from full to fuller, and sorting through it takes hours.
Taking care of even just a single test result involves more steps than you might think. Usually there is a time lag between when I saw the patient and when the test result shows up in my queue, so in order to evaluate the test result appropriately I have to retrieve the chart for that patient, dig up my last note, and reread what I’d written to remind myself of the patient’s clinical situation. If it’s something like a blood sugar, I’ll need to compare it to previous results. If it’s something like cholesterol, I might have to pull up a cardiac risk calculator (which will also require retrieving other necessary information such as age, sex, blood pressure, and tobacco use) to decide what to do with the cholesterol results. Some test results require me to refer the patient to a specialist. Some results require a medication change, which involves writing and sending a new prescription, plus making a phone call to discuss the change with the patient, who might remember three things she forgot to bring up at our last visit. And then our conversation and treatment decisions need to be documented in the chart. A single test result can take up to fifteen minutes to resolve.
Periodically I’ll down three cups of coffee and do a blitz of my in-basket to clear out everything. But the victory is ephemeral, because minutes later another result pops up, and then another. By the next day there are a dozen. Each day that I see patients, more tests are generated. There are days when I envy Sisyphus: at least it’s the same stinking boulder he’s pushing up the hill every day. For a doctor, it’s a sea of boulders, any one of which—if missed—could come crashing down on one of my patients. Or on me, in the form of a lawsuit.
One day, I was sorting through my in-basket, trying to balance the need for speed with the need to stay focused. I came upon Emile Portero’s glucose, which was still astronomically high, despite his elephantine doses of insulin. His decades of diabetes had already cost him one leg and most of his vision. The severe vascular disease associated with his diabetes made his prosthesis fit poorly, so he mainly used a wheelchair now. His kidneys had taken a hit and I worried that dialysis could be lurking on the horizon.
Before I called him about these lab results, though, I wanted to open his chart to remind myself about our last insulin adjustments. Because of his obdurate diabetes and its cascading complications, Mr. Portero was a prodigious user of the medical system. His electronic chart reflected that and took longer to load. Spending even thirty seconds staring at a whirling graphic while there is so much more work to do sends me into a tizzy, so while Mr. Portero’s chart was loading, I moved on to the labs of the next patient in my review queue: Hassan Jalloh.
Mr. Jalloh had been diagnosed with diabetes only a year ago, and he was in the throes of completely retooling his life. He’d dumped the white rice, which had been his daily manna. The goat stew was gone. The syrupy baklava was history. Fanta orange soda had been excised. He now whipped up “green juice” in the blender on a daily basis and was a legume poster child. When he was diagnosed the year before, he’d required two medications to control his diabetes. But now we’d been able to discontinue one of them and were in the process of weaning him off the second.
Mr. Jalloh’s youthful medical chart loaded much more quickly than Mr. Portero’s, so I decided to call him first. “Great news!” I said. “Your sugar is staying down nicely. All your hard work has paid off. I think we’ll be able to stop your medications completely.” With a disease like diabetes, we don’t often have unadulterated good news to relay to our patients, so this type of phone call is as rare as it is thrilling.
Mr. Jalloh was clearly elated too. “That’s fantastic,” he practically sang into the phone. “This is the first time I’ve ever gotten good news about my sugar!”
First time?
“You really made my day, Dr. Ofri! I can’t wait to dump all my syringes into the trash. Goodbye, insulin!”
Syringes? Insulin? Uh-oh.
I realized I had accidentally dialed insulin-dependent, amputated, obese, wheelchair-requiring, nearly blind Mr. Portero, not lentil-toting, kale-convert, rail-thin Mr. Jalloh. (Only in the sterile digitized world of the EMR could two patients so vastly different be confused for each other.) Now I had to backpedal—on two counts! First I had to tell Mr. Portero that I’d made a mistake, mixing him up with someone else. But then I also had to tell him that the good news was a false alarm. His sugar wasn’t low at all; it was depressingly—and intractably—high.
I apologized profusely, and we spent the next ten minutes talking about his situation, working to find the baby steps that he’d accomplished and small goals that he could shoot for. I struggled to find something optimistic to tell him, but it was tough.
I’d fallen into the trap of having two charts open at the same time. It’s easy to say that I was just being stupid. It’s a complete no-no to have two charts open at the same time—I know that! I warn my interns and students about this till I’m blue in the face. And yet here I was doing it, and making an error because of it. I could easily have accidentally prescribed Mr. Jalloh’s pills to Mr. Portero and sent them electronically to his pharmacy. Mr. Portero might easily have taken that medication, because he was on so many pills that changed so often that he might not have noticed one extra diabetes medication.
But Mr. Portero’s kidneys were in no shape to handle Mr. Jalloh’s medication. That one extra medication could have been the straw that broke the camel’s back (single-humped dromedary, of course). That one nephrotoxic insult could have been enough to push Mr. Portero’s fragile kidneys into dialysis territory.
I recognize that the user (me) was the primary driver of this error, but the EMR also played a role. The EMR is both cumbersome and ridiculously easy to use. In the paper-chart world it would be impossible to mix up a doorstop chart like Mr. Portero’s with a flimsy novella-sized chart like Mr. Jalloh’s. In the EMR, it only takes a click.
It’s tempting to blame nearly everything on the EMR and technology. The frustrations of using these systems loom large in our daily experience (often larger than the many miraculous tasks these technologies can accomplish). Ultimately, though, they are only tools, and we in medicine—with input from patients and society at large—need to decide how these tools are utilized. As Bob Wachter said in his article commenting on the Dallas Ebola case, “We need to take advantage of these marvelous tools, but not forget that they don’t practice medicine. We do.”10
EMRs have done many wonderful things that improve medical care. Just having all the medical records in one place is a monumental improvement over the days of lost charts and misplaced X-rays. And it’s certainly a step up from inscrutable handwriting, coffee stains, and remnants of Thai red curry. Additionally, the EMR can allow for quick access to online resources when additional information is needed, instead of running down to the library to look things up.
Another excellent way that the EMR can improve medical care is by enabling analysis of a population. A hospital can, for example, survey all patients with diabetes and figure out who hasn’t seen the eye doctor in more than a year, or whose cholesterol is too high. This can help a hospital figure out whether it needs to hire another ophthalmologist or invest in more nutrition education. This type of analysis can also show whether various interventions produce results. If hospitals invest in extra nurses to call patients at home after they are discharged, for example, does this decrease the readmission rate?
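A sketch of that kind of query, with a few toy records standing in for a real patient database, shows how little machinery it takes:

```python
# A toy population query: find patients with diabetes who are overdue
# for an eye exam. Made-up records; a real EMR would query its database.

from datetime import date, timedelta

patients = [
    {"name": "A", "has_diabetes": True,  "last_eye_exam": date(2023, 3, 1)},
    {"name": "B", "has_diabetes": True,  "last_eye_exam": None},
    {"name": "C", "has_diabetes": False, "last_eye_exam": None},
]

one_year_ago = date.today() - timedelta(days=365)

overdue = [
    p["name"] for p in patients
    if p["has_diabetes"]
    and (p["last_eye_exam"] is None or p["last_eye_exam"] < one_year_ago)
]

print(overdue)  # the list to hand to ophthalmology outreach
```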
But EMRs can also worsen medical care and introduce errors. Cumbersome usability forces doctors and nurses to take shortcuts that can be dangerous. Alert fatigue means that important warnings get lost because they are swimming in a sea of liability-induced minutiae. Diagnosis codes that are driven by billing requirements can distort the diagnostic process. Copy-and-paste ability can lead to voluminous notes that resemble those online “terms of service” agreements that you surely read assiduously.
To me, however, the biggest damage comes from the fact that the computer, not the patient, has become the center of attention in the exam room. It’s hard to have a real conversation with one person’s eyes bolted to a screen. I don’t lament this damage to communication just because I think that the schmooze aspect of medicine is the most fun. I lament it because communication with patients is one of the most powerful strategies we have to reduce medical error. It’s not the deus ex machina for everything, but nearly every medical error I’ve reviewed for this book could have been prevented—or its harm minimized—had there been better communication between medical professionals and patients. Certainly Jay’s case reveals numerous examples of poor communication. The medical staff didn’t communicate well in terms of explaining their diagnostic reasoning or the medical treatments. And they certainly didn’t do so well on the listening front either, when Tara tried to talk to them.
Technology played its role in the errors, too, as staff appeared to give more weight to readings from the machines (e.g., the oxygen saturation monitor that was in the normal range, a chest X-ray that was negative) than to the patient’s clinical condition, which was steadily worsening. The so-called objective measurements created an image of the patient that did not at all match the situation of the actual patient lying in the bed. Bob Wachter’s comment about technologies and responsibility in the Dallas Ebola case holds true here: “They”—the machines—“don’t practice medicine. We do.”
It’s certainly possible that Jay might have died even if he’d received the most meticulous medical care. He had a severe form of leukemia, after all, and he was infected with a virulent bacterium after undergoing a punishing course of chemotherapy—all things that can be deadly on their own, let alone in combination. But there’s no doubt that his medical care was undermined by shoddy communication that no amount of technological wizardry could overcome. Communication wasn’t the only error in Jay’s case, but poor communication compounded the harm every step of the way.