CHAPTER FOURTEEN

BRINGING ALONG OUR BRAINS

“You can’t nail jelly to a tree.”

This was how Itiel Dror, a cognitive neuroscientist, described to me the challenges of changing a culture. In this case we were talking about the medical culture, especially as it relates to fixing medical error. Things such as hierarchy, communication style, traditions of training, work ethic, egos, socialization, professional ideals—these are all deeply ingrained in the culture and have roles in both committing and preventing errors.

Moreover, there is the broader culture within which the medical community dwells. The United States, for example, has a zealous tradition of individualism. It’s also comfortably litigious. European countries, by and large, are more willing to place limits on what individuals can pursue, both in terms of medical care and litigation of errors. Trying to remake these cultures, even with the noble intention of decreasing medical errors, is futile. Like nailing jelly to a tree.

Every country and every hospital is saturated with layers of existing rules—written and unwritten—governed by money, by liability, and by regulatory bodies. Dror recognizes that a sea change in the current system isn’t coming soon. Making little tweaks within the current difficult system is the best we can do.

But Dror has a big beef with our typical approach to tweaking the medical system. Each time we train the hospital staff to address one type of error, it’s very likely that much of the instruction will be forgotten in a few months. Each time we set up yet another checklist in the electronic medical record, it’s very likely that most of the staff will soon tune it out. Plaster the hallways with cognitively ineffective posters, touting the latest quality-improvement initiative, and there’s no doubt that they’ll fade into the background blur within weeks.

“The problem,” Dror says, “is that these methods are not brain friendly.” He illustrates this point with the example of passwords. As a typical hospital employee, I have passwords for the EMR, for my desktop computer, for the hospital email, for the medical school email, for the statewide prescription drug database, for the appointment system, for the on-call system, for the X-ray viewing system, and for the EKG viewing system.

And those are just the passwords medical workers use every day for work. We all have another dozen or so personal passwords that clog our brains. These passwords change every three to six months, and each has exacting—and exactingly different—requirements for capital letters, numerals, special characters, and the genomic analysis of your pet gerbil. Furthermore, whippersnappers in the IT department who are hardly old enough to vote exhort us never to use the same password twice. And never ever ever ever to write our passwords down.

“That policy looks good on paper,” Dror said, “but it doesn’t take into account the human element, the way our brains really work.” As one who repeats passwords ad infinitum and who secretly writes all of them down (okay, in an undisclosed location, but still), I was exceedingly relieved to hear this. “You don’t have to be a cognitive neuroscientist,” he said, “to know that people have to write their passwords down and/or use the same password on various systems.”

There are countless examples in medicine of things that are the precise opposite of brain friendly. In one iteration of our EMR, for example, there was a particular spot in the note where the doctor was asked yes/no questions for two different screening issues. In one instance, you had to press 1 or 2 to indicate “yes” or “no.” In the other you had to press Y or N to indicate “yes” or “no.” It’s a minuscule point, but it drove me bonkers each and every %$@# time. I felt almost embarrassed getting upset over such a small thing, but it never failed to get my goat.

Talking to Dror made me understand why: the EMR setup lacked cognitive consistency. My trusty brain is always searching for efficiency, and so it wasted no time “thinking” when I came to that first yes/no question—my fingers automatically reached for the number keys to choose 1 or 2. Of course I hit a wall when I did this at the next yes/no question, which required the Y and N keys. For me, it was just endless aggravation, but for Dror it’s an unnecessary cognitive load and thus a potential source of error. The EMR caused me to waste precious cognitive resources sorting out whether to head for the number keys or the letter keys. Given the finite capacities of our brains, this yes/no idiocy—which I was forced to suffer through for every single patient every single day—squandered some of my thinking ability. I thus had less of it available to think about my patients’ actual medical conditions and less of it available to keep an eye out for errors.

A brain-friendly EMR would offer exactly one way of answering yes/no questions, and it would be consistent everywhere across the system, whether you are answering questions about your patients’ latex allergies or their DNR status, or whether it’s okay to substitute a generic for a brand-name medication, or whether you want to enlarge the font size because staring at the computer screen has annihilated the last vestiges of your visual cortex.
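If I were allowed to redesign it, the fix would be tiny. Here is a minimal sketch in Python (the function name and the accepted keys are my own illustrative assumptions, not anyone’s actual EMR) of a single, shared yes/no prompt that every screen would reuse:

```python
# One shared yes/no prompt that every screen reuses, so the interaction
# is identical whether the question is about a latex allergy, DNR status,
# or substituting a generic. (Illustrative sketch only.)

def ask_yes_no(question: str) -> bool:
    """Return True for yes, False for no, accepting one consistent set of keys."""
    while True:
        answer = input(f"{question} [y/n]: ").strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
        print("Please answer y or n.")  # the same correction message everywhere


if __name__ == "__main__":
    if ask_yes_no("Does the patient have a latex allergy?"):
        print("Latex allergy recorded.")
```

Because the prompt lives in one place, every question behaves identically, and the brain gets to spend its cycles on the question rather than on the keyboard.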

The yes/no function is just one microscopic cog in the system, but consider all the seemingly minute inconsistencies in the EMR and then all the inconsistencies of all the other myriad technologies in medicine (and hoo boy, don’t get me started!), and they add up to a boatload of wasted brain reserve. All of which is directly subtracted from patient care. This is brainpower that we want focused on avoiding a medical error, not used for sorting through alerts for drug interactions with alcohol wipes, or pregnancy warnings for 70-year-olds, or ICU alarms that go off when a patient scratches his nose, or well-meaning tobacco screens that require the identical amount of documentation for the patient who quit 45 years ago as for the patient who currently smokes two packs a day, or the prescription field that insists that you distinguish between capsules and caplets, or the language screen that asks you to clarify whether you used a live interpreter or a phone interpreter right after you’ve typed in that both you and the patient speak English, or the Past Medical History screen that I had efficiently memorized as choice #18 but that was pushed up to #19 when some other choice was added so that now #18 brought up Past Obstetrical History, which I proceeded to add to every patient that first week only to learn that there was no possibility of deleting it, so now a whole cohort of my male patients all have their obstetrical histories dutifully and permanently noted in their medical records. But perhaps I digress . . .

“This doesn’t just stress you and frustrate you,” Dror said. “It depresses you.” Truer words were never spoken. At the end of a ten-hour day battling these EMR inanities, it’s not just that I’m drained. It’s that it feels as though we’re forced to machete our way through the EMR jungle just to get to the place where we can finally begin the medical care of our patients. If we even have any functioning neurons left.

There is a burgeoning field of research examining how the “toxic” work environment in healthcare is contributing to burnout among doctors and nurses. The demands of the EMR can’t be blamed for all of this, to be sure, but most medical folks would say that the EMR is certainly the heavy hitter. There is a growing sense that rather than these technologies helping us to serve our patients, the tables have been turned so that we have to serve the technologies. Patient care is shunted off to the side, a quaint leftover that is subservient to the primary goal of documentation.

As a child, Itiel Dror was fascinated by the story of Pinocchio. Having a mind necessitated a miracle, some sort of fairy dust. Frankenstein’s monster, by contrast, was just a set of body parts constructed in a laboratory according to a scientific recipe. Growing up on three different continents—his professor parents juggled alternating sabbaticals—Dror was drawn to people-watching. He was fascinated by what was ticking inside their heads, the fairy dust that made them take all sorts of actions, even ones that could seem illogical or counterproductive. He initially studied philosophy but found himself intrigued by the courses he took in artificial intelligence, computer science, and psychology. Eventually he chose to do his PhD in cognitive neuroscience, as it seemed to strike a middle ground that intersected with all these subjects.

Itiel Dror isn’t a medical doctor, but he’s studied the medical environment enough to reach his conclusion that medical errors are absolutely inevitable. This is simply the nature of the system. In any given medical encounter, he points out, there is a sea of information—usually piecemeal in nature—and most often not enough time to properly wade through it all. On top of that, the human brain has finite resources and so is constantly prioritizing which information to attend to. The relentless time pressure combined with the high stakes of the situation place even greater demands on the brain, so this humble organ has had to develop all sorts of strategies to survive. It filters information, for example, paying attention to certain tidbits while ignoring the rest. It utilizes an assortment of automatic habits and shortcuts. It relies on a trove of previous experiences and a library of recognizable patterns. The brain has limited capacity and is continually honing mechanisms to make up for its shortcomings.

These are brilliant survival strategies, enabling us to achieve what would otherwise be impossible in the slender amounts of time clinicians typically have for decision-making. But the very mechanisms of the brain that allow for such snazzy cogitations also make it prone to error. The brain easily stumbles into pitfalls such as tunnel vision, groupthink, overconfidence, and biases of all flavors.

People don’t make mistakes only because they are stupid, Dror said. They also make mistakes because they are smart. (I found this oddly comforting, in a roundabout sort of way.) Smart brains develop shortcuts—that’s what enables these brains to handle so much information and still make their owners sound intelligent. Shortcuts are not a side effect of intelligence; they are actually the basis of intelligence. Viewed in that light, you could actually interpret some medical errors as side effects of being smart.

Medical errors, in Dror’s opinion, are the “inevitable outcome” of our neurocognitive system squashed into the demanding medical environment. This is why he has concluded that it is impossible to eliminate them. Although “eradicating” medical errors sounds good in a hospital mission statement or on a grant application, it’s fundamentally impossible given the realities of our brains and the nature of healthcare.

Humans utilize two predominant modes of thinking, often called simply “fast” and “slow.” Fast thinking is what we do in the moment; it is experiential. Slow thinking is more analytical. Most training tools are geared toward slow, analytical thinking (a new set of rules to memorize, another online module to complete, another checklist to fill out, yet another training session to endure). But most of what we actually do in medicine is in the moment. It’s nearly all fast, experiential thinking, so all that lumbering preparatory work is wasted. It’s nailing jelly to a tree.

Dror argues that we need to tailor any improvements we propose for reducing medical error to how the brain actually works. Rather than chasing the impossible idealized goal of eliminating medical error, his research focuses on error mitigation. Since you can’t get rid of all the errors, you can work to make the errors less damaging. The goal is rapid error recognition and even faster error recovery—all things that happen in the moment.1

Focusing on error recovery, rather than error prevention, is more effective because it is brain friendly. One example that Dror uses is handwashing. Even though cleanliness edges out even godliness as the number-one way to reduce hospital infections, medical personnel are embarrassingly lax with their ablutions. In my hospital, as in every other hospital, there are posters and signs and buttons affixed to every available surface exhorting handwashing. All of these earnestly laminated efforts, Dror says, are predominantly a waste of time; our brains quickly relegate them to background noise in an effort to focus their finite capacities on more pressing things. But what if the senior doctor marched into the ICU—sans washing—with her entire medical team in tow? Then, just before her stethoscope breached the patient’s gown, she stopped and turned to the team—with appropriately dramatic flair—and asked, “Did anyone notice anything wrong?” After the error is identified and discussed, she could ask the even more important question: “Why didn’t any of you speak up when you noticed that I didn’t wash my hands?”

Dror calls his technique the “Terror of Error,” and it is based on using these sorts of unpleasant but ultimately memorable experiences. Especially the squirmy discomfort of if/how/when to confront a superior. The emotional content keys into a different cognitive pathway than do the endless handwashing signs plastered in the hallway. Once an emotional component is tied into an experience, it is remembered much more intensely and intuitively.

When I was an intern I once had to perform a physical exam on a patient in front of an attending for an end-of-rotation evaluation. In my nervousness or my hurry, I neglected to wash my hands. When the attending pointed that out—in front of the patient—I was mortified. My cheeks bloomed red as I shamefacedly edged over to the sink and slathered a gallon of antibacterial soap on my hands. But I never forgot that experience—the Terror of Error. Decades later, I can remember the exact room, the exact attending, the exact diagnosis of the patient, and of course, the painfully accrued lesson in handwashing. My current patients probably think I have obsessive-compulsive disorder as I wash and rewash my hands before and after—and sometimes during—the slightest physical contact. Public humiliation is not a recommended pedagogical strategy, of course, but it does point out the power of a lesson that is entwined with emotion. Not to mention the critical need for having hand-lotion dispensers parked next to every sink.

Experiencing failure on a personal level sticks with us in a way that recited rules can never do. It creates an emotional representation that worms into the depths of our brains. This may, in fact, be evolutionary. Imagine that news of distant coyote attacks has reached a hunter-gatherer society in the Paleolithic era. The leaders of this society might try to prevent attacks by warning its members to “Be Aware!” and “Stay Safe!” They might rally community members to aspire to “A Culture of Safety.” They might remind people, “If you see something, say something.” But the average hunter-gatherers are focusing their finite cognitive resources on, well, hunting and gathering. They will quickly tune out these exhortations, no matter how snappy or how focus-group-honed the phrasing is.

But when the first baby is snatched by a coyote—everything changes. This emotionally charged experience is processed in a different part of the brain than the anodyne warnings are. The realness of the experience by necessity carries much more weight in terms of survival, and may be why this cognitive strategy has been evolutionarily successful.

Luckily for us, though, realness doesn’t actually have to be real to be effective. In airline security, for example, we need the baggage screeners to be alert for bombs and weapons. The “Stay Alert” signs that are posted everywhere may as well be abstract art for the staff who toil there day in and day out. Pass a few fake bombs through security, though, and you will get people’s attention in a way that sticks in the brain.

Errors make sense when you understand the cognitive shortcuts that lead to them, and that’s what Dror tries to teach nurses and doctors. When he helps hospitals set up simulation programs, he makes sure the staff get to experience errors. Instead of having the patient ultimately survive—as is the usual case in most simulations—Dror’s exercises make sure the patient dies a few times. With high-stakes situations such as sepsis, cardiac arrest, intubation, surgical mistakes, and medication errors, it is important for the participants to experience things going wrong as a result of their decisions and actions. The experiential aspect of the training offers the strongest chance of transferring the knowledge to situations with real patients.

Simulation is preferred because personal experience with such disasters might actually be too traumatic to be effective. I remember when I was a resident and botched a case of diabetic ketoacidosis, nearly putting the patient into cardiac arrest. I was only a few days out of internship and was so devastated by the experience that I could hardly scrape my sorry self off the linoleum floor, much less think analytically about what had transpired and how to do better the next time. So I appreciated Dror’s preference for simulation over personal experience. It wouldn’t be a stretch to assume that patients also prefer simulation as the place for medical staff to experience the Terror of Error.

Dror pointed out a few other reasons why personal experience might not be the most effective vehicle for teaching. Often these situations involve a rare case or a fluke, things that are not necessarily generalizable. Plus, people tend to overcompensate based on personal experience, especially if it was a particularly devastating event. In simulation, you can—as Dror delicately puts it—“calibrate the trauma” and then debrief afterward to make sure it’s a constructive experience rather than a destructive one.

It is also critical to do training in groups rather than as individuals. For one thing, so much of medicine is practiced as a team in real life, and so many errors relate to communication among team members. There is also the reality that medical information is typically scattered among the members of teams—the nurse knows the vital signs, the intern knows the CT result, the attending knows the patient’s past medical history, the physical therapist knows where the patient is weakest. So teaching error mitigation in groups jibes better with reality. Additionally, group settings allow individuals to engage in the far more approachable task of identifying errors in other people before turning the unsparing lens on themselves.

A training session might be set up to teach the management of low blood pressure in the ICU. A team of doctors and nurses is given a simulation of a patient with hypotension. Each person possesses some bits of information about the patient, and together they have to figure out how to manage the hypotension and keep the kidneys and brain in good working order without flooding the lungs or causing cardiac arrhythmia.

Such simulations quickly feel real to the participants, especially if one of the team members is really part of the training staff, discreetly contributing errors to the process (suggesting a medication that the patient is allergic to, mixing up buttons on the IV machine, forgetting some basic protocols, talking to the wrong person for the wrong thing). The training setup could involve rearranging equipment so that things aren’t in their usual places. There could be real-world distractions—team members getting paged, phones ringing, a staff member heating up pungent fish stew in a nearby microwave. The team could be short-staffed because a nurse was pulled to cover for another team because their nurse was out sick. A critical medication could be on back order. The patient might speak Spanish, but admin sent over a Serbian interpreter. There could be a fire drill. The EMR could be temporarily unavailable because of routine maintenance—but your patience is greatly appreciated.

Dror advocates using such “sabotage” techniques because they create controlled errors. These sabotages heighten the experience of the errors in a constructive way, especially if the case ends in disaster. And of course, these sabotages mimic what happens in real life, so they are practical training. The real bonus, though, is that these sabotage-induced errors are much less fraught for team members to identify and analyze in the post-training discussion. After they are warmed up with these less threatening errors, they can segue into the more unsettling task of identifying their own errors and shortcomings.

This is particularly powerful in the realm of communication. We are told, ad infinitum, that poor communication causes errors. But hectoring doctors and nurses to “Communicate Well!” is about as effective as reiterating that advice to your toddlers (or your teenagers, or your hamsters). If the patient begins to slip away, however, as a result of communication errors in the exercise, the point is driven home in a way that resonates, and the situation can be analyzed with more meaning afterward.

Learning is even more powerful when it’s unexpected. For example, the hypotension training exercise might be billed as a lesson about blood pressure management when in fact it was designed to teach about sepsis. If the session had been titled “Sepsis Training,” everyone would have been in a sepsis state of mind, and there wouldn’t be any learning about the recognition of sepsis, which—as we’ve seen—can be challenging.

Similarly, if a training session were titled “Communication Training,” there would be so much “please,” “pardon me,” and “thank you” that it would feel like high tea with the Queen. We are, after all, diligently trained to produce what we think the person grading us wants to hear. Better to make it a training session about asthma management, but then weave in errors that arise from poor communication.

Most crucially, such training sessions should never be titled “Fixing Medical Error.” It’s hard to imagine a label that would more perfectly encourage participants to check the boxes they know the corporate-compliance supervisors need them to check off. Instead, these training sessions should be integrated seamlessly into the regular curriculum about treating cardiac arrest or adrenal insufficiency or acute psychosis, so that error issues are simply part of learning about the topic.

When the staff members uncover the lessons experientially—figuring out how to rally a disorganized team or how to deal with missing equipment—the message sticks. The focus isn’t on preventing errors per se, but rather on identifying and fixing them as they happen. Being forced to deal with errors in real time as the (simulated) patient is crashing is the epitome of what Itiel Dror sees as brain-friendly training. Contrast this to our typical way of teaching medicine: a lecture hall darkened to planetarium black, a light-year’s worth of PowerPoint slides in rapid succession, each with eighty-seven bullet points in subatomic font accompanied by a series of inscrutable graphs and an apologetic monotoned speaker saying, “I know this is hard to read, but . . .” It’s hard to conceive of anything less brain friendly for learners. You might as well distribute a tab of Valium to every audience member in the first minute of the session—teddy bear and goose-down quilt optional—and call it a day. The learning retention would be about the same, though you’d probably get better course evaluations at the end.

Simulations can be startlingly realistic. On a warm spring day, I found myself doing an unusual set of medical rounds in one of Bellevue’s towering brick behemoths that date from 1905. The ward had come full circle over the course of a century. After caring for generations of New York City’s sickest, it had been demoted to offices and storage units when the new hospital building was erected. But now it was back as a ward, with fully functioning medical rooms and bustling staff in scrubs and white coats. The patients were still genuine salt-of-the-earth New Yorkers but with perhaps a bit more avid Stanislavski training.

Peering through a one-way mirror, I watched a patient fidget in his cotton gown, which dangled open at the back. A hacking cough erupted periodically. His partner paced the room anxiously, occasionally dropping into the bedside chair and leaning in toward the bed. The two men clasped and unclasped hands, attempting to steady each other. The patient’s pneumonia had not improved, despite antibiotics. The chest X-ray showed a rising tide of fluid accumulating around the lung. If that fluid were merely a reaction to the pneumonia, it would likely resolve on its own. But if that fluid were infected—an empyema—the patient would need a large-bore chest tube inserted by a surgeon to drain it. And maybe the pneumonia was masking a lung cancer, and the fluid was harboring malignant cells.

In order to distinguish between these possibilities, the doctors needed to perform a bedside thoracentesis to sample the fluid. They’d pass a medium-size needle through the back muscles just far enough to access the fluid but (hopefully!) not so far as to puncture the lung. But first, they’d need to obtain informed consent. This being an academic medical center, the lowly medical student was dispatched to do the task.

Wearing a short white coat over her scrubs and white-knuckling a clipboard to her chest, the medical student explained the situation and the reason for the thoracentesis. The patient and his partner visibly blanched as she enumerated the possible risks of collapsed lung, internal bleeding, and spreading of the infection. The student herself blanched under the weight of the awful outcomes she was describing. She seemed as unsettled as the patient at the prospect of a needle transgressing some of the body’s most crucial organs. She tried not to unduly terrify the patient, but that didn’t seem possible. The patient and his partner wavered between tentative reassurance and wild-eyed panic, peppering the student with questions she couldn’t always answer about a procedure she’d never actually performed.

In the next three rooms, three other medical students were bumbling simultaneously through the same agonies of informed consent with three other pneumonia patients and their anxious partners.

In four rooms farther down the hall, medical students struggled to figure out what to do for a post-op patient who’d ceased to produce urine, while a scrub-clad nurse waited impatiently for an answer. Was this a harbinger of full-on renal failure? Was the patient about to go south, fast? On the other side of the hall were four rooms of medical students grappling with a case of sky-high blood pressure in a patient who also complained of a headache. Was the headache a distraction or was it a sign of an impending intracranial bleed? The patients were fake, but the pressure on the students was real.

These medical students were about to graduate and become interns with certifiable MDs after their names. They were participating in the aptly titled “First Night on Call” exercise, a simulation program developed by a team of educators led by my NYU colleagues Adina Kalet and Sondra Zabar. The students have ten minutes to handle these tense clinical situations—created by remarkably convincing actors—and then they have to present the cases to (real) chief residents or attending physicians to be grilled on the clinical details. After that, the students engage in what many feel is the most valuable part—a group discussion analyzing their experiences in the simulation. A faculty member facilitates the discussion, but it’s the students who dig through the issues—medical, emotional, logistical, hierarchical—that are unearthed. The students consistently rank the simulation as one of the most effective learning experiences in medical school.

But are such simulations effective in reducing medical error? This is an unwieldy question to study because it is exceedingly labor-intensive to gather enough actors (and rooms and time and supervising faculty) to generate a sample size robust enough to detect a change in error rates—outcomes that are both uncommon and hard to detect. Nevertheless, there are some encouraging data.2 Simulation training for procedures such as placing central lines, intubating patients, and doing colonoscopies showed benefits for patients, such as fewer central line infections and higher rates of successful intubation or colonoscopy. Procedures are obviously much easier to study than doing adequate informed consent or figuring out why urine production has ceased, but simulation holds promise as a way to improve patient safety without patients having to suffer the learning curve.

When I observed the sessions, it was remarkable how real they seemed. The actors did not let up on the students, not for one second, asking difficult questions, sputtering with cough, welling up with emotion. Even though they knew it was a simulation, the students told me that once they were in the room, it felt entirely like a real episode on call.

The only notable difference from a real night on call at Bellevue was that while the students were off with their attendings, the actors used their break time to compare notes in a back room about their various auditions and theater productions. The day that I observed, two actors—one wearing a patient’s gown and the other wearing nurses’ scrubs—figured out that they’d both been in A Chorus Line, though at different times. Without missing a beat, they launched into a perfectly executed Broadway number. The patient’s gown wasn’t fully tied at the back, so it billowed out like a spinnaker on the crisp pirouettes and step-ball-change moves. When the two dancers snapped to a meticulously coordinated end, which elicited applause from the onlookers, the gown wafted languidly down, obediently returning to its standard-issue sag.

Come to think of it, something like this probably has happened at some point on the wards of Bellevue.

As I’ve discussed earlier, technology has the ability to cause many errors. But of course it has the potential to prevent errors, which is why most of this technology was developed in the first place. From Itiel Dror’s perspective, the key is to design technology with a knowledge of our cognitive limitations. The goal is to tinker with the system—rather than with the humans—to make things safer. The technology doesn’t have to be overly complex to minimize errors, but it does need to be brain friendly. This can often be accomplished with basic nuts and bolts. In the operating room, for example, anesthesiologists have access to both oxygen and nitrous oxide. In the past, patients have died when the wrong gas was administered. The gas tanks were color-coded to prevent this, but every year there were still a few cases in which the hoses were mixed up. Finally someone thought to redesign the cheap little connectors and just make them two different sizes for the two different gases. Thereafter, it was physically impossible to connect a hose to the wrong gas.
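Software can borrow the same trick. As a rough analogy (this is my sketch, not Dror’s, and the class names are invented for illustration), an interface can be shaped so that the wrong connection is rejected outright, the way a mismatched connector simply won’t screw on:

```python
# A software analogue of differently sized gas connectors: each outlet
# accepts only its own hose type, so a mix-up is rejected by the interface
# itself rather than by anyone's vigilance. (Class names are invented for
# illustration.)

class OxygenHose:
    pass

class NitrousOxideHose:
    pass

class OxygenOutlet:
    def connect(self, hose: OxygenHose) -> str:
        if not isinstance(hose, OxygenHose):
            raise TypeError("This outlet accepts only an oxygen hose.")
        return "oxygen flowing"

class NitrousOxideOutlet:
    def connect(self, hose: NitrousOxideHose) -> str:
        if not isinstance(hose, NitrousOxideHose):
            raise TypeError("This outlet accepts only a nitrous oxide hose.")
        return "nitrous oxide flowing"

outlet = OxygenOutlet()
print(outlet.connect(OxygenHose()))      # works: oxygen flowing
# outlet.connect(NitrousOxideHose())     # refused: raises TypeError
```

A static type checker (or the compiler, in a stricter language) would flag the mismatched connection before the program ever ran, which is the software version of a hose that simply cannot reach the wrong tank.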

Another fix along these cognitive lines is to standardize the way equipment is set up. For example, the crash cart—used for resuscitating patients in cardiac arrest—should be arranged in one way only, so that the correct medications can be found quickly, with less chance of mix-up. Better yet, the contents of the cart should be arranged in a brain-friendly way. In one study, researchers let pharmacists and nurses organize the setup of the crash cart in a way that made sense given how they used the medications in practice. When the new arrangement was tested against the standard setup, staff members were able to retrieve medications more quickly and more accurately.3

Other errors could be minimized by eliminating similar-sounding names of medications that our brains have trouble distinguishing. One doesn’t have to plumb the etymological depths to imagine the possibilities for error with medication names like Ditropan and Diprivan. You’d hate to accidentally treat someone’s overactive bladder by knocking them unconscious with an intravenous anesthetic. Nor would you want to mix up Lunesta and Neulasta and give that poor insomniac a syringe full of bone-marrow activator.

There are more than a septillion words you can construct with the 26 letters of the English alphabet. Thus, there’s no logical reason for Celexa, Celebrex, and Cerebyx to coexist in our pharmacologic universe, given that they treat depression, pain, and seizures, respectively. Ditto for Lamictal and Lamisil, unless you want to treat your seizures with anti-fungal cream.

And while we’re at it, we should tackle the dangerous sound-alikes that have the additional bonus of being utterly unpronounceable. What Madison Avenue dream team, I’d like to know, came up with Farxiga and Fetzima? They somehow made it possible to accidentally treat someone’s diabetes with an antidepressant and give the doctor tendinitis of the tongue in the process. Sound-alike and difficult-to-pronounce medication names are perfect examples of the multitude of brain-unfriendly minutiae that clutter up modern medicine. Added up, they squander copious amounts of mental energy, energy that should be focused on patient care.
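Catching these collisions doesn’t require anything fancy. Here is a minimal sketch, assuming nothing more than a generic string-similarity score from the standard library and a cutoff of my own choosing, that flags pairs like these as accidents waiting to happen:

```python
# Flag look-alike drug name pairs with a generic string-similarity score.
# The 0.6 cutoff is an illustrative assumption, not a regulatory standard.
from difflib import SequenceMatcher
from itertools import combinations

names = ["Celexa", "Celebrex", "Cerebyx", "Lamictal", "Lamisil",
         "Ditropan", "Diprivan", "Lunesta", "Neulasta"]

def similarity(a: str, b: str) -> float:
    """Return a 0-to-1 similarity score between two names (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for a, b in combinations(names, 2):
    score = similarity(a, b)
    if score >= 0.6:  # pairs a tired brain could plausibly confuse
        print(f"{a} / {b}: {score:.2f}")
```

A real look-alike/sound-alike screen would weigh phonetics, dosing, and prescribing context as well, but even this crude check makes the point: the similarity is measurable, and it could be caught before a new name is ever approved.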

In order to minimize error and improve safety, we have to take into account the human element and then design systems and teaching methods that work with the realities of our gray matter. How our brains work doesn’t usually make the top-ten list when healthcare concerns are prioritized. But it ought to. Otherwise we’ll just keep nailing jelly to the tree.