CHAPTER NINE

IS THE FUTURE OF SUICIDE PREVENTION DIGITAL?

Back before artificial neural networks, deep learning, and large language models sucked up all the oxygen in discussions about artificial intelligence, there was a simpler, faster, and less complicated machine learning technique called logistic regression. Logistic regression is a workhorse of data mining, a way of learning a lot from a lot of data. It is often used to provide yes-or-no (binary) answers to simple questions: for example, whether a particular job candidate is likely to work out, or whether a high school student should be admitted to a particular college. Train an algorithm on grade point averages, test scores, and extracurricular involvement for thousands of past students, and logistic regression finds the patterns in the data that separate one outcome from the other. The more data there is, the better it is labeled, and the less bias it contains, the more useful the algorithm’s answers become.

Now imagine that scientists trained an algorithm with the medical records of thousands of patients who had killed themselves. Later, when it evaluated a new patient’s records, the algorithm would indicate whether this patient was or was not like the others: whether he or she might make a suicide attempt. Ultimately, with a machine learning upgrade, the algorithm would automatically run this diagnostic, and others flagging ailments like heart disease, cancer, and prediabetes, whenever a new patient’s records entered the system. These are the core technologies behind the innovations we’ll explore now, innovations that promise to dramatically improve suicide prediction and reduce suicide deaths: the digitization of suicide prevention.
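To make the mechanics concrete, here is a minimal sketch of the kind of binary classifier described above, written in Python with scikit-learn. The feature names and the synthetic data are invented for illustration; none of this is the actual admissions or medical-records system discussed in this chapter.

```python
# A minimal, hypothetical sketch of a logistic regression classifier.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Invented features standing in for items pulled from a record
# (e.g., a yes/no history item, a count of encounters, a screening score).
X = np.column_stack([
    rng.integers(0, 2, n),     # prior_flag
    rng.poisson(3, n),         # visit_count
    rng.normal(10, 4, n),      # screen_score
])

# Synthetic labels: the outcome depends weakly on the features plus noise,
# so the model has a real (but imperfect) pattern to learn.
logit = 0.9 * X[:, 0] + 0.15 * X[:, 1] + 0.1 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# For a new record, the model returns a probability between 0 and 1;
# a threshold turns it into the yes-or-no answer described in the text.
new_record = np.array([[1, 5, 14.0]])
prob = model.predict_proba(new_record)[0, 1]
print(f"estimated probability of the outcome: {prob:.2f}")
print("flagged" if prob > 0.5 else "not flagged")
```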


The Safety Plan Intervention and CAMS both address the gap between a crisis and therapy. But CAMS isn’t available everywhere, and both interventions are put in place only after a suicide attempt or severe ideation occurs. Rutgers University’s Evan Kleiman, PhD, wants to move the focus of intervention earlier, to the moments before a suicide attempt. As Dr. Kleiman frames it, a suicide is like a movie that contains a time bomb. Stopping it requires knowing when the bomb will go off and taking fast, decisive action. Defusing the bomb after it has exploded is not very effective.

“So the big problem is that we don’t know enough about what happens right before someone kills themselves. And the bigger problem from that is for people who die by suicide, there is a buildup, right? Sometimes months or years. But the decision to act on the desire to die by suicide for many people can be a fast one. And so we need to know what happens in those hours and minutes so we can intervene. And that’s probably the central challenge to suicide, just this time scale.”

The interval from impulse to suicide can be so short that there’s no time for actual human intervention, a real-time meeting with a therapist or a trip to the hospital. Take South Dakota farmer Chris Dykshorn, for example. For months he displayed increasingly severe risk factors. He spoke about wanting to die and about feeling like a burden. He was briefly hospitalized. Then, in the perilous window after discharge, he died by suicide. Obviously, there was no way to have a therapist on hand at that critical moment, because no one could have precisely predicted it. But what if Chris Dykshorn had possessed the skills to recognize a crisis and get himself through it with behavioral health exercises, straight from an app on his smartphone? And what if that app had a record of his agitated episodes and could trigger an intervention by his therapist before the suicide occurred? That would be a game changer for reducing suicides.

Kleiman, working with his long-term collaborator Kate Bentley, PhD, of Massachusetts General Hospital, is developing a smartphone application designed to deliver therapeutic help anytime and, eventually, to know in advance that a suicidal crisis is coming. It’s being deployed to real patients as part of a suicide reduction trial at Rutgers University Behavioral Healthcare. Kleiman and Bentley’s application is known as an ecological momentary intervention (EMI) for smartphones. Many researchers are developing EMIs, so I’ll refer to this specific smartphone app as Kleiman’s EMI.


When I first learned about Kleiman’s EMI, I wondered whether the Safety Plan Intervention and other stabilization plans weren’t already designed for exactly this: filling the dangerous gap between inpatient and outpatient care with distractions, activities, and contact information for friends and therapists. Stabilization plans have a long track record of putting a dent in suicide rates; in the six months following hospitalization, patients trained on the safety plan suffer half as many suicides as those not trained. And Kleiman’s EMI app in fact contains the safety plan, complete with the data the patient entered in the hospital, as a component of its functionality. So Kleiman doesn’t want to throw out the safety plan. But as a researcher, the fast-talking, mop-haired professor, who could be mistaken for a graduate student, thinks stabilization plans alone are not enough. He has a bone to pick with them, a few bones in fact, which is why he believes his technology will be the better choice in the long run.

Bone One: The Setting. What patients learn in calm they may not remember in crisis.

Kleiman told me, “Patients leave the hospital better, in theory, than when they came in. They learn skills there. Treatment that we call skills-based. They learn how to deal with negative emotions, negative social situations, and they learn to distract themselves. People in the inpatient unit or in an outpatient program are learning skills when they are calm. They’re in a supportive environment. They’re stable.

“But here’s the problem. We throw them out in the real world and then they have to remember the stuff they learned when they were calm. They have to remember what to do, but now they’re in a period of incredible distress. And this plays into the short time period, right?

“Because you can’t really send people out in the real world and expect them to use these therapies they’ve only practiced when they’re calm.”

I could see Kleiman’s point with complex therapies such as cognitive behavioral therapy, which trains you to change negative thinking patterns through repetition. You might learn CBT techniques during an extended hospital stay, though remembering them would be challenging during a suicidal crisis. But what about simple distraction techniques offered by stabilization plans? Watching an episode of The Simpsons? Playing with your dog? Not so challenging.

Sadly, many people who use the safety plan or other stabilization plans die by suicide anyway. A more robust, treatment-based app might prevent more deaths than a stabilization plan alone. On average, patients engage with Kleiman’s EMI for ninety minutes to three hours, a substantial amount of time and much longer than they spend with any stabilization plan.

Bone Two: Continuity. Kleiman said, “If you meet with a therapist, after waiting days or weeks to get an appointment, you spend your first outpatient session telling the therapist about your background. Learning the background of your suicidal thoughts or suicide attempt will probably take your entire first appointment.”

But you have already laid out your background to the therapist at the hospital. Now you’re going through it all again? Wouldn’t it be better if you could pick up where you left off with your inpatient provider?

This seems indisputable. A critical part of customizing Kleiman’s EMI for the patient is connecting it to all the patient’s data, which will simplify the handoff to an outpatient provider. Again, this is an aspiration for Kleiman’s EMI, a feature that its co-creator, Dr. Kate Bentley, says will be online in about a year.

And Kleiman’s Bone Three: Overkill. Not everyone having an emotional crisis is suicidal. Sometimes they’re simply having an anxious afternoon and probably don’t need to start working a plan aimed at preventing suicide. Wouldn’t it be best if their plan could handle a range of psychological disturbances? This goes back to treating patients upstream, before they are suicidal. The app would be distributed to every patient in the medical system and be ready to address any level of mental health disorder.

Setting. Continuity. And no overkill.

Kleiman’s technological solution to treatment has some capabilities that are not yet realized, but realizing them is a perfect marriage of his lifelong nerdish obsession with computers and his professional focus on combating suicide. High on its plus side: almost everyone has a smartphone, and 81 percent of adults have one with them at any given time. While they are hospital inpatients, individuals download the app to their phones and receive training on the exercises it offers. Thereafter, several times a day, the app contacts them. The patient responds to questions and performs activities intended to quickly evaluate their mental state. Then the app guides them through calming and thought-restructuring techniques, should those be necessary.

Here’s Kleiman’s EMI in action, in an interface largely designed by his collaborator, Kate Bentley. When I activated the app, I was greeted by a cheerful blue screen. It presented me with three options: Anchor in the Present, Think Flexibly, and Change Emotional Behavior. Each addresses a different emotional need. I first chose Anchor in the Present. In the parlance of behavioral therapy, a lot of anxiety is caused by ruminating on events from the past and on future events that haven’t happened yet. “Anchoring” encourages patients to focus on the present moment rather than the past or the future. It’s a good idea for anyone.

The app offered some anchoring techniques—doing breath work and pressing my feet against the ground. Then it prompted me to identify my thoughts, feelings, and behaviors. I gave a one-word description of each, and it surprised me with an overarching question: “Are your thoughts, physical feelings, and behaviors in line with what is happening here and now in the present moment?” I found if I answered yes to the prompts, the session ended. On the other hand, if I indicated that my thoughts and feelings were focused on the past or future, I was given more anchoring exercises to do, including grounding myself with a physical activity, doing more breathing exercises, and focusing on spending time with someone I enjoyed being around.

Kleiman’s EMI’s other main options—Think Flexibly and Change Emotional Behavior—repeated this pattern. Think Flexibly is useful when you become fixated on unhelpful thoughts. When I indicated I had no negative thoughts, the session ended. But when I indicated a problem, a series of exercises followed.

Predictably, the Change Emotional Behavior option targeted feelings of distress, and there was no easy way to end the session. Instead, the app offered exercises meant to help me soothe myself through activity. This segment concluded by asking about the short- and long-term effects of those activities: how I thought they would make me feel immediately after finishing the exercise, and how I would feel later on.
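For readers who like to see the moving parts, here is a rough sketch, in Python, of the kind of branching check-in flow described above. The option names mirror what I saw on screen, but the prompts, logic, and structure are my own invention, not the EMI’s actual code.

```python
# A hypothetical sketch of a branching check-in: choose a module, answer a
# question, and either end the session or receive further exercises.

def anchor_in_the_present() -> None:
    print("Try a slow breath. Press your feet against the ground.")
    answer = input(
        "Are your thoughts, feelings, and behaviors in line with "
        "what is happening here and now? (yes/no) "
    ).strip().lower()
    if answer == "yes":
        print("Session complete.")      # anchored in the present: end here
    else:
        # Focused on the past or future: offer further anchoring exercises.
        print("Let's keep going: try a grounding activity, more breathing,")
        print("or plan time with someone you enjoy being around.")

def run_check_in() -> None:
    options = {
        "1": ("Anchor in the Present", anchor_in_the_present),
        "2": ("Think Flexibly", lambda: print("Exercises for unhelpful thoughts go here.")),
        "3": ("Change Emotional Behavior", lambda: print("Soothing activities go here.")),
    }
    for key, (label, _) in options.items():
        print(f"{key}. {label}")
    choice = input("Choose an option: ").strip()
    _, action = options.get(choice, ("unknown", lambda: print("No such option.")))
    action()

if __name__ == "__main__":
    run_check_in()
```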

I left my sessions with strong impressions. First, Kleiman’s EMI worked as designed, as an enhanced stabilization program that has crossed over into behavioral therapy. And overall, working the app alone would be a valuable distraction from any negative thoughts I might have. The therapeutic exercises were engaging and scientific, derived from a form of cognitive behavioral therapy called the unified protocol, which was designed especially for those with anxiety, depression, and related disorders. This means it wasn’t more than I needed. Someone having an anxious afternoon could benefit as well as someone having suicidal thoughts. And the instructions and exercises were simple and straightforward—probably not too taxing for someone in crisis.

I also felt that if my answers and activities went into my personal record, the Kleiman EMI’s usefulness would multiply. It would prompt me to engage in activities that had in the past improved my condition. And building a record of my many daily responses would help its algorithm identify periods when I was at increased risk for an emotionally turbulent episode.

Kleiman’s EMI is meant not to be used on its own but in conjunction with therapy. In its current deployment, however, it’s trying to check one of the boxes identified in our discussion of the safety plan—to build resilience and self-sufficiency. That’s because in times of crisis, help from others, from friends to experts, is not always available. However, if a suicidal urge is persistent and strong, Kleiman’s EMI will alert medical personnel who will intervene.


Already the app has yielded important insights into the nature of suicidal thinking. Until now, suicidal ideation has been thought of as a homogeneous phenomenon; individuals who were “actively” suicidal were assumed to have thoughts of about the same intensity and quality, for about the same duration. These assumptions came about because in the past, individuals’ suicidal thinking had been assessed irregularly, over periods of weeks, months, or years. But getting information about suicidal thoughts that’s updated multiple times a day, and recorded by the person suffering them, has changed everything.

In one study, Kleiman and his associates gathered two samples of eighty-four men and women of a variety of ages and races who had attempted suicide in the past year. Using smartphones, each reported on any suicidal thoughts they had four times a day for twenty-eight days. Analysis revealed something brand-new: patients suffered from five distinct types of suicidal thinking. The differences among them were subtle but significant, varying in duration (how long the thoughts lasted) and variability (how much they changed in strength). What’s more, the thoughts rapidly moved from one type to another, sometimes within an hour. Participants whose suicidal thoughts were more severe and less varying were more likely to have attempted suicide recently. The study showed that suicidal thinking can escalate and then subside within a four- to eight-hour period. What does that mean for therapists, and for Kleiman’s goal of understanding precisely what happens right before a suicide attempt?
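To give a feel for how thousands of brief self-reports can be sorted into a handful of types, here is a toy sketch in Python. It boils each person’s ratings down to two simple summary features (average severity and variability) and then groups people with an off-the-shelf clustering method. The data are synthetic and the method is an illustrative stand-in, not the study’s actual analysis.

```python
# A toy sketch of grouping repeated self-reports into "types" of suicidal
# thinking. Data are synthetic; k-means on two summary features is an
# illustrative stand-in for the study's statistical approach.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_people = 84

# For each person, summarize 28 days of ratings (4 per day) into two features:
# mean severity and variability (how much the ratings move around).
ratings = rng.integers(0, 11, size=(n_people, 28 * 4))   # 0-10 scale, invented
features = np.column_stack([ratings.mean(axis=1), ratings.std(axis=1)])

# Ask for five groups, echoing the five profiles described in the text.
labels = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(features)

for k in range(5):
    group = features[labels == k]
    print(f"type {k}: n={len(group)}, "
          f"mean severity={group[:, 0].mean():.1f}, "
          f"mean variability={group[:, 1].mean():.1f}")
```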

Kleiman said, “The first thing it tells us is that suicidal thinking can escalate very quickly and deescalate just as quickly. It tells us it’s important to frequently monitor high-risk patients. Someone could say right now, ‘I am not suicidal,’ and then things rapidly change. But what causes that change? That’s our challenge, to know what happens in the interim, what happens to put these people at high risk?”

Further study will determine how these types of suicidal thinking relate to future suicide attempts. Will severity and variability reveal that an attempt is imminent? Will therapists achieve the clinicians’ Holy Grail: the ability to predict an attempt hours or days before it occurs?

Kleiman moves closer to answering these questions with the help of another technology currently expanding its role in suicide prevention—wearables. In health care, wearable technology includes a variety of gadgets that monitor a patient’s health and point out anomalies. Rising demand for early diagnosis and preventive care is fueling growth in the wearables sector. A wearable vital signs monitor, a smartwatch glucose monitor, and wearable pain management devices are a few examples. Kleiman’s wearable of choice is a smartwatch. He’s used a bunch—Fitbits, Garmins, Ouras, and more. As suicide detection devices, they have issues.

Kleiman said, “Wearable devices are expensive, they are imprecise, and they’re sometimes annoying to wear. I say this knowing that my Apple Watch, which I in theory should wear, is currently in front of me because it was uncomfortable. And so if people are not wearing the devices, they’re not really going to do much to predict any kind of state.”

Kleiman has had luck with a device called the Empatica Embrace, which was developed to detect epileptic seizures and notify caregivers, so it can’t be blamed for not being a perfect suicide tracker. The Embrace does a fair job at obtaining real-time measurements of interest—movement, sweat, and skin temperature. When someone is in a state of agitation, their heart rate goes up, they perspire, and they move around more than usual. Excessive nighttime movement also characterizes poor sleep, which is correlated with suicide and other mental health issues.

First, the bad news. Wearables are not magic wands that help predict suicide in the short term or the next day. Kleiman told me, “Wearables alone have basically no predictive ability for suicidal thinking later on.” That’s because suicide has no signature measurable by a wearable, no telltale combination of heart rate, perspiration, and movement that predicts a suicide attempt.

Now the good news. “Skin conductance response” is a fancy term for a burst of perspiration measured electrically by a wearable. People perspire when they are surprised. When people are having a physiological response to distress, the fight-or-flight response, they have a skin conductance response that looks like surprise. As Kleiman explained, combining skin conductance responses with the EMI’s data about the strength and variation of suicidal thoughts improves the app’s accuracy. The wearable verifies and augments what Kleiman’s EMI tells you about some suicidal thoughts. That’s a big deal. Real progress.
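As a rough illustration of what a wearable actually measures here, the Python sketch below generates a synthetic skin conductance trace and picks out the sudden rises that would count as responses. The sampling rate, thresholds, and signal shape are all invented; this is not the Embrace’s or the research team’s processing pipeline.

```python
# A rough sketch of spotting skin conductance responses (sudden rises in a
# sweat-related signal). The trace and thresholds are invented.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs = 4                                   # samples per second (assumed)
t = np.arange(0, 10 * 60, 1 / fs)        # ten minutes of data

# Slow baseline drift plus noise, with a few sharp "responses" added in.
signal = 2.0 + 0.001 * t + rng.normal(0, 0.02, t.size)
for onset in (120, 300, 480):            # seconds at which responses occur
    idx = int(onset * fs)
    signal[idx:idx + 8 * fs] += 0.5 * np.exp(-np.arange(8 * fs) / (2 * fs))

# A skin conductance response shows up as a peak rising well above baseline.
peaks, _ = find_peaks(signal, prominence=0.3)
print(f"detected {len(peaks)} responses at t = {t[peaks].round(1)} seconds")
```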

But ironically, if the Kleiman EMI’s powers are all realized and it’s able to anticipate a suicidal urge, its other capabilities almost make the point moot. That’s because with behavioral therapy the app gives patients the ability to work through and minimize urges.

Kleiman said, “We don’t really need to know if someone’s going to attempt suicide. We’re giving them the skills so if they feel like they’re about to attempt, they can hopefully avert that attempt. And we don’t have to do the hard work of figuring out when someone might do it. Recently I’ve become more aware of that. It’s separating these two problems—anticipating a suicidal urge and treating one—because they are totally different.”

Perhaps it’s not as important to predict suicides as it is to treat suicidal thinking and help the vulnerable build resilience. Building resilience is a big theme in suicide prevention. Or as Kleiman says, “It’s the ‘teach a man to fish’ parable that’s going to cut down on suicides.”


In the long term, the most promising development in suicide prediction may be the marriage of machine learning and health records. As we’ve noted, when you train a learning algorithm on large amounts of data, it develops powers of prediction. Train a neural net (simply put, many layers of simple learning units wired together) on the records of thousands of suicide victims and it should predict whether the person represented by a new record will attempt suicide. Incidentally, every time you buy something online, you trigger similar technology by adding data to your internet purchasing profile. This is why you may find yourself hounded everywhere you go online by promotions for things you’ve examined or purchased in the past. This technique is called affinity analysis, and it’s used by Amazon, Netflix, Meta, and innumerable online retailers and social networks.
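Affinity analysis, at its simplest, is just counting which items show up together more often than chance would predict. Here is a bare-bones Python illustration with a handful of made-up shopping baskets; real recommendation systems do this at vastly larger scale and with far more sophistication.

```python
# A bare-bones illustration of affinity analysis: count how often two items
# appear in the same basket and compare that to chance ("lift").
# The baskets are invented for illustration.
from itertools import combinations
from collections import Counter

baskets = [
    {"running shoes", "socks", "water bottle"},
    {"running shoes", "socks"},
    {"water bottle", "yoga mat"},
    {"running shoes", "water bottle"},
    {"socks", "yoga mat"},
]

n = len(baskets)
item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(pair for basket in baskets
                      for pair in combinations(sorted(basket), 2))

for (a, b), count in pair_counts.most_common(3):
    support = count / n
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
    print(f"{a} + {b}: together in {count} of {n} baskets, lift={lift:.2f}")
```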

Machine learning works best when algorithms are trained on structured data; ideally, all the medical records are formatted alike. In the United States the leader in electronic health records is a company called Epic Systems. It hosts a uniform electronic health record system for more than 250 health care organizations worldwide, and 45 percent of the US population already have their medical records in an Epic system. In a smaller way, Kleiman’s EMI has created its own database of many thousands of data points entered by patients several times a day. What’s more, its data is highly pertinent for suicide prediction, detailing patients’ emotional states, thoughts about suicide, and changes in suicidal thinking. In the future, Kleiman’s EMI may work with Epic to combine their data and increase their prognostic power.

Kleiman’s app should soon be able to help anticipate suicides. But what about Epic working on its own? Can it anticipate suicides?

The answer is a firm maybe. In 2018, researchers from Kaiser Permanente and other health care organizations trained an algorithm on data from almost 3 million patients who made a total of 20 million clinic visits. The anonymized Epic data included routine information from primary care examinations as well as depression assessment questionnaires, prescriptions for psychiatric medications, and indications of substance abuse.

The algorithm correctly identified nearly half the patients who attempted or died by suicide within ninety days of a general care or mental health visit, according to a retrospective examination of patient data. By comparison, the suicide risk assessments used by some hospitals catch only 25 to 30 percent of subsequent suicide attempts or deaths. Gregory E. Simon, MD, MPH, a psychiatrist and mental health researcher who coauthored the examination, concluded, “Risk predictions can supplement clinical judgment and direct clinicians’ attention to where it’s most needed. Predictions don’t replace a clinical assessment, but they can help providers intervene with the right patients at the right time.”

The study must be taken with a grain of salt. Note that half the cases of suicide attempts or deaths were identified after the fact. This suggests that half the time, using Epic’s data, scientists should be able to anticipate which patients might try to kill themselves, and intervene. But clinical therapists have reported that they are able to anticipate which of their patients might attempt suicide no more than half the time. That’s no better than flipping a coin, as Florida State University’s Dr. Joseph Franklin pointed out.

Achieving nearly 50 percent using machine learning does not appear to constitute progress in forecasting. It could, however, provide support for an intervention once a patient’s greater-than-average odds of suicide are detected.
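For readers who want the metric behind “identified nearly half” spelled out, the tiny Python example below computes it from hypothetical numbers. The figures are made up purely to make the arithmetic concrete; they are not the study’s counts.

```python
# A small worked example of the metric behind "identified nearly half":
# sensitivity = flagged true cases / all true cases. Numbers are hypothetical.
true_cases = 1000            # patients who later attempted or died by suicide
flagged_in_advance = 480     # of those, the ones the model had flagged

sensitivity = flagged_in_advance / true_cases
print(f"sensitivity: {sensitivity:.0%}")   # 48%, i.e., "nearly half"
```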


I’m sorry, Daddy just can’t be fixed. No matter how hard I try to be normal and levelheaded it always comes back and even worse. I wish I’d died in Afghanistan.

These desperate words came from Army National Guard veteran Anthony Morris. He developed post-traumatic stress disorder (PTSD) after serving two tours of duty in Afghanistan. This is part of a note he wrote to his four young children shortly before he killed himself near his home in Louisiana. Although veterans of the armed forces make up only 10 percent of the population, they account for nearly 25 percent of all suicides in the United States. According to 2021 research, 30,177 veterans and active duty service people who served in the military after 9/11 killed themselves. In contrast, during the same twenty-year period, 7,057 service members—or a quarter of that total—died in battle. This confounding statistic bears repeating: since 9/11, four times more service members killed themselves than died in combat.

While the US Department of Veterans Affairs claims about twenty veterans kill themselves each day, a 2022 study from America’s Warrior Partnership (AWP) claims that figure is a shocking undercount. According to the study, the VA failed to include overlooked deaths, such as unexplained drug overdoses and the deaths of veterans whose families and home states did not identify them as veterans when they died and inform the Department of Defense. AWP claims as many as forty-four veterans die from suicide on average each day, more than twice the official estimate. The AWP report is contested by the nonprofit Veterans Healthcare Policy Institute, which cites flaws in the study and claims the VA’s numbers are correct.

For veterans, suicide risk factors are legion. A sampling includes problems reentering civilian society, unemployment, prior suicide attempts, mood and anxiety disorders, alcohol and drug abuse, legal issues, financial troubles, firearms possession, and divorce. Physically, veterans contend with PTSD, traumatic brain injury (TBI), wound recovery and rehabilitation, and other injuries that are part and parcel of many military lives. Ironically, medical improvements that have allowed more troops to survive wounds have also let them repeatedly deploy and accumulate more physical and mental trauma.

Army clinical psychologist Sandra Pahl, PhD, spoke with me about a relatively new phenomenon—complex PTSD. “A lot of times there are multiple traumas that have been going on in your deployment, and then there’s a TBI on top of that. And modern medicine is contributing to trauma because it used to be if somebody got a certain kind of wound, they’d go home or back to the States, but now they get fixed up and sent back into action. So they have an opportunity to get more traumatic injuries. And let’s say both your legs get blown up. In the old days, that individual would have died in the field, but we’ve got so good in medicine that we can save that individual. So the problem becomes, how does that person manage this when they’re coming home? When they have not only psychological trauma from being blown up and having seen friends and superiors being killed as well, now they’re also dealing with losing both of their legs.

“There’s a terrific cultural shock when you stop being deployed and you come home. You lose your identity, you stop being a soldier. On top of that, you’re a paraplegic.”


The rise of suicides among US veterans and active duty service members paralleled the country’s participation in concurrent wars in Afghanistan and Iraq. But the connection of suicide to war turns out to be only half the story. The data about suicide during recent wars reveals a shocking fact: the majority of soldiers who kill themselves have never been deployed to a war zone, and the vast majority have never actually been in battle. Craig Bryan, PsyD, ABPP, formerly the executive director of the National Center for Veterans Studies at the University of Utah, is exasperated by the stereotype of the soldier who returns from war with PTSD and kills himself. “That’s the storyline we’ve made up in our society because it’s easy to understand and it makes sense,” he says. “The problem is that the data doesn’t support the notion that combat leads directly to suicide risk.” For instance, 53 percent of service members who killed themselves in 2011 had never been deployed. Additionally, 85 percent of military personnel who died by suicide that year had no prior experience in direct combat, meaning they may have been deployed but did not take part in any actual conflict.

Dr. Bryan conducted a three-year investigation of the main factors that lead to military suicides. According to what he found, the suicide rate and stress levels in the military started to rise in 2004, even among those who hadn’t been sent to Iraq or Afghanistan. As the “forever wars” on two battlefronts persisted, much was demanded of few. And there’s another theory about noncombatant suicides that is underreported and underdiscussed. Because of the demand for personnel after 9/11, parts of the armed forces temporarily lowered their mental health requirements (interestingly, the same thing happened during World War II). More recruits with mental health disorders entered the military, which would partially account for a general rise in suicides. Furthermore, some inductees were deemed psychologically unfit to go overseas and instead served stateside in administrative and support roles. Financial, legal, and relationship problems, all stress factors for suicide, also kept many from being deployed. In this context it isn’t so surprising that suicide rates among undeployed personnel are higher than among the deployed.

Dr. Pahl also connects veteran and active duty suicides to recruiting. She says, “I do think sometimes the people who choose to join the military, and we’ve known this for decades, are different than the ones in general who are going to college. If you’re going to college, the idea is you probably come from a middle-class family or you have a family that is very determined that you need to go to college. This is important. Education matters.

“But when we’re looking in the military, traditionally, some individuals join in order to get out of a bad situation. That means bad family circumstances, a bad life, and they’re really trying to change their life. We would say there’s probably some biological preloading for trauma. Some have parents who weren’t quite stable and they have behavioral health issues or they were drinking or they were abusive or they abandoned their children. So once again, genetically, you load the gun, and the environment pulls the trigger. They’re joining the military. They might struggle more because they might encounter something that reminds them of that abusive upbringing. I hate using the word ‘trigger,’ but it triggers them.”

Too often the triggering event for women in the military is sexual assault. PTSD and traumatic events are predictors of suicide. It’s commonly thought that battle is the only source of military trauma. It isn’t. The trauma caused by sexual assault and harassment is more prevalent.

Nearly 25 percent of servicewomen report experiencing sexual assault in the service, and more than half report experiencing harassment. Sexual assault frequently serves as the first warning sign in a string of traumatic events that can lead to PTSD, depression, and suicide. In a 2019 study, researchers from three universities polled over 300 female service members and veterans who had been sexually assaulted. They discovered that 29 percent of them were actively thinking about suicide. From 2007 to 2017 age-adjusted suicide rates among female veterans increased by 73 percent. According to Department of Defense data, in 2019 female service members made up 31 percent of all active duty suicide attempts, despite women making up only 16.5 percent of the armed services.

Pahl attributes rising veteran suicides to a generational shift among military and civilian young adults alike, a lack of resilience that makes them more vulnerable to things not going their way. She feels they’re more brittle than prior generations. Military officials I’ve spoken with claim that military suicides track civilian rates, in that they’re both going up with no ceiling in sight. That part is painfully true. But as a rate, military suicides have pulled well ahead. For 2021, the latest year on record, civilians died by suicide at a rate of about 13.5 per 100,000 people in the United States. The active duty military suicide rate was nearly twice that, at 24.3 per 100,000. The veteran suicide rate was about 31.6 per 100,000 in 2019, the most recent year with available data.

And then there are guns again. As we’ve discussed, just owning a gun triples the risk of suicide. A report by Thomas Suitt, PhD, of Brown University determined that a firearm is used in about 65 percent of active duty military suicides. Among veterans it’s even higher—nearly 70 percent. The likelihood that a suicide attempt will result in death dramatically rises if a gun is used.

Officials who deal with veteran suicides hope a reduction is just around the corner, brought about by machine learning.


The US Department of Veterans Affairs (VA) has reached the same conclusion as suicide experts throughout the United States: clinical therapists are not very good at anticipating which patients will die from suicide. Traditionally, doctors have estimated a veteran’s risk for suicide by drawing on past mental health diagnoses, substance abuse, various clinical assessment tools, and gut instinct. It didn’t get them far. “The fact is we can’t rely on trained medical experts to identify people who are truly at high risk,” said Marianne Goodman, MD, a VA psychiatrist and clinical professor of medicine. “We’re no good at it.”

To complement clinical assessments, the VA created an algorithm that was trained on the records of thousands of people in the VA’s database, including those who had died by suicide and those who had not. Reach Vet (Recovery Engagement and Coordination for Health—Veterans Enhanced Treatment) is a machine learning system that, starting in 2017, has been scrutinizing sixty-one variables from each patient’s medical record, including physical ailments, prescription history, hospital visits, and marital status.

Some variables, such as substance abuse, mental health admissions, and nonfatal suicide attempts, are on every list of suicide risk factors. Others, such as arthritis and the use of statins, are less intuitive. The algorithm then produces a score for each patient and flags the top 0.1 percent as “high risk.” According to John McCarthy, PhD, the director of data and surveillance in the Suicide Prevention Program of the VA, “The risk concentration for people in the 0.1 percent on this score was about forty times. That is, they were forty times more likely to die of suicide” than the average person.
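The mechanics are easier to see in miniature. The Python sketch below scores a synthetic population, flags the top 0.1 percent, and then measures how much more common the outcome is in that flagged group than in the population overall. The scores, outcome rates, and resulting concentration are all invented; this is not the VA’s model or its variable set.

```python
# A schematic sketch of score-then-flag: rank everyone by a model's predicted
# risk, flag the top 0.1 percent, and measure how concentrated the outcome is
# in that group. Scores and outcomes are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Pretend risk scores, and a rare outcome made somewhat more likely
# at higher scores so there is a pattern to find.
scores = rng.random(n)
outcome = rng.random(n) < (0.0001 + 0.002 * scores**8)

# Flag the top 0.1 percent by score.
cutoff = np.quantile(scores, 0.999)
flagged = scores >= cutoff

base_rate = outcome.mean()
flagged_rate = outcome[flagged].mean()
print(f"flagged: {flagged.sum()} of {n}")
print(f"risk concentration: {flagged_rate / base_rate:.0f}x the average")
```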

“The idea was that by developing these scores,” Dr. McCarthy explained, “clinicians would have this additional information about the array of things that are in their electronic health record, such as prior diagnosis, prior encounters, prior suicide attempts, were they in the top 0.1 percent risk group or not? So, again, each month, we identify those individuals who are in the top predicted risk group. And then we notify coordinators at their facility. They look at the patients who are identified and the individual providers working with those patients. For patients who are new to the list, the providers not only review the care and records for those patients, but also touch base with the patients and see how they’re doing and try to address access barriers and just make sure that the patients are getting the best care that they can.”

Veterans get on and fall off the regularly updated list, but at any given time, it consists of several hundred men and women who the program identifies as at elevated risk of suicide. In 2020, Vietnam veteran Barry, who didn’t want his last name used, was notified that he was a 0.1 percenter. He was not a surprising pick—he’d been grievously wounded in the 1968 Tet offensive and had tried to take his own life twice before. He said, “I don’t like this idea of a list, to tell you the truth, a computer telling me something like this. But I thought about it. I decided, you know, okay—if it’s going to get me more support that I need, then I’m okay with it.”

A recent study of Reach Vet performed by the US Government Accountability Office determined that veterans who were identified in the program scheduled and completed more outpatient appointments, received safety plans more often, and had fewer emergency department visits. More at-risk veterans were identified by Reach Vet than by any other means employed by the VA.

Reach Vet decreased documented suicide attempts by 5 percent. However, when compared with control groups, those notified had similar numbers of suicide deaths within six months of notification. (Suicide deaths are recorded separately from suicide attempts.) In other words, Reach Vet could not be shown to have prevented any suicide deaths. John McCarthy, whose team’s work on suicide predictive modeling prompted development of Reach Vet, noted, “It’s discouraging. I mean, of course, we would love to say that by shining a spotlight on these patients, it’s resulting in zero suicides for this very, very high-risk population or even that it’s associated with fewer suicides. But we can’t.”

Furthermore, the group the Reach Vet algorithm flags as most at risk amounts to only several hundred veterans at any given time, out of the roughly 6 million patients evaluated. Even if Reach Vet prevented suicides among all identified veterans, it would have only a modest impact on the national suicide rate for veterans.

I asked SAVE’s Dan Reidenberg for his take on the findings regarding the Reach Vet clinical program. He said, “What does the study say? It says veterans did more safety planning and more outpatient appointments, and that there were fewer inpatient admissions and fewer emergency department visits. That’s all great news. But five years into the program they can’t say it prevented a suicide and that is troubling to me. Should we toss it out and move on? Probably not. Maybe it will save some lives, and some lives is better than none.”

Maybe part of the issue is how difficult it is to work with medical records in machine learning applications to begin with. Machine learning provides useful insights for some medical conditions, like cancer and heart disease, but it hasn’t worked yet to prevent suicides. One problem is that the algorithms trained on medical records do not reveal anything about why some patients are designated as high risk while others are not. “Interpretability” is an issue with machine learning systems, and it’s the main reason why their use is questioned in safety-sensitive processes, such as self-driving cars. The inputs are known—in this case, medical records—and the outputs are known—the roster of patients determined to be high risk. But computer scientists cannot know what’s happening between the inputs and the outputs. The algorithm’s decisions cannot be interpreted or explained, and so systems employing machine learning are often referred to as “black box” systems.

The problem with a black box system in suicide prevention is that while an algorithm may provide a general warning, it can yield no information about what triggered its conclusion and so no insight can be drawn about the proper treatment of the patient. What might work for one person on the 0.1 percent list may not work for another. But again, Reach Vet is intended to complement clinical assessment, not replace it. About Reach Vet and Department of Defense programs in general, Reidenberg concluded with a discouraging note. He had recently been on a call with someone on the DoD’s suicide task force. They had discussed that in 2022 the military had just recorded the highest number of suicides ever. “This,” he wrote to me, “despite the Reach Vet program being one among literally hundreds of programs being implemented throughout the branches of service.”

John McCarthy takes the opposite view and cautions against unrealistic expectations and the disappointments that follow. “Even small steps are good. Suicide is a really difficult thing to try to affect. It’s encouraging that we’re reducing suicide attempts. And now we’re working to improve and update the algorithm. We’re also considering other subgroups that might benefit from outreach, not in the highest tier but in a very high tier.

“People look at machine learning approaches as if they’re a kind of magic bullet for suicide prevention and this is something we’ve been very clear on at the outset of the Reach Vet program. It’s not the algorithm that would prevent suicide, it’s the interaction with the patients that would prevent it.”


If you are thinking about suicide or if you or someone you know is in emotional crisis, please call or text 988 at any time to reach the 988 Suicide and Crisis Lifeline for confidential, free crisis support.