Personalized health tracking
Let’s start by exploring what this means for the tracking, diagnostic part of personalized health care. With the widespread adoption of wearable technologies, monitoring our physical and mental health around the clock is easier than ever before. In chapter 3, I told you that we can detect depression from digital traces such as GPS records or social media posts. But that’s just the tip of the iceberg. Rosalind Picard at the MIT Media Lab, the world’s leading scientist in affective computing, for example, combines different devices and data sources to capture people’s holistic experience at a second-by-second level. Your smartphone sends short mobile surveys, captures your activity and location, and monitors your phone and app usage behavior. On top of that, a smartwatch equipped with sensors helps track your sleep, motion, and physiological measures like blood oxygen, heart rate, skin conductance, and temperature. A mini army of nurses looking over your shoulder 24-7 to see if you might be stressed, anxious, or depressed.
Likewise, companies such as Google, Samsung, and Apple have been pouring billions of dollars into their health units. Not surprising, perhaps, when you consider that the digital health market is already worth over US$280 billion. And wearable devices are just the beginning, an attempt at measuring what’s going on inside us by strapping technology to our outsides. But that could change soon.
Imagine a small object, less than a millimeter in size, with three short legs in the front and three long ones in the back, traveling through your bloodstream at a speed of 100 micrometers per second, like a minuscule spider making its way through the tunnels of your body, powered by the oxygen, sugar, and various nutrients in your blood. Its mission? To locate cancer cells and destroy them. I’m not making this up. The medical microbot I just described was developed and tested in 2013 by a team of scientists led by Sukho Park, a professor at Chonnam National University in Korea.10 Over ten years ago! The coolest part? It is made of naturally occurring bacteria from our bodies that are genetically modified and dressed up with a biocompatible skeleton (they don’t typically come with six legs).
Technologies like Park’s spider-bot could usher in a true revolution in preventive and personalized health care by monitoring health directly at the source. You no longer need to wait for your symptoms to become so pronounced that it’s already too late to prevent a mental health crisis. Simply monitoring your vitamin D, estradiol, testosterone, or B12 levels could tell us if and when you might be at risk for depression.
Likewise, tracking your cortisol might alert us to unhealthy levels of consistent stress that—when ignored—could lead to serious illnesses in the long run. Instead of trying to help you get out of a mental health crisis, we could help you avoid getting into one in the first place. Think of it as an early warning system that tells you—and perhaps your doctors and other guardians you identified—about abnormalities, deviations from what is normal for you (not just the average person).
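For the technically curious, the core of such an early warning system can be sketched in a few lines of code. The z-score approach, the threshold, and the cortisol numbers below are my own illustrative assumptions, not a clinically validated design:

```python
import statistics

def personal_baseline_alert(history, new_reading, z_threshold=2.5):
    """Flag a reading that deviates strongly from THIS person's baseline,
    rather than from a population average. `history` holds past readings."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False  # no variation on record; nothing to compare against
    z_score = (new_reading - mean) / stdev
    return abs(z_score) > z_threshold

# Hypothetical morning cortisol readings (arbitrary units) for one person.
history = [12.1, 11.8, 12.5, 11.9, 12.3, 12.0, 12.4]
print(personal_baseline_alert(history, 18.7))  # True: unusual for this person
print(personal_baseline_alert(history, 12.6))  # False: within their normal range
```

The crucial design choice is that the baseline comes from your own history. A reading that is perfectly ordinary for the average person can still trigger an alert if it is abnormal for you.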
Personalized health treatment
This brings me to the treatment aspect of your personal mental health companion. Treating mental health problems typically happens at two levels: a physiological one (drugs) and a psychological one (therapy). Once we have dynamic monitoring systems in place, the first one becomes easy. The second one is much harder. While we are still far from having developed the perfect mental health-care companion (one that looks as cute and is as competent as Baymax in the Disney movie Big Hero 6), recent years have seen remarkable strides in the application of AI in mental health counseling.
At the most basic level, algorithms can help us figure out which treatments are most effective for a particular individual. It’s the medical equivalent of Netflix’s movie recommendation engine. But instead of recommending movies based on what you have liked in the past and what other people with similar preferences have enjoyed, I can use whatever information I have on you—your psychological dispositions, your socioeconomic environment, previous treatment success, and more—to map you against other patients and match you with the treatment that is most likely to succeed.
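To make the analogy concrete, here is a toy sketch of such patient-treatment matching in the style of user-based collaborative filtering. Everything in it (the feature vectors, the patients, the outcome scores) is invented for illustration; a real system would rely on far richer data and clinically validated models:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend_treatment(new_patient, profiles, outcomes, k=2):
    """Score each treatment by the similarity-weighted success it had
    for the k past patients most similar to the new one."""
    neighbors = sorted(profiles,
                       key=lambda pid: cosine(new_patient, profiles[pid]),
                       reverse=True)[:k]
    scores, weights = {}, {}
    for pid in neighbors:
        sim = cosine(new_patient, profiles[pid])
        for treatment, success in outcomes[pid].items():
            scores[treatment] = scores.get(treatment, 0.0) + sim * success
            weights[treatment] = weights.get(treatment, 0.0) + sim
    return max(scores, key=lambda t: scores[t] / weights[t])

# Hypothetical patients described by [anxiety, sleep problems, social support].
profiles = {"p1": [0.9, 0.2, 0.4], "p2": [0.8, 0.3, 0.5], "p3": [0.1, 0.9, 0.2]}
# Observed success (0 to 1) of the treatments each patient tried.
outcomes = {
    "p1": {"CBT": 0.8, "medication": 0.4},
    "p2": {"CBT": 0.9},
    "p3": {"medication": 0.7, "CBT": 0.2},
}
print(recommend_treatment([0.85, 0.25, 0.45], profiles, outcomes))  # -> CBT
```

The logic mirrors Netflix’s: find people who look like you, see what worked for them, and weight their outcomes by how similar those people are to you.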
That’s exactly what Rob Lewis and his team at MIT did.11 They partnered with Guardians, a free mobile application designed to help people improve their mental health through a series of gamified challenges. Exploring the app was probably the most enjoyable research activity I did for this book.
Imagine yourself as a cute, animated turtle. You wander around Paradise Island with a flowing Waikiki skirt, a seashell necklace dangling from your neck, and a flower wreath crowning your head (that alone makes you feel better, doesn’t it?). As you explore the island, you are encouraged to take on challenges that will give you rewards. A cool coconut shake here, a sweet slice of watermelon there.
The challenges themselves are fun too: socialize, express yourself artistically, exhaust yourself physically, or simply do something you enjoy doing. After completing a task, you report back to turtle headquarters (the app’s database) how much your mood has improved. Think of it as the movie ratings you send to Netflix or the product reviews you share with Amazon.
Lewis and his team studied data from 973 users who had engaged in over twenty thousand challenges. Their results confirm the power of personalization. Compared to just using the average ratings for each challenge, a personalized recommendation system à la Netflix or Amazon could far more effectively predict whether a given user would enjoy and benefit from a given task.
But it’s not just the selection of treatments that algorithms can assist with. It’s also the treatments themselves. Take the popular mental health chatbot Woebot, for example. Powered by generative AI, the application replaces the nodding therapist with a bright smartphone screen and swaps out the couch for a place of your choosing.
Do you have a hard time adjusting to the new job? Are you struggling to get yourself out of bed in the morning? Or do you need advice on how to break up with your partner? Woebot is there for you. Twenty-four hours, seven days a week. That’s what I call convenient office hours!
And I’m not talking only about Woebot here. There’s Youper, Wysa, Limbic, or Replika. (Is it just me, or do they all sound like characters from a Disney movie?) Together, these platforms have attracted millions of users around the world. According to internal research conducted by Woebot Health in 2021, 22 percent of adults in America have used a mental health chatbot. For 44 percent of those, using an app was their first experience with cognitive behavioral therapy; they had never seen an actual therapist before.
The Covid-19 pandemic certainly played a role in this development, adding approximately 53 million instances of depression and 76 million cases of anxiety disorders to an already strained health-care system. When you can’t leave the house, and the next available appointment for your local therapist is in 2030, you might as well give Woebot and his friends a shot. Even when all you suffer from is loneliness.
But Covid-19 isn’t the only reason mental health chatbots have become so popular. The truth is that there are simply not enough affordable mental health professionals to take care of everyone in need of treatment. According to the World Health Organization, there are just thirteen mental health professionals for every hundred thousand potential patients worldwide. And, unsurprisingly, those thirteen professionals are highly unevenly distributed between rich and poor countries. If you go to the extremes, we’re talking about a factor of over forty.
But you don’t have to cross national borders to observe inequities in access to mental health treatment. In the United States, there are huge gaps in access to mental health services when it comes to race, ethnicity, income levels, and geography. A Black man in Florida is much less likely to find a licensed therapist than a white woman in New York.
Take Chukurah Ali, who was interviewed by Yuki Noguchi at NPR in early 2023.12 After a car accident that left her severely injured, Ali lost everything: her bakery, the ability to provide for her family, and the belief in her self-worth. She became depressed. “I could barely talk, I could barely move,” she recalls. But with no car to drive to a therapist and no money to pay for what often amounts to hundreds of dollars per session, Ali was stranded. The assistance she so badly needed to get back on her feet seemed out of reach. Until her doctor suggested she try Wysa. She did.
At first, Ali was skeptical. It felt strange talking to a robot. But she quickly warmed up to the idea. “I think the most I talked to that bot was like seven times a day,” Ali admits. She felt comforted knowing there was someone to turn to in those difficult moments, even when they came at 3 a.m. There was someone to answer her questions. Someone to help her avoid spiraling into negative thought patterns.
Whenever Ali felt blue, Wysa would suggest she listen to calming music or do some breath work. A small but effective nudge to keep her out of bed and on track for all the other doctor’s appointments she needed to recover from her injuries. Without Wysa, Ali likely would have never seen a therapist.
Stories like Ali’s are powerful examples of how AI-driven applications could democratize access to mental health care. But I believe they can do much more than that. I believe they could make our engagement with mental health more personal and effective than ever before. Take the 24-7 service they offer, for example. That’s not just a great feature for a convenient scheduling experience. It’s also a feature that generates insights that are far more granular than any therapist could ever hope for.
If you’re seeing a therapist right now—one of those who is flesh and blood—chances are you won’t meet with them more than once a week. Acute crises aside, that might seem like a reasonable interval at which to dive into the depths of your despair. No need to mull over your problems every day.
To make the weekly sessions valuable, however, you will have to remember everything that happened in between. The call with your sister that went sour after just a few minutes. The meeting with your boss that left you feeling underappreciated. The fight with your significant other that made you question your ability to love unconditionally.
The problem is that our memories are fickle. Every time we access them, we change them a little. By the time you get to the therapist, the dialogue with your significant other will no longer be the same, even if you try your very best to offer an accurate, unbiased account of what happened.
None of this is news to therapists. It’s why they might ask you to keep a diary. To write down your feelings and thoughts as you experience them (or at night before you go to sleep). But even if you meticulously captured the big and small moments of life in your notebook—which most people won’t—your conversations with the therapist about these feelings and thoughts will still be retrospective. You might try to remember what it felt like. To put yourself back into the situation and try to relive it. But anyone who knows how hard it is to imagine what being sick might feel like when you’re currently healthy also knows how difficult it is to replicate a real feeling on demand.
Chatbots don’t require scheduling ahead of time. You can talk to them whenever you want. In the moments you find yourself right in the middle of an emotional vortex. Or the moments when, after weeks of mulling over a problem, you finally have a breakthrough and see the world more clearly. In short, the moments when having someone to share these experiences with and think through them together might be the most valuable—when the feelings are still raw. Most importantly, nobody prevents you from taking these conversations to your flesh-and-blood therapist to dissect them in greater detail and get a human perspective on the matter.
What I’ve just described might sound wonderful. But let’s be clear: it’s the potential of AI in personalized health care, not its current reality. Yes, chatbots like Woebot, Wysa, or Replika have helped people like Ali to manage their mental health problems. And there’s at least tentative evidence from more rigorous scientific studies supporting these anecdotal success stories.
But we are still far away from chatbots replacing human therapists, let alone offering services that might be considered superior. If I had to choose between a chatbot and a human therapist, I would still pick the carbon version ten times out of ten.
Take Wysa, for example. The application uses natural language processing to interpret your questions and comments. What are the challenges you face? What kind of advice are you looking for? But instead of generating a response that is tailored to you and your specific question (as any human therapist would), Wysa selects a response from a large repository of predefined messages that have been carefully crafted by trained cognitive behavioral psychologists.
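In other words, Wysa retrieves rather than generates. Here is a toy sketch of what retrieval-based selection looks like, using crude word overlap as a stand-in for Wysa’s actual (and proprietary) language processing; the repository and trigger phrases are invented:

```python
def word_overlap(a, b):
    """Crude similarity score: the fraction of words two texts share."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical repository of pre-written, clinician-crafted responses,
# each indexed by the kind of message it was written for.
REPOSITORY = {
    "I feel anxious about work": "It sounds like work is weighing on you. "
                                 "Shall we try a short breathing exercise?",
    "I can't sleep at night": "Poor sleep makes everything harder. "
                              "Would you like some tips for winding down?",
    "I feel lonely": "I'm sorry you're feeling lonely. Is there one small "
                     "social step you could take today?",
}

def pick_response(user_message):
    """Retrieval, not generation: return the pre-written response whose
    trigger phrase best matches the user's message."""
    best = max(REPOSITORY, key=lambda trigger: word_overlap(user_message, trigger))
    return REPOSITORY[best]

print(pick_response("Lately I feel really anxious about my new job"))
```

However clever the matching, the bot can only ever say something a human wrote in advance. That keeps the responses safe, but it is also why they eventually repeat themselves.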
Don’t get me wrong. These responses can be extremely helpful. But they are far from the level of personalization I fantasized about earlier. And because they are always chosen from the same pool, the responses can become rather repetitive when you use the app for a long time. It’s like hearing your mom give you the same advice over and over again.
On the flip side, applications like Woebot use generative AI to come up with responses on the fly. Like a human conversation partner, Woebot isn’t constrained by a predetermined set of responses but can tailor its advice to your unique situation. Say you told Woebot about your debilitating fear that Cambridge University made a terrible mistake in admitting you to the graduate program. You might have been able to fool the faculty during the interviews, but soon they will realize what a fraud you are. Everyone around you is clearly so much smarter, and it’s only a matter of time until you’re asked to leave.
Unlike other applications, Woebot won’t merely respond with a generic suggestion for how to overcome impostor syndrome and build confidence. Instead, it will follow up with specific questions and relate its recommendations back to your unique experience at Cambridge.
But the increased flexibility and personalization that Woebot offers come at a cost. Even though generative language models have made remarkable strides over the last few years, they still make mistakes. Just look at Woebot’s response to Estelle Smith, a professor of computer science at the Colorado School of Mines, who probed it with a statement about suicidal intentions in 2022 (figure 6-1).
Not the response you’d hope for. And not an exception either. In 2018, Woebot made the headlines with a shocking response to another researcher’s question about sexual abuse (figure 6-2).
The two examples are a good reminder that we are still miles away from the utopian future I’ve painted. I can’t imagine chatbots fully replacing human therapists anytime soon, no matter how sophisticated they become. As Alison Darcy, founder of Woebot, put it: “A tennis ball machine will never replace a human opponent, and a virtual therapist will not replace a human connection.”13 If you can afford to see a flesh-and-blood therapist, I bet you will continue to do so.
But that’s beside the point. Chatbots like Woebot weren’t built to replace existing mental health offerings. They were built to complement them. To support therapists by providing additional insights. To fill in at 2 a.m. when your therapist isn’t available, but you urgently need to talk to someone. And to offer an alternative to anyone who can’t afford the luxury of paying $500 a week for a one-hour therapy session, or who is too worried about the stigma that is still associated with mental health problems.
And with generative AI growing more powerful by the month, we are getting closer to this vision every day. A team of psychologists and psychiatrists working with Johannes Eichstaedt (the same scientist who showed that depression can be predicted from tweets), for example, developed a scalable, low-cost tool that could soon make effective treatment for post-traumatic stress disorder (PTSD) available to a much larger part of the population. Building on well-established treatment protocols for PTSD, they created a custom version of ChatGPT that can train therapists by mimicking both patients and supervisors.
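While I don’t know the team’s exact setup, the basic mechanics of this kind of role-play training are easy to sketch with an off-the-shelf chat model. The prompt, model name, and scenario below are my own illustrative assumptions, not Eichstaedt’s actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Illustrative system prompt: the model plays a standardized patient so a
# trainee therapist can practice. Not the team's actual, validated protocol.
PATIENT_PROMPT = (
    "You are role-playing a patient with post-traumatic stress disorder "
    "in a training session for therapists. Stay in character, respond "
    "realistically, and never give clinical advice."
)

def patient_reply(transcript):
    """Return the simulated patient's next turn, given the session so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[{"role": "system", "content": PATIENT_PROMPT}] + transcript,
    )
    return response.choices[0].message.content

transcript = [{"role": "user", "content": "Hello, I'm glad you came in today. "
                                          "Can you tell me what's been troubling you?"}]
print(patient_reply(transcript))
```

Swap the system prompt for one that plays a clinical supervisor, and the same loop can critique the trainee’s responses instead.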
This brings me to the final example of how psychological targeting could act as a force for good—one that we have the technology to implement already but that is currently nothing but a lofty dream.