6 Finding the Good

When I picked up the call from one of my old college friends, I could tell right away that something was wrong. She asked me about life in New York. I complained about the trash in the streets, the brutal winter days, and my students being late to class. She responded with an occasional “hmm,” but her mind was clearly somewhere else. When I asked if everything was OK, she burst into tears.

A childhood friend of hers had taken his own life. Nobody had seen it coming. She wished she had known he was struggling. Maybe she could have saved him.

I felt terrible. My original goal had been to use psychological targeting to help people. I put that plan on hold because it seemed too hard to implement. As time went by and I got creeped out by the dark side of psychological targeting (we’ll get to that in chapter 7), I abandoned it altogether. I had become so obsessed with the obvious costs of using psychological targeting that I lost sight of the potential costs of not using it.

But the conversation with my friend served as a wake-up call. It reminded me that psychological targeting also had the potential to do good. Just like my neighbors’ actions could be extremely helpful and reassuring, psychological targeting could help us become better versions of ourselves.

A lot of my thinking since then has been guided by what if? What if we could use psychological targeting, for example, to help people:

This list is by no means comprehensive. I’m sure that after reading through the examples in this chapter you can easily come up with your own set of questions.

Maybe you’re interested in education and how psychological targeting could help kids and young adults learn more effectively while having fun. Or maybe you feel like overhauling the way we find our professional calling by using psychological targeting to help people find jobs they love.

I picked three topics to focus on: the potential of psychological targeting to support us in accomplishing goals many of us struggle with (e.g., saving more), to democratize access to personalized mental health care, and to expand our experience of the world.

I didn’t choose these three applications because they are the most important ones or because they seamlessly fit into a simple and coherent narrative. Rather, I chose them to highlight how psychological targeting could be used for the greater good across a wide range of potential applications. Some of the ideas I will share are nothing but lofty dreams—fantasies of mine that I hope will materialize in the future. Others are far more concrete examples of how psychological targeting is currently benefiting individuals.

Let’s start with the latter.

The Savings Struggle

I spent much of my early career trying to get people to spend more. Could we get people to find the perfect vacation by tapping into their traveler personality? Yes. Could we persuade women to buy makeup by tailoring our marketing messages to their extroversion level? Yes, again.

I’m not saying that this is necessarily a bad use of psychological targeting. My own research has shown that people are happier if they manage to align their spending with their psychological needs, and with access to products and services from all around the world, we need a way to separate the wheat from the chaff.1

But is helping people spend their money really what our society needs the most? What if, instead, we could use psychological targeting to do the exact opposite? Help people save.

When I first got interested in studying financial health, I was shocked by the numbers I found. In 2020, 53 percent of Americans reported living from paycheck to paycheck, 62 percent did not have enough savings to cover three months of living expenses, and more than 10 percent could not even cover a single week without getting paid.2

To call this state of affairs problematic would be an understatement. It’s disastrous. The 30 million Americans with essentially no savings are perpetually on the cusp of ruin. They might be able to hang on this week, but what if next week their car breaks down? They don’t have the money to take it to the shop and get it fixed. That means they can’t drive to work and instead spend hours on public transportation that is notoriously unreliable (if available at all). Consequently, they might lose the job that covered their rent, insurance, and weekly expenses. One seemingly small incident and they are done. Game over.

The picture might not look quite as dire for the 200 million who have less than three months of savings, but it isn’t rosy either. Not only are most of us woefully underprepared for retirement (which, thanks to medical advances, will last longer and longer), but there is growing scientific evidence showing how financial distress holds us back in the present. It adversely impacts our physical and mental health. And it hijacks the cognitive bandwidth we need to make good decisions and be creative. You simply can’t live up to your full potential if you are worried about your finances.3

Most of us know that we should save more, and many of us are eager to do so. According to Forbes, 30 percent of people started 2023 with the New Year’s resolution of becoming better at managing their finances.4 And yet, the majority of these 30 percent will fail to translate their noble intentions into dollars saved in their bank account. Like many of the other New Year’s resolutions we struggle with, such as our desire to eat healthily and exercise more, saving doesn’t come easy.

Even for the most financially literate among us, saving is a constant battle—a battle with our brain that much prefers to savor the current moment instead of worrying about the future. Saving is hard because it means giving up a real, tangible reward today (i.e., our ability to buy stuff with the money we’ve earned) for a potential benefit in the future that is often far less tangible (i.e., a chance to deal with unexpected emergencies)—a concession our brain is notoriously unwilling to make.

Trading your new PlayStation or pearl earrings for a few hundred extra dollars in your bank account will seem like a no-brainer once you hit an emergency and all hell breaks loose. But we all know how it goes before we get to this point. We want to be responsible superheroes but end up defaulting to our regular, self-indulgent selves.

I was convinced that psychological targeting could help. An obvious starting point for my investigation was to identify at-risk profiles. I was curious whether certain types of people had a harder time managing their finances than others. The most likely suspect among all the personality traits I could think of was conscientiousness.

I remembered my highly dependable and reliable sister who always paid her bills on time and magically managed to accumulate savings even while being a poor student. And then I thought of myself, the somewhat more careless and disorganized member of the family who was quick at spending money and typically had a lot of month left at the end of the money.

Not a terrible guess. It turns out that conscientiousness is indeed related to some aspects of savings. But not as strongly and consistently as I had expected. However, there was another personality trait that reliably predicted financial health. Any guesses? We still have openness, extroversion, agreeableness, and neuroticism in the race.

To be fair, if you had asked me a few years ago, I wouldn’t have guessed the right answer. I was puzzled at first when study after study kept showing that agreeable people tended to end up worse financially.5 Agreeable people had fewer savings in their bank account, accumulated more debt, and were more likely to default on loan payments. Similarly, US counties with higher average levels of agreeableness experienced higher levels of bankruptcy.

Instead of the careless, disorganized slouch I had pictured in my mind, it was the friendly and caring nice guy that was struggling the most. The type of person we love to welcome to our communities and social networks because they put others ahead of themselves.

I had heard the saying “nice guys finish last,” but it didn’t necessarily make much sense to me in the context of financial decision-making. So, I started to dig deeper. What was driving the relationship between agreeableness and poor financial health in my data? Could it be that agreeable people didn’t negotiate as aggressively as their disagreeable counterparts? The answer is no (to be precise: agreeable people are indeed less aggressive, but that doesn’t explain why they do worse financially).

Instead, the relationship was driven by something far more obvious: Agreeable people simply didn’t care as much about money as their disagreeable counterparts. In our research, we asked people how much they agreed with statements such as “There are very few things money can’t buy” or “You can never have enough money.” Agreeable people consistently indicated that money just wasn’t that important to them.

When I first saw this explanation, I was both disappointed and, frankly, a little disheartened. I really wanted to help the nice guys do better financially. Teaching them how to negotiate more effectively (not necessarily more aggressively) would have been a great intervention. But how could I help them if they simply didn’t care as much? It didn’t feel right to try and make money a higher priority for them. On the contrary, I thought it was somewhat endearing that they didn’t care as much. Clearly, they were good people who chose positive social relationships with others over money. Admirable.

But the more I thought about it, the more I realized how flawed my thinking was. Just because someone cares about money doesn’t mean they don’t care about other people. The same way that caring about other people doesn’t mean you shouldn’t care about money. It’s a false dichotomy.

Think of it this way: if you don’t manage your money properly, you are putting your loved ones at risk, too. And your love for others doesn’t simply disappear just because you also care about money and what it can do for you and others.

I was intrigued. If it was true that agreeable people simply didn’t care as much about money, then arguing that they should save for the sake of accumulating money wasn’t going to convince them. But what if we could reframe the purpose of saving to highlight the potential impact on their loved ones? Emphasize how saving allows them to protect the people they love and care about the most.

One of the challenges that makes saving such a difficult task is that we don’t experience the benefits day to day. Unlike a new PlayStation or pearl earrings, we can’t physically experience our savings or share them with others. They are but a number in a bank account, hidden away from sight and far less enticing than whatever we might have spent those savings on.

SaverLife’s Race to 100

Until we finally invent time travel (please!), we cannot teleport people into the future to see and experience how their savings will make a difference one day. What we can do instead is to create this image in their mind. And do our best to make it as real and appealing as possible.

That’s exactly what we did. In September 2020, my colleagues Robert Farrokhnia, Joe Gladstone, and I teamed up with SaverLife, a US-based nonprofit that helps individuals and families save money, improve their financial health and literacy, and build wealth.6 A true superhero company in a vast sea of predatory fintech products.

Our timing couldn’t have been better. With the Covid-19 pandemic stretching into the third quarter of the year, many SaverLife users were struggling to make ends meet. Our mission was clear: encourage saving among those who needed it the most. Those with no or very low savings (less than $100). Those who couldn’t afford their car breaking down or an unexpected medical bill without major repercussions.

Our goal: get them to save $100 as part of SaverLife’s Race to 100, a four-week challenge offering anyone who hit the $100 target a chance to win $2,000 in cash.

With SaverLife’s and its users’ permission, we collected personality data from volunteers and worked tirelessly to come up with messaging that would appeal to different personality traits. Take a moment to think about how you could market saving in a way that gets an open-minded, creative person to dream about the future. How might this be different from the message you would craft for a more conservative and traditional customer? Not that difficult, is it? Here are two examples from the actual campaign.

Low Openness

Saving money is a tried-and-true technique for preserving the lifestyle you want. But that doesn’t mean it’s always easy. There are plenty of temptations to spend. You need real guidance to save.

SaverLife has already helped over 380,000 people just like you set up a secure savings program with their own bank and contribute to it regularly. This month, an extra incentive to insure your future: the Race to $100.

Save $100 by September 30, and you’ll have a chance to win an additional $100 from us! It’s a risk-free way to start securing your future today with a solid savings buffer.

High Openness

What would it be like if you could be the creator of your own future? What if you had the means to choose from unlimited adventures and fulfill your big, bold dreams?

There’s only one way to find out. This month, get started on an exciting new future with the Race to $100.

Save $100 by September 30, and you’ll have a chance to win an additional $100 from us! SaverLife is here to help you create savings habits in a different way than you’ve ever experienced before. This could be the first step to a wonderful new life.7

We did the same for the other personality traits (both high and low ends).8 Caring personality? Save today to build a better future for your loved ones! Competitive personality? Every penny saved puts you one step ahead of the game! Extroverted? Turn your cravings for human contact into savings that will help you make the most of your post-Covid adventures with friends. Introverted? With so many cozy nights in, let’s start dreaming about the home you always wanted.
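To make the mechanics concrete, here is a minimal sketch of how such trait-based message selection might be wired up, assuming we already hold Big Five scores for each user. The scoring rule, threshold, and abbreviated copy are my own illustrative stand-ins, not SaverLife’s actual implementation.

```python
# A minimal sketch of trait-based message selection, assuming Big Five
# scores on a 0-1 scale for each user. The scoring rule, threshold, and
# abbreviated copy are illustrative, not SaverLife's actual system.

MESSAGES = {
    ("openness", "high"): "What would it be like if you could be the creator of your own future?",
    ("openness", "low"): "Saving money is a tried-and-true technique for preserving the lifestyle you want.",
    ("agreeableness", "high"): "Save today to build a better future for your loved ones!",
    ("agreeableness", "low"): "Every penny saved puts you one step ahead of the game!",
    ("extroversion", "high"): "Turn your cravings for human contact into savings for post-Covid adventures with friends.",
    ("extroversion", "low"): "With so many cozy nights in, let's start dreaming about the home you always wanted.",
}

def pick_message(trait_scores: dict[str, float]) -> str:
    """Serve the message written for the trait on which this user
    deviates most from the population midpoint (0.5)."""
    trait, score = max(trait_scores.items(), key=lambda kv: abs(kv[1] - 0.5))
    pole = "high" if score >= 0.5 else "low"
    return MESSAGES[(trait, pole)]

# An imaginative dreamer gets the high-openness pitch:
print(pick_message({"openness": 0.9, "agreeableness": 0.55, "extroversion": 0.4}))
```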

The moment I found out that the campaign had been successful was one of the happiest in my entire career. Among those who received the personality-tailored messaging, 11 percent managed to save $100. Up from 4 percent among people who didn’t receive any messaging and 7 percent among people who were targeted with SaverLife’s best-performing generic campaign. That’s a 175 percent boost compared to the no-message control and a 60 percent boost compared to SaverLife’s gold standard.
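For anyone who wants to verify the arithmetic, the boosts follow directly from the conversion rates reported above:

```python
# Working through the campaign numbers reported in the text.
tailored, control, generic = 0.11, 0.04, 0.07

lift_vs_control = (tailored - control) / control   # 1.75, i.e., a 175% boost
lift_vs_generic = (tailored - generic) / generic   # ~0.57, i.e., roughly a 60% boost

print(f"{lift_vs_control:.0%} boost over the no-message control")     # 175%
print(f"{lift_vs_generic:.0%} boost over the best generic campaign")  # 57%
```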

Of course, the numbers are far from perfect. In an ideal world, they would be much closer to 100 percent. But let’s be realistic: getting people with less than $100 in savings to at least double their savings requires a small miracle (just think of what it would take you to double your savings).

As I said before, psychological targeting isn’t a magical panacea. But that doesn’t mean we can’t make a real difference using it. Think of it this way: among every one hundred people reached by our campaign, we had managed to get an additional four to seven (depending on the benchmark) to build a critical emergency cushion. Now imagine the impact on society at large if this type of intervention got scaled to millions of people!

Before jumping from this very real and tangible application of psychological targeting to a lofty dream of mine, let me discuss a domain that falls somewhere in between. The one that motivated me to look for the good side of psychological targeting again in the first place: mental health.

Your Personal Mental Health Companion

In 2017, the Australian published an article accusing Facebook of selling advertisers insights into the mental health states of millions of teenagers, some as young as fourteen, in Australia and New Zealand. According to an internal sales document leaked to news reporters, Facebook offered marketers the opportunity to target teens at moments when they “need a confidence boost.”9

The document—drafted by two high-ranking Facebook executives—outlined how the company’s predictive algorithms could dynamically infer the emotional states of young users from their posts and photos. The example labels highlighted in the sales document paint a disturbing picture; they include emotional states such as “worthless,” “insecure,” “anxious,” “stressed,” and “defeated.” The incident was a blatant attempt at using psychological targeting to exploit the most vulnerable members of society for profit.

But what if we could use psychological targeting, instead, to prevent serious mental health problems in the first place, or—when they do occur—help people get back on their feet as quickly and easily as possible? What if we could design a health companion who knows that something is wrong way before we do, and who could provide personalized care that not only speaks to our unique genetic makeup but also to our psychological needs? In other words, a system that can both track and treat. Not just those who can afford to pay $300 or more for a weekly visit to the shrink. But all of us.

What might sound like science fiction isn’t that far-fetched. Rather, it is the goal of an increasingly popular approach to health care called personalized or precision medicine. The idea is simple: use insights into a person’s unique genetic makeup, their environment, and their psychological dispositions and lifestyle choices to optimize both the diagnosis and treatment of disease (and, ultimately, its prevention).

Think of it as the good Samaritan equivalent of the sneaky marketer trying to boost the effectiveness of their ads by learning as much about your preferences as possible. Not a far-fetched comparison. The Food and Drug Administration’s website uses the same language to describe precision medicine as marketers typically do: “Target the right treatment to the right patients at the right time.”

Personalized health tracking

Let’s start by exploring what this means for the tracking and diagnostic part of personalized health care. With the widespread adoption of wearable technologies, monitoring our physical and mental health around the clock is easier than ever before. In chapter 3, I told you that we can detect depression from digital traces such as GPS records or social media posts. But that’s just the tip of the iceberg. The world’s leading scientist in affective computing, Rosalind Picard at the MIT Media Lab, for example, combines different devices and data sources to capture people’s holistic experience at a second-by-second level. Your smartphone sends short mobile surveys, captures your activity and location, and monitors your phone and app usage behavior. On top of that, a smartwatch equipped with sensors tracks your sleep, motion, and physiological measures like blood oxygen, heart rate, skin conductance, and temperature. A mini army of nurses looking over your shoulder 24-7 to see if you might be stressed, anxious, or depressed.

Likewise, companies such as Google, Samsung, and Apple have been pouring billions of dollars into their health units. Not surprising, perhaps, when you consider that the digital health market is already worth over US$280 billion. And wearable devices are just the beginning, an attempt at measuring what’s going on inside us by strapping technology to our outsides. But that could change soon.

Imagine a small object, less than a millimeter in size, with three short legs in the front and three long ones in the back, traveling through your bloodstream at a speed of 100 micrometers per second, like a minuscule spider making its way through the tunnels of your body, powered by the oxygen, sugar, and various nutrients in your blood. Its mission? To locate cancer cells and destroy them. I’m not making this up. The medical microbot I just described was developed and tested in 2013 by scientists working with Sukho Park, a professor at Chonnam National University in Korea.10 Over ten years ago! The coolest part? It is made of naturally occurring bacteria from our bodies that are genetically modified and dressed up with a biocompatible skeleton (they don’t typically come with six legs).

Technologies like Park’s spider-bot could usher in a true revolution in preventive and personalized health care by monitoring health directly at the source. You no longer need to wait for your symptoms to become so pronounced that it’s already too late to prevent a mental health crisis. Simply monitoring your vitamin D, estradiol, testosterone, or B12 levels could tell us if and when you might be at risk for depression.

Likewise, tracking your cortisol might alert us to unhealthy levels of persistent stress that—when ignored—could lead to serious illness in the long run. Instead of trying to help you get out of a mental health crisis, we could help you avoid getting into one in the first place. Think of it as an early warning system that tells you—and perhaps your doctors and other guardians you have identified—about abnormalities, deviations from what is normal for you (not just the average person).
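Here is a minimal sketch of what such a personal early-warning check could look like in code. The choice of metric, window size, and threshold are illustrative assumptions; a real system would need clinically validated baselines.

```python
# A minimal sketch of a personal early-warning check: flag a new reading
# when it deviates from *your* rolling baseline, not a population average.
# Metric, window, and threshold are illustrative assumptions.
from statistics import mean, stdev

def is_abnormal(history: list[float], new_reading: float,
                window: int = 30, z_threshold: float = 2.5) -> bool:
    """Compare today's reading (e.g., morning cortisol) against the
    user's own recent baseline using a simple z-score."""
    baseline = history[-window:]
    if len(baseline) < 7:  # not enough personal data to judge yet
        return False
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return new_reading != mu
    return abs(new_reading - mu) / sigma > z_threshold

cortisol_history = [11.8, 12.1, 11.5, 12.4, 11.9, 12.0, 12.2]  # made-up values
print(is_abnormal(cortisol_history, 19.3))  # True: far outside this user's normal range
```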

Personalized health treatment

This brings me to the treatment aspect of your personal mental health companion. Treating mental health problems typically happens at two levels: a physiological one (drugs) and a psychological one (therapy). Once we have dynamic monitoring systems in place, the first one becomes easy. The second one is much harder. While we are still far from having developed the perfect mental health-care companion (one that looks as cute and is as competent as Baymax in the Disney movie Big Hero 6), recent years have seen remarkable strides in the application of AI in mental health counseling.

At the most basic level, algorithms can help us figure out which treatments are most effective for a particular individual. It’s the medical equivalent of Netflix’s movie recommendation engine. Instead of recommending movies to you based on the movies you have liked in the past, and the movies other people with similar preferences have enjoyed, I can use whatever information I have on you—your psychological dispositions, your socioeconomic environment, previous treatment success, and more—to map you against other patients and match you with the treatment that is most likely to succeed.
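As a toy illustration of that matching logic, here is a similarity-weighted recommender in the spirit of collaborative filtering. All patient features, outcomes, and numbers are made up for the sake of the sketch.

```python
# A minimal sketch of the "Netflix for treatments" idea: score a candidate
# treatment for a new patient by weighting other patients' observed outcomes
# by how similar those patients are to the newcomer. All values are made up.
import numpy as np

# Rows = patients, columns = standardized psychological/contextual features.
patients = np.array([
    [0.9, 0.1, 0.4],   # patient A
    [0.8, 0.2, 0.5],   # patient B (similar to A)
    [0.1, 0.9, 0.6],   # patient C (quite different)
])
# Improvement each patient reported after one candidate treatment.
outcomes = np.array([0.7, 0.8, 0.1])

def predicted_benefit(new_patient: np.ndarray) -> float:
    """Cosine-similarity-weighted average of other patients' outcomes."""
    sims = patients @ new_patient / (
        np.linalg.norm(patients, axis=1) * np.linalg.norm(new_patient)
    )
    weights = np.clip(sims, 0, None)  # ignore anti-similar patients
    return float(weights @ outcomes / weights.sum())

# A newcomer resembling A and B gets a prediction pulled toward their
# good outcomes (~0.63) rather than C's poor one.
print(predicted_benefit(np.array([0.85, 0.15, 0.45])))
```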

That’s exactly what Rob Lewis and his team at MIT did.11 They partnered with Guardians, a free mobile application designed to help people improve their mental health through a series of gamified challenges. Exploring the app was probably the most enjoyable research activity I did for this book.

Imagine yourself as a cute, animated turtle. You wander around Paradise Island with a flowing Waikiki skirt, a seashell necklace dangling from your neck, and a flower wreath crowning your head (that alone makes you feel better, doesn’t it?). As you explore the island, you are encouraged to take on challenges that will give you rewards. A cool coconut shake here, a sweet slice of watermelon there.

The challenges themselves are fun too: socialize, express yourself artistically, exhaust yourself physically, or simply do something you enjoy doing. After completing a task, you report back to turtle headquarters (the app’s database) how much your mood has improved. Think of it as the movie ratings you send to Netflix or the product reviews you share with Amazon.

Lewis and his team studied data from 973 users who had engaged in over twenty thousand challenges. Their results confirm the power of personalization. Compared to just using the average ratings for each challenge, a personalized recommendation system à la Netflix or Amazon could far more effectively predict whether a given user would enjoy and benefit from a given task.

But it’s not just the selection of treatments that algorithms can assist with. It’s also the treatments themselves. Take the popular mental health chatbot Woebot, for example. Powered by generative AI, the application replaces the nodding therapist with a bright smartphone screen and swaps out the couch for a place of your choosing.

Do you have a hard time adjusting to the new job? Are you struggling to get yourself out of bed in the morning? Or do you need advice on how to break up with your partner? Woebot is there for you. Twenty-four hours, seven days a week. That’s what I call convenient office hours!

And I’m not talking only about Woebot here. There are Youper, Wysa, Limbic, and Replika. (Is it just me, or do they all sound like characters from a Disney movie?) Together, these platforms have attracted millions of users around the world. According to internal research conducted by Woebot Health in 2021, 22 percent of adults in America have used a mental health chatbot. For 44 percent of those, the app was their first experience with cognitive behavioral therapy; they had never seen an actual therapist before.

The Covid-19 pandemic certainly played a role in this development, adding approximately 53 million cases of depression and 76 million cases of anxiety disorders to an already strained health-care system. When you can’t leave the house and the next available appointment with your local therapist is in 2030, you might as well give Woebot and his friends a shot. Even if all you suffer from is loneliness.

But Covid-19 isn’t the only reason mental health chatbots have become so popular. The truth is that there are simply not enough affordable mental health professionals to take care of everyone in need of treatment. According to the World Health Organization, there are only thirteen mental health professionals for every hundred thousand people worldwide. And unsurprisingly, those professionals are highly unevenly distributed between rich and poor countries. If you compare the extremes, we’re talking about a factor of more than forty.

But you don’t have to cross national borders to observe inequities in access to mental health treatment. In the United States, there are huge gaps in access to mental health services when it comes to race, ethnicity, income levels, and geography. A Black man in Florida is much less likely to find a licensed therapist than a white woman in New York.

Take Chukurah Ali, who was interviewed by Yuki Noguchi at NPR in early 2023.12 After a car accident that left her severely injured, Ali lost everything. Her bakery, the ability to provide for her family, and the belief in her self-worth. She became depressed. “I could barely talk, I could barely move,” she recalls. But without a car to get to a therapist or the money to pay for what often amounts to hundreds of dollars per session, Ali was stranded. The much-needed assistance in helping her get back on her feet seemed out of reach. Until her doctor suggested she try using Wysa. She did.

At first, Ali was skeptical. It felt strange talking to a robot. But she quickly warmed up to the idea. “I think the most I talked to that bot was like seven times a day,” Ali admits. She felt comforted knowing there was someone to turn to in difficult moments, even when those moments happened at 3 a.m. There was someone to answer her questions. Someone to help her avoid spiraling into negative thought patterns.

Whenever Ali felt blue, Wysa would suggest she listen to calming music or do some breath work. A small but effective nudge to get her out of bed and stay on track for all the other doctor’s appointments she needed to recover from her injuries. Without Wysa, Ali likely would have never seen a therapist.

Stories like Ali’s are powerful examples of how AI-driven applications could democratize access to mental health care. But I believe they can do much more than that. I believe they could make our engagement with mental health more personal and effective than ever before. Take the 24-7 service they offer, for example. That’s not just convenient for scheduling. It’s also a feature that generates insights far more granular than any therapist could ever hope for.

If you’re seeing a therapist right now—one of those who is flesh and blood—chances are you won’t meet with them more than once a week. Acute crises aside, that might seem like a reasonable interval to dive into the depths of your despair. No need to mull over your problems every day.

To make the weekly sessions valuable, however, you will have to remember everything that happened in between. The call with your sister that went sour after just a few minutes. The meeting with your boss that left you feeling underappreciated. The fight with your significant other that made you question your ability to love unconditionally.

The problem is that our memories are fickle. Every time we access them, we change them a little. By the time you get to the therapist, the dialogue with your significant other will no longer be the same, even if you try your very best to offer an accurate, unbiased account of what happened.

None of this is news to therapists. It’s why they might ask you to keep a diary. To write down your feelings and thoughts as you experience them (or at night before you go to sleep). But even if you meticulously captured the big and small moments of life in your notebook—which most people won’t—your conversations with the therapist about these feelings and thoughts will still be retrospective. You might try to remember what it felt like. To put yourself back into the situation and try to relive it. But anyone who knows how hard it is to imagine what being sick might feel like when you’re currently healthy also knows how difficult it is to replicate a real feeling on demand.

Chatbots don’t require scheduling ahead of time. You can talk to them whenever you want. In the moments you find yourself right in the middle of an emotional vortex. Or the moments when, after weeks of mulling over a problem, you finally have a breakthrough and see the world more clearly. In short, the moments when having someone to share these experiences with and think through them together might be the most valuable—when the feelings are still raw. Most importantly, nobody prevents you from taking these conversations to your flesh-and-blood therapist to dissect them in greater detail and get a human perspective on the matter.

What I’ve just described might sound wonderful. But let’s be clear: it’s the potential of AI in personalized health care, not its current reality. Yes, chatbots like Woebot, Wysa, or Replika have helped people like Ali to manage their mental health problems. And there’s at least tentative evidence from more rigorous scientific studies supporting these anecdotal success stories.

But we are still far away from chatbots replacing human therapists, let alone offering services that might be considered superior. If I had to choose between a chatbot and a human therapist, I would still pick the carbon version ten times out of ten.

Take Wysa, for example. The application uses natural language processing to interpret your questions and comments. What are the challenges you face? What kind of advice are you looking for? But instead of generating a response that is tailored to you and your specific question (as any human therapist would), Wysa selects a response from a large repository of predefined messages that have been carefully crafted by trained cognitive behavioral psychologists.
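In spirit (though certainly not in sophistication), that retrieval step works something like the sketch below. The keyword matching and the canned replies are toy stand-ins for Wysa’s actual NLP pipeline.

```python
# A minimal sketch of retrieval-style response selection: match the user's
# message against a fixed repository of pre-written replies. Keywords and
# replies are toy stand-ins for a real, clinician-crafted repository.

RESPONSES = {
    "sleep insomnia tired bed awake": "Struggling with sleep is exhausting. Shall we try a short wind-down routine tonight?",
    "work job stress boss overwhelmed": "Work stress can pile up quickly. What is one small task you could set down today?",
    "sad lonely alone hopeless down": "I'm sorry you're feeling this way. Would you like to try a grounding exercise together?",
}

def retrieve_reply(user_message: str) -> str:
    """Pick the canned reply whose keyword set best overlaps the message."""
    words = set(user_message.lower().split())
    best = max(RESPONSES, key=lambda keywords: len(words & set(keywords.split())))
    return RESPONSES[best]

print(retrieve_reply("My boss keeps piling on work and I feel overwhelmed"))
# -> the work-stress reply; identical inputs always map to the same canned text
```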

Don’t get me wrong. These responses can be extremely helpful. But they are far from the level of personalization I fantasized about earlier. And because they are always drawn from the same fixed set, the responses can become rather repetitive when you use the app for a long time. It’s like hearing your mom give you the same advice over and over again.

On the flip side, applications like Woebot use generative AI to come up with responses on the fly. Like a human conversation partner, Woebot isn’t constrained by a predetermined set of responses but can tailor its advice to your unique situation. Say you told Woebot about your debilitating fear that Cambridge University made a terrible mistake in admitting you to the graduate program. You might have been able to fool the faculty during the interviews, but soon they will realize what a fraud you are. Everyone around you is clearly so much smarter, and it’s only a matter of time until you’ll be asked to leave.

Unlike other applications, Woebot won’t merely respond with a generic suggestion for how to overcome impostor syndrome and build confidence. Instead, it will follow up with specific questions and relate its recommendations back to your unique experience at Cambridge.

But the increased flexibility and personalization of Woebot compared to other applications comes at a cost. Even though generative language models have made remarkable strides over the last few years, they still make mistakes. Just look at Woebot’s response to Estelle Smith, professor of computer science at Colorado School of Mines, who probed it with a statement about suicidal intentions in 2022 (figure 6-1).

FIGURE 6-1

Woebot’s response to a statement referencing suicidal thoughts

Source: Adapted from Grace Browne, “The Problem with Mental Health Bots,” Wired, October 1, 2022, https://www.wired.com/story/mental-health-chatbots/.

Not the response you’d hope for. And not an exception either. In 2018, Woebot made headlines with a shocking response to another researcher’s question about sexual abuse (figure 6-2).

The two examples are a good reminder that we are still miles away from the utopian future I’ve painted. I can’t imagine chatbots fully replacing human therapists anytime soon, no matter how sophisticated they become. As Alison Darcy, founder of Woebot, put it: “A tennis ball machine will never replace a human opponent, and a virtual therapist will not replace a human connection.”13 If you can afford to see a flesh-and-blood therapist, I bet you will continue to do so.

But that’s beside the point. Chatbots like Woebot weren’t built to replace existing mental health offerings. They were built to complement them. To support therapists by providing additional insights. To fill in at 2 a.m. when your therapist isn’t available, but you urgently need to talk to someone. And to offer an alternative to anyone who can’t afford the luxury of paying $500 a week for a one-hour therapy session, or who is too worried about the stigma that is still associated with mental health problems.

FIGURE 6-2

Woebot’s response to a statement referencing sexual abuse

Source: Adapted from Grace Browne, “The Problem with Mental Health Bots,” Wired, October 1, 2022, https://www.wired.com/story/mental-health-chatbots/.

And with generative AI becoming exponentially more powerful every month, we are getting closer to this vision every day. A team of psychologists and psychiatrists working with Johannes Eichstaedt (the same scientist who showed that depression can be predicted from tweets), for example, developed a scalable, low-cost tool that could soon make effective treatment for post-traumatic stress disorder (PTSD) available to a much larger part of the population. Building on well-established treatment protocols for PTSD, they created a custom version of ChatGPT that can train therapists by mimicking both patients and supervisors.

This brings me to the final example of how psychological targeting could act as a force for good—one that we have the technology to implement already but that is currently nothing but a lofty dream.

There’s Hope for Politics

Much of what I have told you about psychological targeting so far has been focused on its potential for personalization, its potential to craft and filter your experience of the world according to your core identity. You’re an extrovert? Let’s help you find these extroverted products we know you’ll love, connect you with extroverted service representatives who know exactly how you tick and what you need, and tailor the look and feel of your online experience to your bright and enthusiastic personality. No need to deal with the boring reality of those lame introverts anymore.

Don’t get me wrong; personalized experiences can feel incredibly rewarding. But they also make us rather unidimensional and isolate us from other people who are different. It’s this isolation and gradual breakdown of shared reality that has become a growing public concern, with buzzwords like echo chambers and filter bubbles capturing the public imagination.

Nowhere is this concern more pronounced than in the political sphere. Over the past few decades, the healthy competition of ideas between political parties in the United States has descended into pure political tribalism. We no longer quibble with the opposition about political ideals or specific policy goals. We demonize them. Our hate for the other side now exceeds our love for our own side. In 1994, less than 20 percent of members across both parties held extremely negative views of the other side. By 2017, that number had risen to about 45 percent on each side. We use the little political engagement we have left to radicalize our own ranks and shield ourselves from the enemy.14 In a world like this, psychological targeting might seem like pouring gasoline on an already raging fire—another way of solidifying our echo chambers by locking us further into our own perspectives and amplifying our existing views of the world.

But what if we could use psychological targeting to accomplish the exact opposite? What if, instead of narrowing our view on the world, we could use it to explore the world and peer into the echo chambers of others?

Our natural ability to take someone else’s perspective is severely limited by our own experiences. If I had never left Vögisheim, I would have a hard time imagining what life in the United States really looks like, something I only learned after my first visit to Costco (holy shit!). And even now that I live in New York, I have absolutely no idea what the life of a fifty-year-old farmer in Ohio might look like, or what the day-to-day experiences of a single mother in the suburbs of Chicago entail.

But what if there were a way to see what the world looks like from the vantage point of people who are completely different from us? People who don’t have the same skin color, political ideology, socioeconomic background, personality, or childhood experiences.

Walking in others’ shoes

Facebook and Google could offer such a magical machine tomorrow. With all the data they have collected over the years, these tech giants know exactly what the world of a fifty-year-old farmer in Ohio or the life of a single mother in Chicago looks like. For now, they use that data to optimize their content recommendations—keeping the three of us separated in our respective echo chambers. But there’s potential here to offer an alternative path.

For the first time in history, we could design tools that allow us to step outside of our own shoes and start exploring the world from the viewpoint of someone who is entirely different. Not just any viewpoints, but viewpoints that we define and might never otherwise get to experience.

As a starting point, you could ask Facebook to let you explore the news feeds of other users who agree to be part of a “perspectives exchange” or “echo-chamber swap.” For a few hours, you could live your online life in their shoes and see what they see.

Or if you wanted to have a little bit more control over your experience (I don’t know if I could handle the news feed of an eighteen-year-old boy at peak puberty), you could request a dial that lets you choose how far you’re willing to stray from your comfort zone. On a regular day, you might keep the dial close to the “Show me the content that best fits my preferences” side. But for the days you feel adventurous and ready to take on the world, you could push it all the way to the “Show me content that I would otherwise never see” side.

Think of it as an explorer mode, with varying degrees of adventurousness. Who knows, if Netflix had such an explorer mode, I might find a new passion for the type of dark Korean horror movie the recommendation algorithm has been hiding from me all these years. Instead of making us unidimensional and boring by narrowing our experiences, psychological targeting could make us more interesting and multifaceted by expanding them.
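Here is a minimal sketch of how such a dial could work under the hood: rank each item by a blend of its predicted personal relevance and its unfamiliarity, weighted by the dial setting. The items and scores are illustrative assumptions.

```python
# A minimal sketch of the "explorer dial": dial=0.0 keeps you in your
# comfort zone; dial=1.0 surfaces maximally unfamiliar content.
# Items and scores are made up for illustration.

def rank_feed(items: list[dict], dial: float) -> list[dict]:
    """Sort items by a blend of personal relevance and novelty."""
    def blended(item: dict) -> float:
        return (1 - dial) * item["relevance"] + dial * item["novelty"]
    return sorted(items, key=blended, reverse=True)

feed = [
    {"title": "Cat video compilation",         "relevance": 0.95, "novelty": 0.05},
    {"title": "Korean horror film essay",      "relevance": 0.20, "novelty": 0.90},
    {"title": "A day on a farm in rural Ohio", "relevance": 0.10, "novelty": 0.95},
]

print([i["title"] for i in rank_feed(feed, dial=0.1)])  # comfort-zone ordering
print([i["title"] for i in rank_feed(feed, dial=0.9)])  # adventurous ordering
```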

If you don’t like the idea of pure randomness, I have two alternatives for you: algorithmic guidance and self-guidance.

Imagine Google optimizing its search results to guide you to the content you really should know about. The important gaps in your knowledge about immigration, for example. The arguments for stricter abortion laws you haven’t seen yet (and would likely never look for yourself). Imagine a news selection algorithm that isn’t trying to reinforce what you already believe about the world but instead uses its intelligence to show you the news you most likely haven’t been exposed to but would benefit from.

If the thought of Google picking the shoes for you to walk in makes you feel uncomfortable, fair enough. I agree that this would require a level of trust in the tech giants that they haven’t necessarily earned yet. But what if, instead, you could have full control over whose shoes you would like to walk a mile in today? I would love to see what an extroverted, emotionally stable version of myself would see on Instagram—I can only hope it would still involve cat videos.

For all its risks of trapping us in our own echo chambers, psychological targeting could be a real game-changer in how we learn about the world.

To be clear, we might not use these different exploration modes all that often. Life in our echo chambers is too comfortable. It feels good to have the world affirm your beliefs and values. In most instances, Google’s optimized recommendations are precisely what I want. I want its search engine to know what I’m looking for. Don’t dare make me go to page two!

But then there are those rare occasions when I am itching to break out of my comfort zone. To see what the world looks like from a different perspective. The Roe v. Wade Supreme Court case, for example, was one of these occasions. I would have loved to see what Google’s recommended articles looked like for an older male Republican in Texas. Chances are I wouldn’t have liked the content very much (just as the guy probably wouldn’t have appreciated mine). But I want to at least have the option to see it.

And perhaps psychological targeting could do even more than that. It could help me better understand what I’m seeing. I don’t necessarily mean it would help me agree with the opinion of the person whose shoes I’m wearing. It’s hard to imagine changing my mind about the importance of a woman’s right over her reproductive choices. But I wouldn’t want to simply dismiss the person’s reality either. Or worse, become so appalled by what I see that I dig my heels in even deeper. That’s what got us into the current political mess in the first place.

What we need is a bridge between the different realities we live in. A way to tailor our new shoes to make them fit just a little bit better so that we can continue exploring the uncharted path ahead of us. Think back to what I told you about Matthew Feinberg and Rob Willer’s work on moral reframing in chapter 5. We are far more likely to relate to the arguments of the other side if we get a chance to think them through using our own moral lens. That’s what psychological targeting has to offer.

Imagine having a personalized conversation partner who takes the time to sit down with you and helps you digest what you see. Who engages in a real conversation. With all the back and forth, arguments and counterarguments, and questions and answers a true political debate deserves (but without the animosity we might experience when talking to our next-door neighbor).

I’m not talking about another human, but an AI. With language models such as ChatGPT having become increasingly adept at natural conversation, this isn’t a far-fetched dream but a reality already. As the psychologist Thomas Costello and his colleagues at MIT have shown, personalized conversations with a GPT-based chatbot reduced people’s beliefs in conspiracy theories by about 20 percent. That’s a truly remarkable accomplishment considering how hard it is to change people’s minds about deeply held beliefs that are core to their identity.15

And just like we are more likely to confide in Google than our spouse when it comes to questions about sex and money, we might feel more comfortable asking ChatGPT politically charged questions. “I am pro-choice but was shocked to find out that you can detect a fetus’s heartbeat as early as six weeks into the pregnancy. Could there be good reasons for a liberal to oppose abortion?” “My family supports the Second Amendment, but I worry about my children at school. What arguments could I use to change their minds?” Having these conversations isn’t going to solve the political tribalism problem overnight. But maybe, just maybe, it could lead us back to a more constructive dialogue between people who simply see the world in different ways.
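As a sketch of what such a conversation partner might look like in practice, here is a minimal example using the OpenAI Python SDK. The system prompt, model name, and framing are my own assumptions, not the setup Costello and his colleagues actually used.

```python
# A minimal sketch of a perspective-bridging chat turn, assuming the OpenAI
# Python SDK (v1+) with an API key in the OPENAI_API_KEY environment variable.
# The system prompt and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a patient political conversation partner. Present the strongest "
    "version of the view the user asks about, frame arguments in the user's "
    "own moral language, and respond to pushback without hostility."
)

def bridge_turn(user_message: str) -> str:
    """One turn of a back-and-forth political conversation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(bridge_turn(
    "I am pro-choice. Could there be good reasons for a liberal to oppose abortion?"
))
```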

The three examples offer a sneak peek into how psychological targeting could potentially be a catalyst for social good. But not all of them are a reality yet. Some are mere dreams in need of bold leadership to bring them to life. Others already exist but are yet to realize their full potential. And there are still others I haven’t discussed.

Undoubtedly, there’s much more work to be done. But it starts with people like you and me daring to ask the what if questions. To pause and think about all the ways in which psychological targeting could help people live healthier and happier lives. Individually and collectively.

Leaving my village allowed me to appreciate the benefits of being known to the people around me. I’m grateful for that opportunity. But while I now view village life more positively in retrospect, it will never be entirely positive. The things I hated about it as a teenager would still frustrate me today. Even though I miss some of the intimacy the village provided, I’m still not keen on people meddling in my life without permission.

The same holds true for psychological targeting. No matter how confident I am that psychological targeting could make the world a better place, I’ve never been able to shake off my discomfort with it entirely.

And as we will explore in the next chapters, there are good reasons to be skeptical.