7 When Things Go Wrong

In 2020, a woman in the south of China applied to relocate to Hong Kong with her husband, but her request was denied. The reason? An algorithm had flagged their relationship as suspicious. The couple hadn’t spent enough time in the same place, and when they failed to celebrate the Spring Festival together, the algorithm sounded the alarm and concluded that the marriage was fake. Cases like this show how your personal data can be used to discriminate against you. However, the latest plans by the Chinese government go far beyond this, and—for the first time—include forms of psychological targeting.

As a recent investigation by the New York Times revealed, the Chinese government is rolling out profiling technology that not only tracks its citizens but is increasingly geared toward preventive action.1 This includes using people’s psychological dispositions and current situations (see chapters 2–4) to predict their likelihood of petitioning the government or participating in a protest. Short-tempered, paranoid, or overly meticulous? High risk of voicing dissent. Even higher risk if you’ve recently gone through some personal trauma or tragedy. Sorry, but with those statistics, you aren’t welcome in Beijing. You could pose a threat to the government, and officials don’t like to take chances. Should you try to make your way to the capital anyway, you will be placed under heavy surveillance and potentially stopped.

Psychological targeting opens the door to a whole new level of discrimination—one based not merely on what we can observe on the surface but on what lies beneath. In 2016, Li Wei, a scientist at China’s National Police University, described this trend, noting, “Through the application of big data, we paint a picture of people and give them labels with different attributes. For those who receive one or more types of labels, we infer their identities and behavior, and then carry out targeted preemptive security measures.”2

Are you suffering from mental health problems? Do you have HIV? Or are you an unemployed young adult? In China, any of these labels could mean heavy surveillance and restrictions on the public places you are allowed to visit.

It’s possible that you don’t find any of this particularly alarming. I teach a class on the ethics of personal data to executives at Columbia Business School. There are always a handful of people who respond to my cautionary tales with a snide comment about how they don’t care about their privacy and have given up the idea of ever getting it back.

And I can relate to their sentiment. Trying to reclaim what should be ours by law can feel like an impossible uphill battle. But there are at least two problems with comments like this. First, I don’t believe them. Not because I think they are lying to me when they say they don’t care, but because I believe that their judgment is based on at least one of the two fallacies that trick our brains into jumping to the wrong conclusion: the “It’s worth it” fallacy and the “I have nothing to hide” fallacy (more on those later).

Second, giving up on privacy means giving up on much more than just your right to be left alone. It means giving up on your ability to make your own decisions.

As the philosopher Carissa Véliz puts it in the title of her book: “Privacy Is Power.” The moment others have unrestricted access to your deepest psychological needs, they gain the power to control what you do—and eventually who you are.

Privacy Isn’t Dead Yet

In 1999, the CEO of Sun Microsystems, Scott McNealy, famously declared, “You have zero privacy anyway. Get over it.”3 What seemed like a bold claim at the time has evolved into a near-prophetic statement in today’s era of big data.

Notably, only a fraction of this is driven by covert surveillance operations such as those uncovered by whistleblowers like Edward Snowden. Much of it stems from the everyday choices we make: we decide to use Facebook, we decide to enable GPS tracking, and we decide to invite Alexa into our homes.

Has privacy become obsolete just as McNealy predicted? Even if we aren’t as cynical as some of my students, our behaviors seem to suggest so. When was the last time you carefully read the terms and conditions of the products you’re using?

But let me ask you this: Would you be comfortable letting a friend or colleague scroll through your messages? What about making your Google searches publicly available? And what about releasing all your credit card transactions and GPS locations?

My guess is that these options don’t seem particularly appealing to you. Just think about the offline equivalents of these examples. The postal worker reading your mail, the therapist selling your session transcripts, and the stalker following your every step would all go to jail. If you still think you’ve lost all appetite for privacy, I encourage you to take down your window blinds and remove all your account passwords. No need for them in your blissful post-privacy existence.

But, if our fundamental need for privacy isn’t dead, then why do we do so little to protect it and instead try to convince ourselves we’ve stopped caring?

Over the years of talking to students and giving public talks, I have come to realize that people often substitute the question of whether they care about their privacy with two simpler but misleading ones: Is sharing my data worth it? and Am I worried about my data being out there?

These questions serve as mental shortcuts to a more complex issue—whether we care about our privacy and should be more protective of it. They might seem reasonable substitutes at first glance, but they can lead us astray, masking our true feelings about privacy and influencing our actions in ways that may not serve our best long-term interests.

The “it’s worth it” fallacy

When I ask students whether they care about their online privacy and how they protect it, they often respond by reciting all the benefits they would have to give up if they didn’t share their personal data. They tell me about Google Maps, Netflix recommendations, and Uber rides.

I wholeheartedly agree that these are fantastic perks. But that’s answering a different question: Is sharing my personal data worth it?

Granted, this approach doesn’t seem completely unreasonable. We often assess the value of a certain item by how much it would hurt us to give it up. I know that drinking five cups of coffee a day might not be great for my health, but I simply enjoy the experience too much to change my behavior.

Similarly, sharing your personal data can come with amazing benefits that you might simply be unwilling to give up: turn off the GPS on your smartphone and you are lost without Google Maps; disable your social media accounts and you will be disconnected from your social life; cancel your credit card and you might have a hard time getting by.

But just because the benefits of sharing your personal data might outweigh the potential risks doesn’t mean you don’t care about your privacy (just as my decision to have five cups of coffee a day doesn’t mean I wouldn’t also like to be healthy). All it means is that your immediate desire for pleasure or convenience supersedes your privacy concerns.

Ideally, wouldn’t you prefer to enjoy these services while also maintaining a high level of privacy? What makes substituting the original question with a cost-benefit analysis particularly problematic is that we are caught in an unfair battle against the tech industry, which has every incentive to highlight the benefits of sharing our personal data and no incentive to highlight the costs.

The upside of sharing your location data with Google Maps is obvious and immediately tangible. You get where you want to go faster, without ever getting lost. A no-brainer. The downside of doing so is a lot more nebulous.

Think about it for a second: What exactly are the potential costs of sharing your geo-location with smartphone applications? What are the types of inferences about you that someone can draw from getting access to your longitude and latitude coordinates? And how could this information be used to your detriment, now or in the future?

Before reading this book, you probably had a vague suspicion that sharing your location with third parties could reveal more about you than you might like. It might give away your home address or the place you work. It might reveal which stores you regularly shop in. But I doubt that any of the January 6 rioters at the US Capitol expected their GPS records to get them up to twenty years in jail. And it almost certainly isn’t top of mind for most people that your GPS records can offer insights into your personality and mental health.
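To make that more concrete, here is a minimal sketch—purely illustrative, with made-up coordinates, a hypothetical function name, and a deliberately crude rule—of how little it takes to turn raw latitude and longitude into a guess about where someone lives.

```python
# Illustrative only: infer a likely home location from timestamped GPS pings
# by finding the spot a person most often occupies in the middle of the night.

from collections import Counter
from datetime import datetime

def likely_home(pings):
    """pings: list of (ISO timestamp, latitude, longitude) tuples.
    Returns the grid cell (coordinates rounded to roughly 100 meters)
    visited most often between midnight and 6 a.m."""
    night_cells = Counter()
    for ts, lat, lon in pings:
        if datetime.fromisoformat(ts).hour < 6:  # middle of the night
            night_cells[(round(lat, 3), round(lon, 3))] += 1
    cell, _ = night_cells.most_common(1)[0]
    return cell

pings = [
    ("2024-03-01T02:14:00", 40.7291, -73.9964),  # night ping
    ("2024-03-02T03:40:00", 40.7293, -73.9962),  # night ping, same block
    ("2024-03-02T13:05:00", 40.7410, -74.0011),  # daytime: office, gym, stores
]
print(likely_home(pings))  # -> (40.729, -73.996): a decent guess at "home"
```

Point the same handful of lines at working hours instead and you get an employer; at weekends, the stores you shop in. None of it requires sophisticated machine learning—just access to the data.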

But there’s more. In many instances, you might not even be aware that you’re sharing data in the first place.

Take the example of a young woman—let’s call her Anna—whose naked picture was posted on social networks without her permission in December 2022.4 The picture showed her in a lavender shirt sitting on the toilet with her white pants pulled down all the way to her ankles. The mystery wasn’t just how the picture had found its way online. Anna had no idea when the picture was taken, let alone who had taken it. How could someone capture such an intimate moment without her knowing? It wasn’t a heartbroken ex. Nor the work of a skilled three-year-old toddler. The culprit turned out to be her Roomba vacuum cleaner.

Anna had been part of a customer focus group tasked with testing a new Roomba model. She had signed the paperwork without giving it much thought, welcoming the fully automated master snooper into her home. The fine print of the agreement stated that the vacuum cleaner could take pictures at any given point in time and that all footage belonged to iRobot.

The purpose of this exercise wasn’t to spy on people in compromising bathroom situations. The data was collected to train Roomba’s object recognition models to improve its navigation abilities. None of this mattered to Anna. Her naked picture had been leaked by contract workers in Venezuela, and there was nothing she could do to get it back.

The “it’s worth it” fallacy focuses our attention on the few occasions in which we happily share our data for better service. But that’s the exception, not the rule. In the same way Anna didn’t benefit from having her naked picture leaked online, you don’t benefit from most of the ways in which your data is used. But because we are constantly reminded of the amazing benefits some of the data-hoarding technologies around us do offer, we don’t even think about asking for more.

You shouldn’t be forced to choose between convenience and service on the one hand, and privacy on the other (I’ll return to this in chapter 9). You should demand and receive both.

The “I have nothing to hide” fallacy

A second response I hear whenever I raise concerns about data privacy is: “I am not worried about my privacy, because I have nothing to hide.” This sentiment echoes a narrative that has been skillfully nurtured by Silicon Valley: if you don’t feel comfortable with others accessing your personal data, something must be wrong with you.

But that’s nonsense. As the whistleblower Edward Snowden eloquently argued in his book Permanent Record, privacy is not about hiding something illegal or shameful. It’s about maintaining control over your personal information and having the freedom to decide for yourself how that information is collected, used, and shared.

I believe my students when they say they are not worried about their data being out there. But adopting this mindset is shortsighted and potentially dangerous.

Not having to worry about your personal data getting into the wrong hands is a privilege. Remember how computers can predict sexual orientation from Facebook likes, status updates, and even photographs? Being gay remains a crime in seventy countries around the world. And some countries, including six nations that are members of the United Nations, still impose the death penalty for same-sex sexual activities.

Granted, as tragic as this reality might be for those who suffer the consequences of a post-privacy world, it might not be immediately obvious why it should impact how you personally feel about sharing your data. You might fully empathize with the gay population in Saudi Arabia. But if there is no reason for you to believe that your data could hurt you in any way, why would you be opposed to sharing it? The answer is simple: no matter how safe and comfortable you feel about sharing data right now, you cannot predict what is going to happen with it in the future.

Take the history of my own home country. In 1932, Germany was a democracy. In 1933, it wasn’t. Over 6 million Jews from all over Europe lost their lives in the Holocaust. And the accessibility of personal data played a major role in the scale of the atrocities. Some countries, including the Netherlands, recorded religious affiliation in their official census records, making it easy for the Nazis to track down and arrest members of the Jewish community. Other countries, including France, did not have this information. In France, about 25 percent of the Jewish population perished. In the Netherlands, 73 percent did.

Now imagine the Nazi regime having access to the digital footprints of every single person in Germany and the rest of Europe. What if they knew exactly what people were doing, where they were going, and who they were socializing with?

The Apples, Facebooks, and Googles of the time might have been happy to share user data with the Nazi regime and secure themselves a spot in the limelight. And even if their CEOs had resisted the pressure to comply, they would have simply been replaced with ones that were more amenable to the intentions of the government. If you are like me, you’d rather not imagine this. The thought sends chills down my spine every single time.

This is precisely why robust privacy and data protection laws are so essential. The risk of a gay person in Saudi Arabia today, or that of a regime-critical citizen in China, could one day be risks you face in your own country.

The 2022 Supreme Court decision to overturn Roe v. Wade made this painfully real for American women. Within a matter of days, millions of women suddenly had to worry that their search histories, use of period-tracking apps, online purchases, or GPS location data could be used against them.

No matter how safe and comfortable you feel now, your data could be misused in the future. Always remember: data is permanent, but leadership isn’t.

Why Your Life Isn’t Truly Yours without Privacy

You might still be skeptical about whether privacy is really such a big deal. Maybe you prefer to live in the here and now rather than worry about some hypothetical risk in the future. But here’s the catch. By giving away your personal information, you’re not just risking a potential future threat. You’re sacrificing something valuable and tangible, right now: your self-determination.

Giving up your privacy means giving up the freedom to make your own choices and live life on your own terms. When others have access to your personal data and can use it to gain intimate insights into who you are, they hold power over you and your decisions.

Take the case of Kyle Behm, a young man from Maryland. In 2014, Kyle had taken a break from studying at Vanderbilt University to deal with some mental health issues. He was looking for a part-time job when his friend recommended him for a minimum-wage position at a Kroger supermarket. The application seemed like a formality. After all, Kyle was a smart and talented university student. But Kyle was turned down for the job.

Flabbergasted by the outcome, he asked his friend for an explanation. He was told that his application had been “red-lighted” by the personality test he had taken during the interview process. Kyle’s honest reporting of his struggles with bipolar mood swings had made his neuroticism score shoot through the roof.

The testing company Kronos put a red flag on his application, raising concerns that Kyle was likely to ignore customers if they were upset. Faced with this prognosis, Kroger rejected his application. As did every other company Kyle applied to; they all used the same personality test.

Kyle had lost control over his life. Others had decided for him that his personality wasn’t suited for even the most basic jobs. What was his life going to look like? With all doors being shut in his face, what was he going to do? Desperate and hopeless, Kyle took back control over his life one last time. He committed suicide.

Kyle’s tragic story was featured in the 2021 HBO documentary Persona: The Dark Truth Behind Personality Tests, which examines the use and impact of personality tests in various parts of society such as the workplace, schools, and even dating. Although the documentary’s narrative is one-sided and lacks critical nuance, it drives home an important lesson: the choices we have available to us, and the paths our lives could take, are often shaped by others’ perceptions of who we are.

As in my village, these perceptions can both open and close doors for us. My neighbors introduced me to the right people when I needed a job because they thought of me as reliable and trustworthy. However, for the same reasons, they also sometimes forgot to invite me to some of the more questionable—but fun—events going on in the underground scene of Vögisheim (and yes, my dad being a police officer probably didn’t help either). In the case of Kyle, the perceptions generated by the personality test closed not just one but many doors.

It gets worse when you move away from self-reported questionnaires to automated personality predictions from people’s digital footprints. That’s not hypothetical. There are plenty of companies offering just that. TurboHire, for example, deploys chatbots and natural language processing to infer your personality, promising recruiters faster screening of job applicants.

Similar promises are made by Crystal Knows, a company that analyzes publicly available information on websites like LinkedIn to predict a candidate’s personality. Such automated assessments strip away every last bit of agency you might have had over the recruiting process.

Despite all the shortcomings of traditional self-reported surveys, you were still in charge of answering the questions. It was up to you how to respond to the question, “Do you make a mess of things?” Once companies start predicting your psychological traits via machine learning algorithms, you have no say in that matter whatsoever.

But that’s not even the worst part. Losing control over how other people think of you is never fun. However, it’s particularly upsetting when their perceptions are inaccurate. Or plain wrong. Predictive models can be fairly accurate on average and still make plenty of mistakes at the individual level. A model that gets it right for 90 percent of applicants still gets it wrong for one in ten—and if you happen to be that one, the average is cold comfort.

To be clear, humans aren’t perfect either. Hiring managers don’t need test scores or computer-based predictions of personality traits to make discriminatory hiring decisions. As humans, we have plenty of stereotypes and decision-making biases to fall back on.

If you’re open-minded and extroverted, for example, you are more likely to hire the open-minded extrovert than the more conservative introvert (a phenomenon known as the similarity principle). But even though human biases are often harder to detect and correct for, they are usually less systematic than algorithmic bias.

Think back to Kyle’s example. The moment a company used Kronos’s assessment as part of its hiring process, he was doomed. Kyle’s profile was always interpreted the same way: strong agreement with questions 1 + 5 = high neuroticism = red flag. While some hiring managers interviewing him might have come to the same conclusion, others might have shown more empathy or used different selection criteria altogether. The job wouldn’t have been guaranteed, but it could have at least been an option. A bias in a standardized test or predictive algorithm means game over.
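To see why, consider a minimal sketch of such a deterministic screening rule. The item numbers, threshold, and retailer names below are hypothetical—this is not Kronos’s actual scoring logic—but the mechanism is the point: the same answers produce the same verdict, every time, everywhere the rule is deployed.

```python
# Hypothetical screening rule (not Kronos's actual logic): red-flag any
# applicant whose combined answers to two "neuroticism" items cross a threshold.

def neuroticism_score(answers):
    """Sum of answers to (hypothetical) items 1 and 5, each rated 1-5."""
    return answers["q1"] + answers["q5"]

def screen(answers, threshold=8):
    return "red flag" if neuroticism_score(answers) >= threshold else "pass"

applicant = {"q1": 5, "q5": 5}  # strong agreement with both items

# Every company running the same test reaches the same verdict: there is
# no second opinion, no empathetic interviewer, no roll of the dice.
for employer in ["Retailer A", "Retailer B", "Retailer C"]:
    print(employer, "->", screen(applicant))  # always "red flag"
```

A panel of human interviewers might split three ways over the same candidate; the rule, by construction, cannot.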

It is tempting to assume that doors close (rather than open) for those unfortunate enough to possess personalities we consider problematic. Kyle’s high levels of neuroticism are a prime example. Most of us don’t like to admit that our emotions get the best of us sometimes. Is it really so surprising, then, that Kyle had a hard time finding a job? As comforting as this assumption might be, it’s not only morally but also factually wrong.

Take agreeableness—the personality trait typically thought of as highly desirable. It’s good to be seen as nice and caring, isn’t it? Well, that depends on what you’re trying to solve for. As I said in the previous chapter, agreeableness is also the trait that has been linked to higher levels of loan defaults. If your bank learns that you are the nice and caring type, it might think twice about offering you a loan. And even if it does, your high-risk disposition might get you terms that are much less favorable than those your more critical and competitive counterparts receive.

But granting others access to your personal data—and with it, your psychology—doesn’t just determine which options are available on your life’s menu. It can also influence which of those options you choose. And we’re not talking peanuts here.

You make about 35,000 decisions every day.5 Those decisions could be as mundane as choosing today’s socks or breakfast cereal, or as consequential as deciding which career to pursue or who to marry. Even though we all love the idea of being in control of our decisions, few of the choices we make are entirely our own. We are influenced by what other people do, the context in which a decision is made, or our mental state at the time we make a decision.

Think of the last time you went grocery shopping hungry. Chances are you not only bought more food than you needed, but your cart may have had a few additional items that were more satisfying than healthy.

As I have shown in the previous two chapters, the ability to tap into your psychological needs and motivations gives others power to interfere with your decisions and potentially alter them. In some cases, that might be extremely helpful. But in others, it might not. And in too many instances, you might not even be aware you are being manipulated.

Ever thought about buying life insurance? As a life insurance company, I want to know if you’re a little neurotic. If you are, I will add you to my shortlist and make sure to repeatedly target you with offers. Granted, life insurance might be just what you need to put your mind at ease. But even if that’s the case, the mere fact that I’ve singled you out as a potential target and exposed you to many more ads than your emotionally stable neighbor means that you did not make the decision to buy life insurance alone. Or maybe I leverage your anxiety to sell you a little more than you need? Highlighting all the terrible risks you could be facing without life insurance might be enough to get you to reach a little deeper into your pocket.

In contrast to Kyle’s story, I’m not creating or wiping out paths your life could have taken. I’m simply making some of the paths more appealing than others. I might not be able to convince you to swerve all the way from the outer right to the outer left path—just as I don’t think Cambridge Analytica could have easily converted a die-hard Democrat into a Trump supporter—but many of the paths we can choose from are near one another. No need for a hard swerve; a small and gentle course correction will do. Should I buy Colgate or Crest? Study psychology or economics? Travel to Italy or Croatia?

As I’ve shown in chapter 5, insights into your psychology give me the instruction manual for how to paint these paths such that you pick the one I want you to travel on. Sometimes, that might not be the end of the world. If I get you to buy Crest toothpaste instead of Colgate, who cares? But what if I aimed to influence which partner you pick, where you invest your money, or who you vote for? Or all of the above? At what point does your life no longer belong to you?

It all comes down to this: you can’t take control of your life without privacy. Only when access to your personal data—and with it, the inferences that can be derived from it—is restricted do you truly have a choice about what to share, with whom, and for what purpose.

You need privacy to be the conductor of your own life.

Haven’t We Seen All This Before?

Our history is replete with examples of discrimination and manipulation. While we might not have called it psychological targeting, we naturally try to read the people we encounter, learn about them, and then use that knowledge to decide how to interact. We mingle with the good guys and avoid the bad guys (unless you’re a motorbike-loving teenage girl, of course). We recommend movies based on what we believe our friends will enjoy. And we intuitively adapt our speech to suit the listener.

You don’t talk to a three-year-old kid the same way you talk to your mom or your boss. When I was young, I knew exactly how to ask my dad for a favor, and how to persuade my mom to give in. These forms of psychological targeting come so naturally to us that we typically don’t even notice using them.

What makes the algorithmic version of psychological targeting so different from its traditional counterparts is the sheer scale at which it can be deployed. Our face-to-face interactions might be highly personalized, but they are also highly limited in scope. In most cases, personalized conversation is a one-on-one endeavor. Psychological targeting, in contrast, can reach millions of people all at once.

At the same time, there’s no reciprocity in the process anymore. Back in the village, my neighbors invaded my privacy, and I invaded theirs. They influenced my decisions, and I returned the favor. Today, it’s largely a one-way street, with anonymous counterparts pouring money into understanding and changing our minds. And we don’t stand the slightest chance of countering their moves.

Importantly, the scalability and unidirectionality of psychological targeting alone don’t make it a novel threat. Any radio ad or TV commercial is some sort of large-scale manipulation, and Cambridge Analytica or the Chinese government aren’t the first to figure out how to use new broadcast technologies to silence contrarian voices and stir fear among the public either.

However, while the goals of psychological targeting and its traditional counterparts might be similar, there’s a big difference. Not only does traditional TV or radio propaganda follow a one-size-fits-all approach, but it is also visible to everybody (that is kind of the point). Psychological targeting, on the other hand, happens entirely in the dark. We have no idea what content and marketing other people receive.

One reason why it took so long to recognize the interference of Cambridge Analytica and other actors in the presidential election was that the ads were targeted. The propaganda was only visible to those with the highest likelihood of following it. And even among those targets, the propaganda seen by one person in Houston, Texas, might have been entirely different from that seen by another person just a few blocks down the road.

It is like being in a dark backstreet with Mark Zuckerberg, who whispers in your ear whatever the highest bidder asks him to. Not exactly a pleasant thought!

The modern version of psychological targeting is so powerful (and therefore potentially dangerous) because it combines two different worlds. It follows the scale of traditional propaganda but has the granularity and depth of face-to-face interactions.

For now, there is no way to collectively monitor what is being said and shared, which makes controlling this explosive combination a herculean task we haven’t yet learned to master.

The tension between the bright and dark sides of psychological targeting in many ways mirrors the tension I experienced growing up in the village.

Just as being seen and understood allowed me to receive the guidance and support of people who wanted me to succeed in life, psychological targeting can make us all better off—both individually and collectively. But in the same way that my neighbors often interfered with my life in self-serving ways I didn’t appreciate, psychological targeting can also be used to exploit our deepest fears and make us dance like mindless marionettes.

As a teenager, I had no choice but to deal with the ups and downs of village life. For me, this meant learning to play the game to my advantage. I found ways to encourage my neighbors’ support whenever I needed it. And I got better at protecting myself from their uninvited interference.

We need to do the same for today’s digital village. Just as I couldn’t simply pack up my things and leave, we can’t go back in time to a place where big data and technologies like psychological targeting don’t exist. We can’t put the genie back in the bottle. But we can become better at managing it.

That’s what part 3 of the book is all about.