5
TOO LITTLE PRIVACY
The Internet never forgets. Everything we do online is recorded in credit card databases, Google archives, Facebook servers, computer cookies, and the electronic vaults of the NSA. If someone were to piece together the information we scatter online, they would know a lot about us – the brands we like, the movies we watch, our political views, and our sexual preferences. Data-mining firms like eXelate and Intellidyn do just this, collecting information about our online behavior and offering it to firms for marketing purposes. Facebook knows our age, gender, education, residence, travels, political views, and likes and dislikes for many products and services – information we have willingly provided them. Imagine if Facebook were to combine their records with commercially available data from credit card companies, credit rating agencies, and census databases: they would have more information about us than our closest friends and family. In fact, even a simple Google, Facebook, or LinkedIn search can turn up a lot of information we might consider private, such as our home address, employer, approximate salary, romantic relationships, and political views. The more time we spend online, the less privacy we have – privacy being “the right of individuals, groups, or institutions to determine for themselves when, how, and to what extent information is communicated to others” (Westin, 1967) – in other words, the right to be left alone.
Lack of privacy on the Internet is the latest chapter in a history of surveillance going back to ancient Egypt, where imperial records tracked births, deaths, immigration, taxation, agricultural production, and military service. The Book of Numbers in the Old Testament tells us that the nomadic people of Israel regularly conducted a census covering every member of the tribe. In the Middle Ages, the Domesday Book contained detailed property records of English land-holders and their dependents. In the twentieth century, Britain was the first country to create large-scale bureaucratic databases to facilitate conscription during the First World War. Today, the Chinese government uses databases to record not just the financial credit of citizens but also information about their social and political behavior.
Recent events have further increased the reach of the surveillance state. Since the financial crisis, governments have come under pressure to make the rich pay their fair share of taxes. As a result, authorities are now tracking incomes and expenditures more zealously than ever before. The United States requires foreign financial firms to report the transactions of all American clients, and more than 100 countries have signed up to the Common Reporting Standard, which requires them to regularly exchange information about foreign account holders. After 9/11 and other terrorist attacks, governments around the world have installed new surveillance tools to track their citizens in the name of security.
Our zone of privacy has shrunk not just online but also when we make phone calls or send text messages. These can be recorded by governments and Internet service providers, and layered with geolocation data from cell phone towers to create a detailed picture of our movements. Some of the devices we carry can be programmed to record data unobtrusively. Facebook has considered offering a mobile app that would turn on our cell phone microphones to identify songs or TV shows playing around us. The app would then automatically add a tag to our status showing, for example, that we are listening to the band Arcade Fire or watching Game of Thrones. Our privacy is also being eroded by the Internet of things, where objects such as refrigerators, cars, and entire homes are connected to a virtual network. The IT analyst firm Gartner estimates that there will soon be 20 billion devices connected to the Internet of things (Gartner, 2015). Although these networks make products run more efficiently, they also increase the risk of hackers controlling our homes and accessing private information. Consider, for example, a car insurer that offers 20 percent off if you let them install a vehicle geo-tracker that monitors your driving skills. If you accept this offer, remember that the insurer will also know if you go to the liquor store three times a day, visit your doctor every Friday, and check into cheap motels on weekday afternoons. One day, this information could be used against you by hackers, governments, life insurance companies, or divorce lawyers.
The surveillance state is spreading from the online to the offline world. If you live in cities like London, Paris, New York, Dubai, Singapore, or Hong Kong, your public movements are constantly being recorded by CCTVs (closed-circuit televisions) on every street corner. The day is not far away when camera-mounted drones will patrol our skies to keep the peace. As the world’s population becomes more urbanized, the unblinking eye of the surveillance camera will become our constant companion wherever we go. As Orwell predicted many years ago, Big Brother is now watching us.
Costs of Too Little Privacy
Privacy is a natural human instinct because it offers a number of psychological benefits. We have a need for personal autonomy, which is served by maintaining a zone of privacy that separates us from others. We have a need for emotional release from the tensions of everyday life by entering a private area safe from prying eyes. We have a need for self-evaluation where we can retreat into our own space to reflect on life experiences, and privacy helps us set appropriate boundaries for different people and institutions. In addition, there are practical reasons for privacy, such as keeping potentially harmful medical, financial, or legal information from falling into the hands of those who might misuse it. WikiLeaks showed how governments intercept personal communications, and we often read about hackers breaking into financial, political, and business databases.
Given the importance of privacy, how concerned are people about their lack of online privacy? Early research on this topic indicated a relatively low level of privacy concern, but more recent studies indicate that such concerns are on the rise. An early study by Gross and Acquisti (2005) reported that Facebook users could easily be persuaded to hand over personal information such as their address and phone number, and relatively few Facebook users changed the permissive default privacy settings on their accounts. However, more recent studies show that Facebook users have become reluctant to share personal information, and many have proactively changed their privacy settings (Christofides, Muise, & Desmarais, 2009; Dey, Jelveh, & Ross, 2012; Fogel & Nehmad, 2009). A survey by the Pew Research Center reported that 81 percent of people do not feel secure about using social media to share private information, 68 percent feel that way about online chat, 57 percent about email, 46 percent about talking on cell phones, and 31 percent about talking on landlines. Not surprisingly, people who were aware of government surveillance programs were more likely to say that their communications were not secure. Interestingly, people seem to be equally distrustful of advertisers and government, with more than 70 percent of people expressing privacy concerns in both cases. Other common privacy concerns include misuse of personal information by malicious individuals for bullying, stalking, or character assassination and misuse by criminals for identity theft and financial fraud (Boyd & Ellison, 2007; Krasnova, Kolesnikova, & Guenther, 2009).
The Privacy Gap
If people are becoming more concerned about privacy, we might expect them to act more carefully when they go online. Strangely enough, that’s not what actually happens. One of the contradictions of the Internet is that people often express a concern about privacy when asked in surveys, but when it comes to actual online behavior, people act as if no one is watching them. Several studies have reported this “privacy gap” between expressed privacy concerns on one hand and observed online behavior on the other (Acquisti & Gross, 2006; Stutzman & Kramer-Duffield, 2010; Tufekci, 2008). For example, Acquisti and Gross (2006) found that the 16 percent of people who reported being “very worried” about the possibility that a stranger knew where they lived still revealed this information on their Facebook profile. More than 55 percent of respondents in the Pew study described earlier said they were willing to share information about themselves online in exchange for free samples. Sexting is common among teens, despite the ease with which supposedly private pictures and video can be posted online. People often download music and movies illegally; pornographic websites are among the most visited worldwide; and many politicians and public figures have been laid low by indiscreet tweets and Facebook posts. Even private correspondence and pictures are not safe – ask Jennifer Lawrence about stolen iCloud pictures or Hillary Clinton about hacked emails. In each of these cases, people discovered to their dismay that online behavior is never strictly private.
Reasons for the Privacy Gap
Why is there a gap between expressed privacy concerns on one hand and actual online behavior on the other? One reason for the privacy gap is that different information is salient in our minds at different times. Survey questions about the Internet are often phrased broadly, such as “How concerned are you about your private information being accessed by others online?” When answering such general questions, people are likely to use their general knowledge about online privacy. Because there is frequent media coverage about violations of online privacy – the Snowden revelations or the Hillary Clinton email hack – people are likely to tell researchers in surveys that they too are concerned about the lack of online privacy. However, when people are browsing a particular website, factors other than privacy are more salient. For example, when we are on Facebook, we are busy browsing pictures, status updates, and personal messages from friends. Similarly, we focus on videos on YouTube, news on CNN, and products on Amazon. In all of these situations, we have more immediate objectives during browsing that take precedence in our minds over abstract concerns about privacy.
This mental bias in favor of the immediate and concrete is known as the availability bias (Tversky & Kahneman, 1973). The availability bias explains not only the privacy gap but also why people become overly fearful of terrorism after spectacular but one-off attacks such as 9/11; why drivers buy additional insurance after suffering a minor accident even though the probability of future accidents hasn’t really changed; and why politicians prefer to use vivid stories and anecdotes in their speeches rather than statistical data. The concrete and vivid is more persuasive than the abstract and general. When we are browsing the Internet, the availability bias ensures that immediate and concrete objectives such as entertainment, social interaction, and information gathering loom larger in our minds than more abstract concerns about privacy. Another reason why the availability bias increases the privacy gap is that our own first-hand experiences are more salient to us than things we read or hear about from others. Since many of us have not personally suffered losses due to invasion of online privacy, the issue is not particularly salient when we go online. And when something is less salient at any given moment, the availability bias predicts that it will have a lesser effect on our online judgments and behavior.
Yet another reason for the privacy gap is our tendency to follow the behavior of others, which is known as the principle of social proof (Cialdini, 2004). Try this experiment yourself: Ask three of your friends to stand on a sidewalk and look up to the top of the nearest building. Very soon you will see that most passers-by are looking up as well. You may have seen signs in hotel bathrooms saying “more than 75 percent of our guests reuse towels.” Research has shown that these signs are effective at increasing towel reuse because people tend to follow the behavior of previous guests (Goldstein, Cialdini, & Griskevicius, 2008). Similarly, electricity companies try to reduce energy consumption by informing customers that “most of your neighbors” have installed energy-efficient thermostats. Social proof operates on the Internet to normalize illegal downloading of movies or music – if most of my friends are doing it, I can do it too. Social proof has a stronger effect on our behavior when others doing the behavior are similar to ourselves. This is why illegal downloading is more common among younger people: they are surrounded by a large peer group of downloaders similar to them in terms of age, gender, and education.
The privacy gap is also driven by another psychological bias called hyperbolic discounting, which is a tendency to discount events in the distant future compared to the near future (Kirby & Herrnstein, 1995). Hyperbolic discounting explains why people often fail to save enough money for retirement, ignore the dangers of global warming, or indulge in chocolate cake when they are supposed to be on a diet. In each of these examples, the pleasure of the present looms larger than the price to be paid in the future. Online behaviors such as pirating movies or browsing extreme pornography provide immediate gratification, and hyperbolic discounting favors the pleasure of the moment over long-term concerns about privacy.
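The signature of hyperbolic discounting is preference reversal: an immediate small reward beats a larger delayed one, yet the same pair of options flips when both are pushed into the future. A minimal numerical sketch, assuming the standard one-parameter hyperbolic form V = A/(1 + kD) and an arbitrary illustrative discount parameter k (not an empirical estimate):

```python
# Hyperbolic discounting: present value V = A / (1 + k*D),
# where A is the amount, D the delay in years, and k the discount parameter.
# k = 2.0 is an arbitrary illustrative value, not an estimate from any study.
def value(amount, delay_years, k=2.0):
    return amount / (1 + k * delay_years)

# Choice 1: $50 now vs. $100 in one year -> the immediate option wins.
print(value(50, 0), round(value(100, 1), 1))   # 50.0 vs ~33.3

# Choice 2: the same pair shifted five years out
# ($50 in 5 years vs. $100 in 6 years) -> the larger, later option now wins.
print(round(value(50, 5), 1), round(value(100, 6), 1))   # ~4.5 vs ~7.7
```

The same flip never happens under constant-rate (exponential) discounting, which is why hyperbolic discounting is invoked to explain giving in to the pleasure of the moment.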
A fourth reason for the privacy gap is the self-positivity bias, which is our tendency to assume that we are special and that good things are more likely to happen to us than to other people (Menon, Block, & Ramanathan, 2002). I sometimes test the self-positivity bias in my classes by asking students on the first day of class to predict their expected grade at the end of the semester. I find that the majority of students say they expect to get an A, even though the grading policy at my university – which students are well aware of – permits professors to award As to fewer than 10 percent of students. I don’t tell students this, but the truth is that most of them will not get the A they expect. Like the children of Lake Wobegon, most people consider themselves above average, even though most of us cannot, by definition, be above the median.
We can see the self-positivity bias at work in many other situations: entrepreneurs start businesses all the time hoping to create the next Google, even though failure is the norm for start-ups; high divorce rates don’t deter people from getting married; and gamblers flock to casinos even though they know the odds are stacked heavily against them. People act as if fortune favors them even if the facts don’t. The self-positivity bias on the Internet leads people to discount the likelihood of bad things happening to them such as stalking, trolling, identity theft, and financial fraud. Because of this discounting of personal risk, people’s online behavior remains relatively unaffected by privacy concerns.
A fifth reason for the privacy gap is that we trust familiar brands. When we buy a new BMW, we trust that the car will deliver the promised horsepower and mileage, and we don’t hire a mechanic to do an independent verification of the manufacturer’s claims. When we buy Advil at the pharmacy, we trust that it contains the promised amount of ibuprofen without testing the tablet at a laboratory. Our online behavior is influenced by brand familiarity in a similar manner. When we share our personal pictures and information on Facebook, we trust that Facebook will not use our information in ways that can hurt us. The more familiar we are with brands such as Facebook, Amazon, and PayPal, the less proof of privacy we demand from them, and the more careless we become about our online behavior.
A final reason for the privacy gap is our sense of anonymity on the Internet. The effect of anonymity on human behavior was examined in a study which monitored 27 houses in Seattle on the evening of Halloween (Diener, Fraser, Beaman, & Kelem, 1976). Inside each home was a table with two bowls: one filled with candy bars, the other with pennies and nickels. As children arrived in costume for trick-or-treat, they were asked by the host – who was actually a confederate in the study – to take only one candy and not to touch the money. The host then told the children that she had to get back to work in another room. In reality, the host was looking through a peephole to see how much candy or money the children would take.
The researchers were trying to understand whether perceived anonymity influences bad behaviors such as cheating and rule-breaking. To get to the bottom of this issue, researchers varied the anonymity of different groups of children by asking some kids their names and addresses while others were not asked any personal questions. The researchers found that children who felt anonymous because they had not been asked their names and addresses were much more likely to take forbidden candies and money – the rate of cheating was around 15 percent among children who gave their name and address, but more than 50 percent among those who were anonymous. The disinhibiting effect of anonymity is an important reason why otherwise law-abiding citizens take part in soccer riots and good students participate in hazing rituals at universities. People feel anonymous in crowds, and anonymity lowers the psychological barriers to bad behavior.
We usually browse the Internet by ourselves, and this solitude increases our sense of anonymity. Feeling anonymous, in turn, can increase negative online behaviors such as chatting with strangers, flaming, trolling, or expressing bigoted opinions. Indeed, research has shown that people are more willing to discuss controversial topics online when their identity is hidden (Chen & Berger, 2013). Of course, perceived anonymity is not always a bad thing. Research also shows that feeling anonymous can increase creativity because our thinking becomes less inhibited when we are not being watched and judged by others (Dahl & Moreau, 2002). An interesting implication of this finding is that Internet censorship in authoritarian countries could have an unexpected downside. Creativity flourishes when people feel they are not being watched, so countries that monitor their citizens’ online behaviors may be sacrificing creativity and innovation for the short-term benefit of political control.
What Can We Do?
Firms as well as individuals can take steps to close the privacy gap. One technique often used by firms is to publish a privacy policy on their websites. For example, websites such as Facebook, eBay, Expedia, and iTunes require users to read and assent to a privacy statement before using their services. The objective of these statements is to inform users about privacy risks so users can decide whether and how to interact with the website. However, an important problem with privacy statements is that they are drafted by lawyers and written in dense language that is difficult for ordinary readers to follow. Research shows that complex messages are often ineffective because people either tune them out or misunderstand them (Chen & Chaiken, 1999; Petty & Cacioppo, 1986). So it is not surprising that few people actually read privacy statements before electronically agreeing to them (Martin & Murphy, 2017).
Another reason people sign privacy statements without reading them is the underweighting of rare events (Roth, Wänke, & Erev, 2016). For example, it has been found that people often don’t buy insurance for rare events with catastrophic consequences such as airplane crashes and earthquakes (Erev & Roth, 2014). Similarly, people are likely to underweight rare but serious events such as their private online information being used maliciously. The more such negative outcomes are discounted, the more likely it is that people will thoughtlessly sign privacy statements without understanding the risks being described.
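The arithmetic of underweighting can be made concrete. As a hypothetical illustration (the probability, cost, and benefit figures below are invented for the example, not drawn from any study), the expected cost of a rare breach can rival the immediate benefit of disclosure, yet feel negligible once a small probability is mentally rounded down to zero:

```python
# Hypothetical numbers: a 1-in-1,000 chance of identity theft costing $5,000,
# weighed against a $5 free sample gained by disclosing personal data.
p_breach = 0.001      # assumed probability of serious misuse
cost_breach = 5000    # assumed cost of that misuse, in dollars
gain_sample = 5       # immediate benefit of sharing the data

expected_cost = p_breach * cost_breach
print(expected_cost)       # 5.0 -- on expected value, the trade-off is a wash
print(0 * cost_breach)     # 0   -- but a rare event "rounded to zero" feels free
```

When the left-hand calculation is replaced by the right-hand one, signing away personal data for a trinket looks like pure gain.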
To address these shortcomings of privacy statements, firms sometimes publish shorter versions called privacy certifications. Two types of privacy certifications are commonly used. One is a seal granted by professional organizations such as the International Association of Privacy Professionals. These certifiers audit the privacy policies of client firms and issue a seal that can be electronically affixed to the client’s website confirming that best privacy practices are being followed. Visitors to the website are also given a link to more information about the certification process. Although certifications are good because they are simple and visual, they also introduce a new problem. Just as people don’t read long privacy statements, it is equally unlikely that people will take the trouble to check the bona fides of the firm issuing the certification. As a result, just about any certification – even from unknown or disreputable certifiers – might be accepted as valid, lulling users into thinking that privacy risk has been dealt with. And even when the certifying firm is dependable, there is a conflict of interest between certifiers and clients. Since certifiers are paid by clients, they might be tempted to issue a positive certification even when it is not warranted by the facts. There was a similar conflict of interest between credit rating agencies and banks before the financial crisis of 2008, and we know how that turned out. A more recent approach is to present privacy statements in visual form, such as graphic novels or short online videos. For example, iTunes has used a graphic novel called Terms and Conditions which informs users in an entertaining manner about the privacy implications of using their service. This playful take on privacy was able to shrink the iTunes privacy statement from 20,669 words to a shorter (and illustrated) 7,000 words.
Another type of privacy certification is endorsement by other users of the website, such as “trusted seller” ratings on eBay or Amazon. The higher the average “trusted seller” rating, the less the privacy risk should be for prospective customers. These crowdsourced privacy certifications are useful because previous customers have no obvious reason to lie, and the rating itself is simple and easy to understand. However, this type of certification is useful only if many customers have submitted their ratings so that the sample size is large enough to permit reliable conclusions. Certifications are not useful when data is limited, such as when the seller is new to the market or when few customers have chosen to submit their ratings.
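The sample-size caveat can be sketched with standard errors. A minimal illustration (the spread of ratings is an assumed value, not data from any marketplace): the uncertainty around an average rating shrinks with the square root of the number of raters, so the same average is far more informative from 500 buyers than from 5:

```python
import math

def standard_error(sd, n):
    """Standard error of a mean rating: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Assume the same spread of individual ratings (sd = 1.0 stars) in both cases.
print(round(standard_error(1.0, 5), 2))    # 0.45 -- 5 raters: wide uncertainty
print(round(standard_error(1.0, 500), 2))  # 0.04 -- 500 raters: tight estimate
```

A 4.8-star average from five buyers is consistent with a mediocre seller; the same average from five hundred buyers is not.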
Firms can encourage safer online behavior by increasing the default security settings on browsers and apps. Browsers such as Firefox and Internet Explorer come with security settings that determine whether search histories are retained, cookies stored, pop-ups allowed, and passwords saved. These settings have default values chosen by the firm, which can subsequently be modified by the user. Interestingly, research has shown that people have a strong tendency to stick with defaults provided to them (Brown & Krishna, 2004). Consider the example of organ donations in different countries. Some countries such as the United Kingdom and Germany have an “opt-in” system where the default option is denial of permission to harvest organs after death, and where people have to proactively indicate willingness to donate their organs. In contrast, other countries such as France and Sweden have an “opt-out” system where people are assumed to have given permission for organ donation after death. Of course, people in these latter countries are free to change the default option by notifying authorities of their objection to organ harvesting after death.
It turns out that organ donation rates are very different in these two groups of countries, from a low of around 10 percent for opt-in countries to almost 100 percent for opt-out countries (Johnson & Goldstein, 2003). In other words, the decision to donate one’s organs closely follows the default option: where the default option is to donate, almost everyone donates, but where the default option is not to donate, hardly anyone donates. Of course, one might wonder if these two groups of countries were different in other ways – culture, history, economics, or education – that could explain their different organ donation rates. Although possible, these alternative explanations are not very probable. The two groups of countries were mixed in terms of culture and history, so the effects of these factors should have washed out across the groups. Furthermore, both groups had similar economic and educational indicators, making it unlikely that these factors were driving differences in organ donation rates. This leaves the default option as the most likely reason for the dramatic difference in donation rates between these two groups of countries. The power of default options suggests that firms should set default privacy settings at a high level, since most users will end up keeping these settings and thereby enjoy greater online security. If firms don’t take this action, governments should mandate high default privacy settings. Notably, such a mandate would not reduce people’s freedom of action, because users can always change the default setting at any time.
Another technique for influencing people towards safer online behavior is the privacy disclaimer. An example of privacy disclaimer is the cookie warning, which is now a required element of website and app design in Europe. This warning alerts users that the website is trying to install a cookie on their computer and asks the user to assent to the installation before continuing. Privacy disclaimers can also take the form of a “do not track” button, which allows users to use websites without leaving electronic traces of their visit. A problem with privacy disclaimers is that seeing these warnings regularly on websites and apps could habituate users and thus reduce their attention towards these disclaimers over time. This habituation problem is similar to health warnings on cigarette packages, which often become ineffective over time as people get used to them (White, Bariola, Faulkner, Coomber, & Wakefield, 2015).
So far, we have talked about steps that firms can take to reduce the privacy gap. We, as individuals, can also act more prudently online. First, we could increase the perceived presence of others around us when we go online. People sometimes browse the Internet in the presence of others in coffee shops and libraries. At other times, we browse the Internet alone at home or on the road. When others are visible to us, their presence will work through the availability bias to reduce perceptions of anonymity. And if we think we are less anonymous, we will be more circumspect in our browsing behavior. In contrast, when we surf the Internet alone, the availability bias magnifies our sense of anonymity, which encourages risky behavior. This implies that we should surf the Internet in the physical presence of others as much as possible. When browsing the Internet alone, we could increase the virtual presence of others by using pictures of people – family or friends – as screen savers on our device. Having their virtual eyes on us might reduce perceived anonymity and encourage safer browsing.
Smart devices in our homes – televisions, computers, phones, cars, and refrigerators – are programmed to collect data and automatically forward this to manufacturers. We can often disable this data collection in the settings menu of the device. It is important to remember that the more functions we use in a device, the more information is collected about our usage. Therefore, a good principle would be to deactivate features on products and services that we don’t intend to use. We could also use privacy tools such as Tor, which routes our traffic anonymously, or PGP, which encrypts our messages, to make it technically difficult for others to intercept or read our online communications. However, a problem with these tools is that they require a high degree of technical knowledge. Furthermore, they are not a permanent solution, since encryption technologies quickly become obsolete and require an ongoing learning effort to stay ahead of the curve. A more fundamental solution would be to lower our expectations of privacy when we go online. If the Internet is a transparent medium by design, we could simply accept the fact that our privacy is limited in the virtual world and act accordingly.