PRIVACY
In 1949, George Orwell wrote the dystopian novel Nineteen Eighty-Four, which described a future society in which the government continuously monitored everyone’s actions and conversations.1 Narrow AI technology has now made that level of monitoring possible, and society needs to cope with the consequences.
BIG BROTHER IS WATCHING YOU
Facial recognition is perhaps the AI technology with the most potential for abuse. The Chinese government is in the process of rolling out its Xueliang2 Project, which is connecting security cameras on roads, buildings, and malls to track its 1.4 billion inhabitants.3 The goal is to stop criminal behavior and monitor dissidents.
Figure 10.1 AI-based surveillance.
On the somewhat amusing side, there are reports that Chinese authorities have used the technology to catch and shame jaywalkers4 and toilet paper thieves.5 On the scary side, there is evidence that the Chinese government is using the technology to monitor minorities (e.g., Tibetans, Uighurs) and religious groups (e.g., Falun Gong practitioners6) and to intensify their persecution. Time magazine reports that the authorities harass people if the cameras catch them growing a beard, leaving by their back door, or wearing a veil.7
While few people in the US would be comfortable with a surveillance apparatus as broad as China's, surveillance in the US is widespread and expanding, with both proponents and detractors. By 2021, the top twenty US airports will be using facial recognition to screen all incoming international passengers.8 Here again, detecting terrorists would be a benefit, provided the technology works. However, if it misidentifies law-abiding travelers as terrorists, it could subject innocent passengers to inconvenience or worse, including wrongful arrest. According to data from the Department of Homeland Security (DHS), facial recognition systems erroneously reject as many as one in twenty-five travelers using valid credentials. At this rate, an error-prone DHS face-scanning system could cause 1,632 passengers per day to be wrongfully delayed or denied boarding at New York's John F. Kennedy (JFK) International Airport alone.9 Imagine having your vacation ruined because a facial recognition system incorrectly matches your face to a terrorist as you are checking in at the airport.
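The scale of that estimate follows directly from the error rate. Here is a minimal back-of-the-envelope sketch in Python, assuming a daily international passenger volume at JFK of about 40,800 (a figure chosen so the arithmetic lines up with the numbers cited above; actual volumes vary by day and season):

```python
# Back-of-the-envelope arithmetic behind the JFK estimate.
false_reject_rate = 1 / 25      # DHS: up to 1 in 25 valid travelers rejected
daily_intl_passengers = 40_800  # assumed daily international volume at JFK

wrongly_flagged = daily_intl_passengers * false_reject_rate
print(f"Wrongly delayed or denied boarding per day: {wrongly_flagged:.0f}")
# -> 1632, the figure cited in the text
```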
Police departments in Maryland have used facial recognition software to identify protest participants who had outstanding warrants.10 Maryland police also used the technology to catch robbery and shooting suspects they had captured on camera during the commission of their crimes.11 The FBI has started using facial recognition technology to search driver license photos stored in the Department of Motor Vehicles databases in several states. In 2019, the US Government Accountability Office estimated that law enforcement networks include over 640 million photos of adults,12 which amounts to nearly two images for every person in the US that year.
These uses of facial recognition technology have drawn the ire of civil rights advocates13 and several members of the US Congress, who object that individuals never gave permission for their license photos to be used this way. In a rare show of bipartisanship, both Democratic and Republican members of the US House Oversight and Reform Committee condemned the way law enforcement has been using this technology.14 Still, as of this writing, no law has been passed to prevent it.
Mass shootings, especially in schools, have been on the rise in the US. Officials at the Parkland, Florida, high school where a 2018 massacre killed fourteen students and three teachers have installed 145 cameras with AI monitoring software. Even at Parkland, some students and parents have questioned the invasion of privacy, as well as the software's potential for mistaken identity.15
Another problem with facial recognition software is that most facial recognition databases consist primarily of white males. As a result, training on these databases produces systems that perform well on white males but less well on women and people of color. NIST published a 2019 report evaluating 189 face recognition systems from ninety-nine developers and found that many systems were ten to one hundred times more likely to produce a false match for Black or Asian faces than for white faces.16 The use of these systems would likely result in authorities falsely identifying women and people of color as matches to the faces of suspects far more often than white men. For example, if the systems used for terrorist recognition contain discriminatory biases, minorities may be more likely to be falsely detained or arrested.
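To see what a differential false match rate means in practice, consider a toy version of the kind of per-group audit NIST performed. The trials below are fabricated for illustration; a real evaluation uses millions of image pairs:

```python
from collections import defaultdict

# Each trial: (group, model said "match", images actually show same person).
# All data is fabricated for illustration.
trials = [
    ("white", False, False), ("white", False, False), ("white", True, False),
    ("black", True, False), ("black", True, False), ("black", False, False),
]

false_matches = defaultdict(int)
different_person_pairs = defaultdict(int)

for group, predicted_match, same_person in trials:
    if not same_person:                     # only different-person pairs
        different_person_pairs[group] += 1  # can produce a false match
        if predicted_match:
            false_matches[group] += 1

for group, total in different_person_pairs.items():
    print(f"{group}: false match rate = {false_matches[group] / total:.2f}")
```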
Facial recognition systems can also be fooled by noise in images. People can look at two images and determine whether they show the same person, even if the images have minor distortions, such as graininess or extraneous pen marks. Facial recognition systems are easily fooled by these same distortions. Show such a system images of two different people that contain similar distortions, and it will likely decide, incorrectly, that they show the same person simply because the distortions match.17
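A toy demonstration shows why this happens. Matchers compare numerical feature vectors extracted from each image, and a strong artifact shared by both images can dominate that comparison. The "embeddings" below are random vectors rather than the output of a real face model; the sketch only illustrates the geometry of the failure:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity: +1 means the vectors point the same way."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

person_a = rng.normal(size=128)          # stand-in features for person A
person_b = rng.normal(size=128)          # stand-in features for someone else
distortion = 5.0 * rng.normal(size=128)  # the same heavy artifact in both images

print(f"different people, clean images:      {cosine(person_a, person_b):+.2f}")
print(f"different people, shared distortion: "
      f"{cosine(person_a + distortion, person_b + distortion):+.2f}")
# The shared artifact dominates both vectors, so the similarity jumps
# toward 1.0 and a threshold-based matcher may declare a (false) match.
```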
Amazon sells commercial facial recognition software named Rekognition.18 In 2018, a group of forty organizations led by the American Civil Liberties Union (ACLU) sent a letter to Amazon CEO Jeff Bezos demanding that Amazon stop the sale of its facial recognition technology to government organizations. The primary concern is that people should be free to walk down the street without Orwell-style government surveillance. A nonprofit organization, Ban Facial Recognition, offers an online interactive map19 of where facial recognition technology is being used in the US by police departments and airports. It also shows where municipalities have enacted laws against the use of facial recognition. In addition to the possibility of incorrect identification, the Ban Facial Recognition website20 argues that the use of this technology violates the US Constitution’s Fourth Amendment right against search without a warrant and that it supercharges discrimination.
To drive this point home with lawmakers, in 2018 the ACLU submitted images of members of the US Congress to Amazon's Rekognition software. The software falsely identified the faces of twenty-eight members as matches to the faces of criminal suspects. Worse, 40 percent of the members incorrectly identified were people of color, even though 80 percent of congressional members are white.21 The ACLU issued a position paper on surveillance technology in which it expressed concern that the technology has the potential to "worsen existing disparities in treatment suffered by people of color and the poor by embedding, amplifying, and hiding biases."
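One detail of tests like this is the similarity threshold. The ACLU reportedly ran Rekognition at its default setting of 80 percent, and Amazon responded that it recommends a much higher threshold for law enforcement use. Here is a minimal sketch of a CompareFaces call through the boto3 library; the image filenames are hypothetical, and configured AWS credentials are assumed:

```python
import boto3

client = boto3.client("rekognition")

# Hypothetical images: the probe photo and the scene to search.
with open("suspect.jpg", "rb") as f:
    source_bytes = f.read()
with open("crowd.jpg", "rb") as f:
    target_bytes = f.read()

response = client.compare_faces(
    SourceImage={"Bytes": source_bytes},
    TargetImage={"Bytes": target_bytes},
    SimilarityThreshold=99.0,  # the default is 80; higher cuts false matches
)

for match in response["FaceMatches"]:
    print(f"Face matched with similarity {match['Similarity']:.1f}%")
```

Raising the threshold trades missed matches for fewer false accusations, which is exactly the trade-off lawmakers are wrestling with.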
As of this writing, bills are pending in the US Congress and in many state and local legislatures to ban the use of facial recognition technology in law enforcement. In 2019, the state of California enacted a three-year ban on the use of facial recognition in police body cameras.22 A bill pending in the US Congress would ban the use of facial recognition in public housing.23 And the European Union is considering a five-year ban on the use of facial recognition software.24 In 2020, Amazon, Microsoft, and IBM all decided to at least temporarily suspend sales of facial recognition software to law enforcement agencies.
The bottom line here is that mistakes by facial recognition systems can be inconvenient or worse.25 No one wants to be jailed or detained at an airport because a facial recognition system incorrectly matched their image to that of a terrorist or criminal. As a society, we must weigh the benefits of catching terrorists and criminals against the individual consequences of facial recognition errors. When data issues cause these errors to fall more heavily on minorities, the problem is worse: it becomes discrimination, which can only be fixed by banning the use of facial recognition systems in law enforcement or by ensuring that training tables are free from bias.
Surveillance systems using facial recognition can also become an invasion of privacy. Across the board, we need to find the right balance between catching terrorists and criminals and respecting the privacy of citizens.
DATA PRIVACY
Internet-connected systems collect massive amounts of data from surveillance cameras, phone conversations, social media, email, e-commerce, retail sales records, and many other sources. Every click on a webpage, every Google search, every Facebook like, and every tweet is captured, tracked, and associated with an individual. There are huge privacy issues around the handling of all this data.
When we visit a new site, we often agree to a pop-up privacy policy that contains ten pages of legalese. To expect any of us, attorneys included, to read and understand every privacy policy we encounter is unrealistic. Carnegie Mellon University researchers estimated back in 2008 that it would take each of us seventy-six days out of a year to do so.26 Instead, most of us do not read the policy. We just click the Accept button. As a result, major corporations have access to a great deal of our private data. For example, Google has gained access to tens of millions of patient health records in the US. It is contractually allowed to access many of these records without the permission of the doctor or patient.27
Mathematician Hannah Fry, in her book Hello World: Being Human in the Age of Algorithms,28 describes a "creepy" episode involving a Chrome extension named Web of Trust. The extension's privacy policy clearly states that it will anonymously record your entire browsing history. But even though the data is anonymized, the recorded URLs contain clues to each person's identity. For example, when someone visits their own LinkedIn or Twitter page, the URL often includes the person's name.
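Here is a minimal sketch of how such an "anonymous" history leaks identity. Profile pages on many sites embed a username directly in the address, so a simple pattern match recovers it; the URLs below are made up:

```python
import re

# A fabricated "anonymous" browsing history.
history = [
    "https://www.linkedin.com/in/jane-doe-12345/",
    "https://twitter.com/janedoe",
    "https://example.com/article/98765",
]

# Patterns for sites that put usernames in their profile URLs.
patterns = [
    re.compile(r"linkedin\.com/in/([^/?]+)"),
    re.compile(r"twitter\.com/([^/?]+)$"),
]

for url in history:
    for pattern in patterns:
        match = pattern.search(url)
        if match:
            print(f"Identity clue {match.group(1)!r} found in {url}")
```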
Corporations and governmental organizations acquire this data and use it in ways we might not like. A Target analysis of its massive customer database reportedly identified a teenager as pregnant before her parents knew.29 A Target analyst had discovered that women on the baby registry often bought large quantities of certain products, such as unscented lotion and vitamin supplements, and used those purchases in a pregnancy-prediction algorithm to drive a marketing campaign directed at pregnant women, including this teen. The teenager's father was at first outraged at Target for sending his daughter such materials, but he calmed down when he found out his daughter was in fact pregnant.
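A toy version of such a purchase-based predictor might look like the sketch below. The products, weights, and threshold are invented for illustration; Target never published the details of its actual model:

```python
# Invented weights for products that (hypothetically) signal pregnancy.
PREGNANCY_WEIGHTS = {
    "unscented lotion": 0.4,
    "vitamin supplements": 0.3,
    "extra-large bag of cotton balls": 0.2,
}

def pregnancy_score(basket):
    """Sum the weights of any signal products found in a shopping basket."""
    return sum(PREGNANCY_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "vitamin supplements", "bread"]
score = pregnancy_score(basket)
if score > 0.5:  # invented threshold
    print(f"Score {score:.1f}: add shopper to the baby-products mailing list")
```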
Airlines and other travel-related businesses use data to predict the socioeconomic status of the person accessing the site and offer higher prices to people the algorithms predict are wealthier.30 Credit card companies use similar data to raise or lower customer credit limits. One piece of litigation claimed a credit card company lowered credit limits based on visits to massage parlors, marriage counselors, and pawn shops.31 Imagine what insurance companies, retailers, and law enforcement could do with the data collected by self-driving cars, which will know everywhere you drive. Even worse, these cars have interior cameras that record what you do inside the vehicle.
The Cambridge Analytica (CA) scandal was made possible by weak protections on Facebook data. CA paid Facebook users five dollars to take a personality survey. The first step in the survey was for users to grant access to their Facebook profile. Via that access, CA was able to cull information not only about the user, but also, due to weak privacy controls, about all the user’s friends. CA used this information to create psychological profiles on perhaps as many as 87 million people using machine learning techniques.32 They then targeted these users with social media ads that many believe influenced the 2016 US elections. The critical point here is that privacy regulations could have and should have prevented this breach of privacy and the subsequent influencing of elections.
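The profiling technique itself, described publicly in academic work on Facebook likes, is conceptually simple: treat each like as a feature and fit a model against personality scores gathered from the survey takers, then apply the model to people who never took the survey. Here is a toy sketch with fabricated data:

```python
from sklearn.linear_model import LinearRegression

# Rows are users; columns indicate whether each user liked pages A, B, C.
# All data is fabricated for illustration.
likes = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
]
extraversion = [0.8, 0.2, 0.9, 0.5]  # trait scores from the survey takers

model = LinearRegression().fit(likes, extraversion)

# Predict the trait for a user who never took the survey.
print(model.predict([[1, 0, 0]]))
```

Scaled up to millions of users and tens of thousands of pages, the same idea can turn a five-dollar survey into psychological profiles of tens of millions of people.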
In response to public outcry and government regulations such as the European Union's General Data Protection Regulation (GDPR), corporations now offer consumers some privacy settings for this data. However, vendors often undermine these settings with misleading wording, hidden privacy-friendly choices, anti-privacy defaults, and privacy-friendly options that take too much effort to locate and use.33 Consumers also have difficulty making informed consent choices because corporations often do not, and often cannot (because of uninterpretable algorithms), explain how they will use this data.
One solution is to build these technologies in a way that keeps identities (e.g., of people and vehicles) anonymous. However, this has not worked well for other internet-connected data. Vendors have learned how to correlate anonymous data with personally identifiable data. Data vendor Acxiom claims to have global data on 2.5 billion individuals.34 Its data comes from tracking user browsing habits, government databases such as voter registration records, and criminal records. Some uses of data are constrained by governmental regulations, such as the US Fair Credit Reporting Act, and by data privacy laws like the GDPR. However, much use is unregulated, which has led psychologist Shoshana Zuboff, in her book The Age of Surveillance Capitalism,35 to argue persuasively that capitalism is mutating in a way that gives corporations immense power that, left unchecked, will lead to vast inequality in society.
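One well-known way to defeat anonymity is a linkage attack: join an "anonymized" dataset to a public record, such as voter rolls, on quasi-identifiers like ZIP code, birth date, and sex. Here is a small sketch with fabricated records:

```python
import pandas as pd

# Fabricated "anonymized" data: no names, but quasi-identifiers remain.
anonymous_browsing = pd.DataFrame([
    {"zip": "06511", "birth_date": "1980-03-14", "sex": "F",
     "urls_visited": "pawn-shop.example; clinic.example"},
])

# Fabricated public record with the same quasi-identifiers plus a name.
voter_rolls = pd.DataFrame([
    {"name": "Jane Doe", "zip": "06511",
     "birth_date": "1980-03-14", "sex": "F"},
])

# If the quasi-identifier combination is unique enough, the join
# re-identifies the "anonymous" record.
reidentified = anonymous_browsing.merge(voter_rolls,
                                        on=["zip", "birth_date", "sex"])
print(reidentified[["name", "urls_visited"]])
```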
Although AI enhances the ability to collect and analyze data, privacy is a big data issue, not an AI issue per se. AI technology makes it easier to analyze big data. However, the problem results from the widespread availability of personal data and not from the algorithms used to analyze it. We can only solve privacy issues through government regulation and perhaps through consumers voting with their dollars. The European Union has taken the lead on privacy issues, and other governmental bodies should follow.
AT WHAT COST?
Facial recognition technology puts powerful surveillance tools into the hands of governments and law enforcement agencies. The use of this technology provides some protection against terrorists and criminals at the expense of our privacy. Lawmakers will need to find a balance, though. Data privacy is an important issue that is already the subject of numerous regulations. However, it is not really an AI issue. AI only makes it easier to analyze the data.
Facial recognition technology gives governments the ability to completely take away our privacy, and it is prone to discrimination. If we want to avoid becoming a surveillance state, where anyone can be arrested for being in the wrong place or for being the “wrong” color, we need laws that rein in how governments use AI-based surveillance tools. However, the tools themselves are not a threat.