Chapter 12

AI AND FACIAL RECOGNITION: Do Our Faces Deserve the Same Protection as Our Phones?

In June 2002, Steven Spielberg premiered a new movie he had directed, Minority Report, based on a famous 1956 short story by the science fiction writer Philip K. Dick. Set in 2054 in a crime-free Washington, DC, the film stars Tom Cruise, who plays the head of Precrime, an elite police unit that arrests killers before they commit their crimes. The team has the authority to make its arrests based on the visions of three clairvoyant individuals who can see into the future. But soon Cruise is evading his own unit—in a city where everyone and everything is tracked—when the psychics predict he will commit a murder of his own.1

More than fifteen years later, this approach to law enforcement happily seems far-fetched. But today, one aspect of Minority Report seems to be on track to arrive much earlier than 2054. As Cruise is on the run, he walks into the Gap. The retailer has technology that recognizes each entering customer and immediately starts displaying on a kiosk the images of clothes it believes the customer will like. Some people might find the offers attractive. Others might find them annoying or even creepy. In short, entering a store becomes a bit like the feeling we sometimes have after browsing the web and then turning to our social media feed, only to find new ads promoting what we just viewed.

In Minority Report, Spielberg asked theatergoers to think about how technology could be both used and abused—to eliminate crimes before they could be committed but also to abuse people’s rights when things go wrong. The technology that recognizes Cruise in the Gap store is informed by a chip embedded inside him. But the real-world technology advances of the first two decades of the twenty-first century have outpaced even Spielberg’s imagination, as today no such chip is needed. Facial-recognition technology, utilizing AI-based computer vision with cameras and data in the cloud, can identify the faces of customers as they walk into a store based on their visit last week—or an hour ago. It is creating one of the first opportunities for the tech sector and governments to address ethical and human rights issues for artificial intelligence in a focused and concrete way, by deciding how facial recognition should be regulated.

What started for most people as a simple scenario, such as cataloging and searching photos, has rapidly become much more sophisticated. Already many people have become comfortable relying on facial recognition rather than a password to unlock an iPhone or a Windows laptop. And it’s not stopping there.

A computer can now accomplish what almost all of us as human beings have done almost since birth—recognize people’s faces. For most of us, this probably began with the ability to recognize our mother. One of the joys of parenting comes when a toddler erupts enthusiastically when you return home. This reaction, which lasts until the onset of the teenage years, relies on the innate facial-recognition capabilities of human beings. While this is fundamental to our daily lives, we almost never pause to think about what makes it possible.

As it turns out, our faces are as unique as our fingerprints. Our facial characteristics include the distance of our pupils from each other, the size of our nose, the shape of our smile, and the cut of our jaw. When computers use photographs to chart these features and knit them together, they create the foundation for a mathematical equation that can be accessed by algorithms.
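
To make that idea concrete, here is a minimal sketch of how a matching algorithm might compare two faces once their features have been distilled into numbers. It assumes a hypothetical embedding step has already turned each photograph into a feature vector; the vectors and the similarity threshold are illustrative rather than drawn from any real system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face feature vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a match when two feature vectors are close enough.

    The threshold here is illustrative; real systems tune it to balance
    false matches against false non-matches.
    """
    return cosine_similarity(probe, gallery) >= threshold

# Illustrative vectors standing in for the output of a face-embedding model.
face_at_the_door = np.array([0.12, 0.87, 0.33, 0.54])
face_on_file = np.array([0.10, 0.90, 0.31, 0.50])
print(same_person(face_at_the_door, face_on_file))  # True: the vectors are close
```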

People are putting this technology to work around the world in ways that will make life better. In some cases, it may be a matter of consumer convenience. National Australia Bank, using Microsoft’s facial-recognition technology, is developing the capability for you to walk up to an automated teller machine, or ATM, so you can withdraw money securely without a bank card. The ATM will recognize your face and you can then enter your PIN and complete your transaction.2

In other scenarios, the benefits are more far-reaching. In Washington, DC, the National Human Genome Research Institute is using facial recognition to help physicians diagnose a disease known as DiGeorge syndrome, or 22q11.2 deletion syndrome. It’s a disease that more often afflicts people who are African, Asian, or Latin American. It can lead to a variety of severe health problems, including damage to the heart and kidneys. But it also often manifests itself in subtle facial characteristics that can be identified by computers using facial-recognition systems, which can help a doctor diagnose a patient in need.3

These scenarios illustrate important and concrete ways that facial recognition can be used to benefit society. It’s a new tool for the twenty-first century.

Like so many other tools, however, it can also be turned into a weapon. A government might use facial recognition to identify every individual attending a peaceful rally, following up in ways that could chill free expression and the ability to assemble. And even in a democratic society, the police might rely excessively on this tool to identify a suspect without appreciating that facial recognition, like every technology, doesn’t always work perfectly.

For all these reasons, facial recognition easily becomes intertwined with broader political and social issues and raises a vital question: What role do we want this form of artificial intelligence to play in our society?

A glimpse of what lies ahead emerged suddenly in the summer of 2018, in relation to one of the hottest political topics of the season. In June, a gentleman in Virginia, a self-described “free software tinkerer” who clearly also had a strong interest in broader political issues, posted a series of tweets about a contract Microsoft had with US Immigration and Customs Enforcement, or ICE, based on a story posted on the company’s marketing blog in January.4 It was a post that frankly everyone at the company had forgotten. But it said that Microsoft’s technology for ICE had passed a high security threshold and would be deployed by the agency. It said the company was proud to support the agency’s work, and it included a sentence about the resulting potential for ICE to use facial recognition.5

In June 2018, the Trump administration’s decision to separate children from parents at the southern US border had become an explosive issue. A marketing statement made several months earlier now looked a good deal different. And the use of facial-recognition technology looked different as well. People worried about how ICE and other immigration authorities might put something like facial recognition to work. Did this mean that cameras connected to the cloud could be used to identify immigrants as they walked down a city street? Did it mean, given the state of this technology, with its risk of bias, that it might misidentify individuals and lead to the detention of the wrong people? These were but two of many questions.

By dinnertime in Seattle, the tweets about the marketing blog were tearing through the internet, and our communications team was working on a response. Some employees on the engineering and marketing teams suggested that we should just pull the post down, saying, “It is quite old and not of any business impact at this point.”

Three times, Frank Shaw, Microsoft’s communications head, advised them not to take it down. “It will only make things worse,” he said. Nonetheless, someone couldn’t resist the temptation and deleted part of the post. Sure enough, things then got worse and another round of negative coverage followed. By the next morning, people had learned the obvious lesson and the post was back up in its original form.

As so often happens, we had to sort out what the company’s contract with ICE really covered.

As we dug to the bottom of the matter, we learned that the contract wasn’t being used for facial recognition at all. Nor, thank goodness, was Microsoft working on any projects to separate children from their families at the border. The contract instead was helping ICE move its email, calendar, messaging, and document management work to the cloud. It was similar to projects we were working on with customers, including other government agencies, in the United States and around the world.

Nonetheless, a new controversy was born.

Some suggested that Microsoft cancel our contract and cease all work with ICE, a persistent theme about government use of technology that would take hold that summer. One group of employees circulated a petition to halt the ICE contract. The issue began to roil the tech sector more broadly. There was similar employee activism at the cloud-based software company Salesforce, focused on its contract with US Customs and Border Protection. This followed employee activism at Google, which had led the company to cancel a project to develop artificial intelligence for the US military. And the ACLU targeted Amazon, backing Amazon employees who voiced concern about Rekognition, its facial-recognition service.6

For the tech sector and the business community more broadly, this type of employee activism was new. Some saw a connection to the role that unions had played in certain industries for well over a century. But unions had focused principally on the economic and working conditions of their members. Employee activism in the summer of 2018 was different. This activism called on employers to adopt positions on specific societal issues. The employees had nothing directly or even indirectly to gain. They instead wanted their employers to stand up for societal values and positions that they thought were important.

It was helpful for us to take stock of the different reactions to this new wave of employee activism. Just a few miles away in Seattle, the leaders at Amazon seemed to do less to engage directly with employees to discuss these types of issues.7 That reaction appeared to dampen some of the employee interest in raising issues, in effect encouraging people to keep their heads down and focused on business. In Silicon Valley, the leaders at Google took a very different approach, sometimes responding quickly to employee complaints by reversing course, including by pulling the plug on an AI-focused military contract.8 It was quickly apparent that there was no single approach, and every company needed to think about its own culture and what it wanted in terms of its connection to its employees. As we thought about our own culture, we decided to chart a path between the approaches we were watching elsewhere.

These episodes seemed to reflect several important developments. First and perhaps most important was the rising expectation that employees had for their employers. This had been captured well a few months earlier when the annual Edelman Trust Barometer identified the change.9 The Edelman communications firm has been publishing its Trust Barometer since 2001, identifying changes in the public mood around the world as people’s trust in institutions waxes and wanes. Its report in early 2018 showed that while trust in many institutions had plummeted, employee confidence in employers was a big outlier. It found that worldwide 72 percent of people trusted their employer “to do what is right,” with an even higher 79 percent feeling that way in the United States.10 In contrast, only a third of Americans felt that way about their government.

What we were experiencing reflected this view and went even further. In the tech sector, some employees wanted to play an active role in shaping their companies’ decisions and engagement on the issues of the day. Perhaps not surprisingly, this view was more pronounced at a time when people had less trust in governments. Employees were looking to another institution they hoped might do the right thing and have some influence on public outcomes.

The change thrust business leaders into new terrain. At a small dinner I attended in Seattle, the CEO of one tech company summed up the collective angst. “I feel well prepared for most of my job,” he said, describing how he’d risen up the ranks. “But now I’m being thrust into something completely different. I really don’t know how to respond to employees who want me to take on their concerns about immigration, climate issues, and so many other problems.”

Perhaps not surprisingly, the phenomenon was most pronounced among our newest generation of employees. After all, there’s a well-established tradition of students clamoring for societal change on college campuses, at times pushing their universities to lead the way by changing their policies. Because it was summer, we had roughly three thousand interns working on the Microsoft campus. Naturally, they took a strong interest in the issue. Some wanted to have a direct impact on the company’s position even if they were just spending the summer with us.

We talked about how to think through the topic and respond. As Satya and I compared notes, I reflected on what I had learned serving on Princeton University’s board of trustees. “I think leading a tech company is becoming more like leading a university,” I said. “We have researchers with PhDs who are like the faculty. We have interns and young employees who sometimes have views similar to university students. Everyone wants to be heard, and some want us to boycott a government agency much like they want a university to boycott the purchase of stock in a company that’s doing something objectionable.”

For me, there had been a couple of key takeaways from my trustee experience. Perhaps the most important was that well-intentioned students might not have all the right answers, but they might be asking the right questions. And these questions could lead to a better path that had eluded experts and senior leaders alike. As I like to say to our teams within the company, the best response to a half-baked idea often is not to kill the idea, but to finish baking it. Some of our best initiatives came together this way. And it built on the culture that Satya had fostered for Microsoft, grounded in a growth mind-set and constant learning. In short, if a new era of employee activism was dawning, it would be important for us to find new ways to engage with our employees, understand their concerns, and try to develop a thoughtful answer.

I had also learned from my Princeton experience that universities had developed some sound processes to meet this need. They created opportunities for everyone to have input and for more collaborative discussion. This allowed emotions to subside and encouraged reason to prevail, helping a group think through and make a difficult decision with the time needed to get it right. We set out on this path, and Eric Horvitz, Frank Shaw, and Rich Sauer, our senior lawyer responsible for AI ethics issues, started holding a series of roundtables that employees could attend.

It became increasingly important to spell out when we thought it made sense for the company to take a position on a public issue and when we should not. We didn’t view corporate leadership as a license to use the company’s name to address any issue under the sun. There needed to be some vital connection to us. We felt our responsibility was fundamentally to address public issues that impacted our customers and their use of our technology, our employees both at work and in their community, and our business and the needs of our shareholders and partners. This didn’t answer every question, but it provided a useful framework for discussions with our employees.

Employee questions also pushed us in a constructive way to think harder about our relationship with the government and the challenges posed by new technology such as facial recognition.

On the one hand, we were not comfortable with the suggestion that we react to the events of the day by boycotting government agencies, especially in democratic societies governed by the rule of law. In part this was a principled reaction. As I often tried to remind people, no one elected us. It seemed not just odd but undemocratic to want tech companies to police the government. As a general principle, it seemed more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government. Satya and I discussed this point frequently and we believed it was important.

There was a pragmatic aspect as well. We recognized the enormous dependence that organizations and individuals had on our technology. It was far too easy to unleash chaos and unintended consequences if we simply turned technology off based on an objection to something a government agency was doing.

This pragmatic dimension was thrust into bold relief in August 2018. As I drove to work on a Friday morning, I listened to an account on The Daily podcast from the New York Times that got to the heart of the matter. The issue of the day was the government’s inability to meet a court deadline to reunite immigrant children with their families. As I listened, I recognized the voice of Wendy Young, who leads Kids in Need of Defense, or KIND, a pro bono organization I have chaired for more than a decade.11 As Wendy explained, the administration had implemented the initial family separation policy “with no thought given to how you reunify families” later.12

While I was familiar with this situation based on several conversations with Wendy, I was struck by an additional detail reported by New York Times journalists Caitlin Dickerson and Annie Correal. They explained that Customs and Border Protection personnel used a computer system with a drop-down menu when people initially crossed the border. Agents would classify someone as an unaccompanied minor, an individual adult, or an adult with children, meaning a family unit. When children subsequently were separated from their parents, the computer system’s design forced agents to go back and change this designation, for example by entering a child’s name as an unaccompanied minor and the parent’s name as an individual adult. Critically, this overwrote the prior data, meaning the system no longer retained the family designation that previously had listed everyone together. As a result, the government no longer had any record that connected family members.
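
The design flaw is easier to see in code. The sketch below is hypothetical, a simplified stand-in for the system the reporters described rather than a depiction of the actual software, but it captures the essential problem: when a classification field is simply overwritten instead of being recorded alongside its history, the link between family members disappears with it.

```python
from dataclasses import dataclass, field

@dataclass
class IntakeRecord:
    """A simplified, hypothetical model of a border-intake record."""
    names: list
    classification: str      # e.g., "family unit", "unaccompanied minor", "individual adult"
    history: list = field(default_factory=list)

def reclassify_destructively(record: IntakeRecord, new_names: list, new_class: str) -> None:
    # Mirrors the reported behavior: the prior designation is replaced outright,
    # so the record that once listed parent and child together no longer exists.
    record.names = new_names
    record.classification = new_class

def reclassify_with_history(record: IntakeRecord, new_names: list, new_class: str) -> None:
    # A design that preserves an audit trail keeps the old designation,
    # so the family linkage can still be reconstructed later.
    record.history.append((record.classification, list(record.names)))
    record.names = new_names
    record.classification = new_class
```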

This was not only a story about immigration and families. It was also a story about technology. The government was using a structured database that worked for one process but not for another. Rather than update the IT system to support the new steps involved in separating families, the administration had plunged ahead without thinking about the computer architecture that would be needed. Having seen CBP’s systems at a command center near the Mexican border on a visit with Wendy just months before, I was not surprised that its systems were antiquated. But I was still horrified that the administration had failed to think about the implications of what it needed in terms of basic technology infrastructure.

When I walked into the conference room that morning where Satya’s senior leadership team was gathering for our Friday meeting, I shared what I had heard. As we talked about it, we recognized that it connected to our broader concerns about the proposition advocated by some that tech companies take it upon themselves to unplug government agencies from all services based on policies to which we object. Technology has become a key infrastructure of our lives, and the failure to update it—or worse, a decision simply to unplug it—could have all kinds of unintended and unforeseen consequences. As Satya had noted several times in our internal conversations, the government was using email as one tool to bring families back together. If we shut it off, who knew what would happen?

This led us to conclude that boycotting a government agency in the United States was the wrong approach. But the people advocating for such action, including some of our own employees, were asking some of the right questions. Facial-recognition technology, for example, created challenges that needed more attention.

As we thought it through, we concluded that this new technology should be governed by new laws and regulations. It’s the only way to protect the public’s need for privacy and address risks of bias and discrimination while enabling innovation to continue.

To many, it was odd for a company to call on the government to regulate its products. John Thompson, our board chair, said that some people in Silicon Valley told him that they assumed we were behind other companies in the market and wanted regulation to slow our competitors down. This made me bristle. To the contrary, in 2018 the National Institute of Standards and Technology completed another round of facial-recognition testing, finding that our algorithms were at or near the top in every category.13 While forty-four other companies had provided their technology for testing, many others, including Amazon, had not.

Our interest in regulation came from our emerging sense of where the market was heading. A few months earlier, one of our sales teams had wanted to sell an AI solution that included facial-recognition services to the government of a country that lacked an independent judiciary and had a less than stellar track record for respecting human rights. The government wanted to deploy the service with cameras across its capital city. Our concern was that a government that flouted human rights could use the technology to follow anyone anywhere—or everyone everywhere.

With the advice of our internal AI ethics committee, we decided we would not move forward with the proposed deal. The committee had recommended that we draw a line and refrain from making facial-recognition services available for generalized use in countries that Freedom House, an independent watchdog that tracks freedom and democracy around the world, had concluded were not free. The local team was not happy. As the person responsible for the final call, I received an impassioned email from the head of the sales team that had been working on the deal. She wrote that “as a mother and a professional,” she “would have felt much safer” if we had made the service available to counter risks of violence and acts of terror.

I understood her point. It underscored the difficult trade-offs that have long characterized the tension between public safety and human rights. It also illustrated the subjective nature of many of the new ethical decisions that will be made for artificial intelligence. And, of course, we remained concerned that, as she and others had pointed out, if we refused to provide this service, some other company might step in. In that case, we would both lose the business and watch from the sidelines as someone else facilitated the harmful use despite our position. But as we balanced all these factors, we concluded that we needed to try to nudge the development of this new technology toward some type of ethical foundation. And the only way to do this was to turn down certain uses and push for a broader public discussion.

This need for a principled approach was reinforced when a local police force in California contacted us and said they wanted to equip all their cars and body cameras with a capability to take a photo of someone pulled over, even routinely, to see if there was a match against a database of suspects for other crimes. We understood the logic but advised that facial-recognition technology remained too immature to deploy in this type of scenario. Use of this nature, at least in 2018, would result in too many false positives and flag people who had been wrongly identified, especially if they were people of color or women, for whom there remained higher error rates. We turned down the deal and persuaded the police force to forgo facial recognition for this purpose.

These experiences started to provide some insights into principles we could apply to facial recognition. But we worried that there would be little practical impact if we took the high road only to be undercut by companies that imposed no safeguards or restrictions at all, whether those companies were on the other side of Seattle or on the other side of the Pacific. Facial recognition, like so many AI-based technologies, improves with larger quantities of data. This creates an incentive to do as many early deals as possible and hence the risk of a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success.

The only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

We drew insights from the historical regulation of other technologies. There are many markets in which a balanced approach to regulation has created a healthier dynamic for consumers and producers alike. The auto industry spent decades in the twentieth century resisting calls for regulation, but today there is broad appreciation of the essential role that laws have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, food, and pharmaceuticals.

Of course, it was one thing to talk about the need for regulation and another to define what type of regulation would be most sensible. In July 2018, we published a list of questions that we thought needed to be considered14 and asked people for advice about possible answers. The discussions started with employees and technology experts, but quickly expanded across the country and around the world, including civil liberties groups like the ACLU, which was playing an active role on the issue.

I was particularly struck by the reaction of legislators I met with in the National Assembly in Paris. As one member said, “No other tech company is asking us these questions. Why are you different?” Facial recognition was the type of issue where we sometimes diverged from others in the tech sector. Perhaps more than anything else, this reflected what we had learned from our antitrust battles in the 1990s. At that time, we had argued, like many companies and industries, that regulation was unnecessary and likely to be harmful. But one of the many lessons we’d learned from that experience was that such an approach didn’t necessarily work—or would be regarded as unacceptable—for products that have a sweeping impact across society or that combine beneficial and potentially troubling uses.

We no longer shared the resistance that most tech companies traditionally had shown for government intervention. We’d already fought that battle. Instead we had endorsed what we thought of as a more active but balanced approach to regulation. That was one reason we called for federal privacy legislation in the United States as early as 2005. We knew there would be days when the government would get the details wrong and when we might regret advocating its involvement. But we believed this general approach would be better for technology and society than a practice that relied exclusively on the tech sector to sort everything out by itself.

The key was to figure out the specifics. A piece by Nitasha Tiku in Wired captured the importance of this dynamic. As she noted toward the end of 2018, “After a hellish year of tech scandals, even government-averse executives have started professing their openness to legislation.”15 But, as she recognized, our goal was to take “it one step further” by putting forward a specific proposal for governments to regulate facial-recognition technology.

By December we felt we had learned enough to suggest new legislation. We knew we didn’t have answers for every potential question, but we believed there were enough answers for good initial legislation in this area that would enable the technology to continue to advance while protecting the public interest. We thought it was important for governments to keep pace with this technology, and an incremental approach would enable faster and better learning across the public sector.

In essence, we borrowed from a concept that has been championed for start-up companies and software development, referred to as a “minimum viable product.” As defined by entrepreneur and author Eric Ries, it advocates creating “an early version of a new product that allows a team to collect the maximum amount of validated learning (learning based on real data gathering rather than guesses about the future) about customers.”16 In other words, don’t wait until you have the perfect answer to every conceivable question. If you are confident that you have reliable answers to critical questions, act on them, build your product, and get it into the market so you can learn from real-world feedback. It’s an approach that has enabled not just businesses but technology to move faster and more successfully.

Even while moving more quickly, it’s critical to be thoughtful and confident that the initial steps will be positive. In this case, we believed we had a strong set of ideas to address facial recognition. I publicly made our case for new legislation at the Brookings Institution in Washington, DC,17 and published more details about our proposal.18 We then took the cause on the road, presenting it over the next six months at public events and legislative hearings across the United States and in eight other countries around the world.

We believed that legislation could address three key issues—the risk of bias, privacy, and the protection of democratic freedoms. We believed that a well-functioning market could help accelerate progress to reduce bias. No customer we encountered was interested in buying a facial-recognition service that had high error rates and resulted in discrimination. But the market couldn’t function if customers lacked information. Just as groups such as Consumer Reports had informed the public about issues like auto safety, we believed academic and other groups could test and provide information on the accuracy of competing facial-recognition services. This would further empower researchers like Joy Buolamwini at the Massachusetts Institute of Technology to pursue research that would prod us along. The key was to require companies that participated in the market to make it possible to test their products. That’s what we proposed, in effect using regulation to reinforce the market.19
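
As a rough illustration of what that kind of public testing could measure, the sketch below computes false-match and false-non-match rates by demographic group from a labeled benchmark. The data format, group labels, and sample results are assumptions made for the example, not a description of any actual testing program or vendor.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Compute per-group error rates from (group, predicted_match, actual_match) tuples.

    A false match flags two different people as the same person;
    a false non-match fails to recognize that two images show the same person.
    """
    counts = defaultdict(lambda: {"fm": 0, "fnm": 0, "pos": 0, "neg": 0})
    for group, predicted, actual in results:
        if actual:
            counts[group]["pos"] += 1
            if not predicted:
                counts[group]["fnm"] += 1
        else:
            counts[group]["neg"] += 1
            if predicted:
                counts[group]["fm"] += 1
    return {
        group: {
            "false_match_rate": c["fm"] / c["neg"] if c["neg"] else 0.0,
            "false_non_match_rate": c["fnm"] / c["pos"] if c["pos"] else 0.0,
        }
        for group, c in counts.items()
    }

# Hypothetical benchmark results: (demographic group, system said match, ground truth)
sample = [("group A", True, True), ("group A", True, False),
          ("group B", False, True), ("group B", True, True)]
print(error_rates_by_group(sample))
```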

To help reduce the risk of discrimination, we believed a new law should also require organizations that deploy facial recognition to train employees to review results before making key decisions—rather than just turning decision-making over to computers.20 Among other things, we were concerned that the risks of bias could be exacerbated when organizations deployed facial recognition in a manner that is different from what was intended when the technology was designed. Trained personnel could help address this problem.

In some ways, a thornier question was when law enforcement should be permitted to use facial recognition to engage in ongoing surveillance of specific individuals as they go about their day.

Democracy has always depended on the ability of people to meet and talk with each other and even to discuss their views both in private and in public. This relies on people being able to move freely and without constant government surveillance.

There are many governmental uses of facial-recognition technology that protect public safety and promote better services for the public without raising these types of concerns.21 But when combined with ubiquitous cameras and massive computing power and storage in the cloud, facial recognition could be used by a government to enable continuous surveillance of specific individuals. It could do this at any time or even all the time. Used in this way, the technology could unleash mass surveillance on an unprecedented scale.

As George Orwell described in his novel 1984, one vision of the future would require citizens to evade government surveillance by finding their way secretly to a blackened room to tap in code on each other’s arms—because otherwise cameras and microphones will capture and record their faces, voices, and every word. Orwell sketched that vision nearly seventy years ago. We worried that technology now makes that type of future possible.

The answer, in our view, was for legislation to permit law enforcement agencies to use facial recognition to engage in ongoing surveillance of specific individuals only when it obtains a court order such as a search warrant for this monitoring or when there is an emergency involving imminent danger to human life. This would create rules for facial-recognition services that are comparable to those now in place in the United States for the tracking of individuals through the GPS locations generated by their cell phones. As the Supreme Court had decided in 2018, the police cannot obtain without a search warrant the cell phone records that show the cell sites, and hence the physical locations, where someone has traveled.22 As we put it, “Do our faces deserve the same protection as our phones? From our perspective, the answer is a resounding yes.”23

Finally, it was apparent that the regulation of facial recognition should protect consumer privacy in the commercial context as well. We’re rapidly entering an era in which every store can install cameras connected to the cloud with real-time facial-recognition services. From the moment you step into a shopping mall, it’s possible not only to be photographed but to be recognized by a computer wherever you go. The owner of a shopping mall can share this information with every store. With this data, shop owners can learn when you visited them last and what you looked at or purchased, and by sharing this data with other stores, they can predict what you’re looking to buy next.

Our point was not that new regulations should prohibit all such technology. To the contrary, we are among the companies working to help stores responsibly use technology to improve the shopping experience. We believe many consumers will welcome the resulting customer service. But we also felt that people deserve to know when facial recognition is being used, ask questions, and have real choices.24

We recommended that new laws require organizations that use facial recognition to provide “conspicuous notice” so people will know about it.25 And we said there needed to be new rules developed to decide when and how people can exercise meaningful control and provide consent in such contexts. The latter issue clearly will require additional work over the coming years to define the right legal approach, especially in the United States where privacy laws are less developed than in Europe.

It was also helpful to think about the reach of new laws. For some aspects, we didn’t need to encourage the passage of laws everywhere. For example, if one significant state or country were to require that companies make their facial-recognition services available for public and academic testing, then the results could be published and would spread everywhere else. Acting on this belief, we encouraged state legislators to consider new legislation as they prepared to convene for their sessions across the United States at the start of 2019.26

But when it comes to consumer privacy protection and the protection of democratic freedoms, one needs new laws in every jurisdiction. We recognized that this is likely unrealistic, given the differing views of governments around the world. For this reason, a simple call for the government to act would never be enough. Even if the US government got its act together, it’s a big world. People could never have confidence that all the world’s governments would use this technology in a way that is consistent with human rights protections.

The need for government leadership does not absolve technology companies of our own ethical responsibilities. Facial recognition should be developed and used in a manner consistent with broadly held societal values. We published six principles corresponding to our legislative proposals, which we have gone on to apply to our facial-recognition technology, and we have created systems and tools to implement them.27 Other tech companies and advocacy groups have started to adopt similar approaches.

The facial-recognition issue provides a glimpse into the likely evolution of other ethical challenges for artificial intelligence. While one can start, as we did, with broad principles that are applicable across the board, these principles are tested when put into practice around concrete AI technologies and specific scenarios. That’s also when potentially controversial AI uses are more likely to emerge.

There will be more issues. And as with facial recognition, each will require detailed work to sift through the potential ways the technology will be used. Many will require a combination of new regulation and proactive self-regulation by tech companies. And many will raise important and differing views between countries and cultures. We will need to develop a better capability for countries to move more quickly and collaboratively to address these issues on a recurring basis. That’s the only way we’ll ensure that machines remain accountable to people.