In December 2013, as tech leaders met at the White House to urge President Obama to reform government surveillance practices, the conversation at one point changed course. The president paused and offered a prediction. “I have a suspicion that the guns will turn,” he said, suggesting that many of the companies represented at the table held more personal data than any government on the planet. The time would come, he said, when the demands that we were making on the government would be made on the tech sector itself.
In many ways, it was surprising that the guns hadn’t turned already. In Europe, arguably, they had turned long before. The European Union had adopted a strong data privacy directive in 1995.1 It established a solid baseline for privacy protection that exceeded anything found in the United States. The European Commission built on this directive by proposing an even stronger privacy regulation in 2012. It would take four years of deliberations, but the EU adopted its sweeping General Data Protection Regulation in April 2016.2 When the United Kingdom voted just two months later to leave the EU, the country’s data protection authorities were quick to affirm their support for continuing to apply the new rules in the UK itself. As Prime Minister Theresa May told tech executives in a meeting in early 2017, the government recognized that the UK economy would remain dependent on data flows with the continent, and this required a uniform set of data privacy rules.
But what about the United States? As data privacy rules continued to sweep around the world, it remained an outlier. Across Europe, officials increasingly fretted about the protection of their citizens’ privacy in a world where data crossed borders and entered American data centers, but the United States failed to protect privacy broadly at the national level. I had given a speech myself on Capitol Hill in 2005 calling for the adoption of national privacy legislation.3 But aside from HP and a few other companies, most of the industry either yawned or opposed the idea. And Congress remained uninterested.
It would take the efforts of two unlikely individuals to spark change in the United States. The first person to go to the mat on privacy was a law student from the University of Vienna named Max Schrems. During a stopover in Europe in 2019, Max introduced us to the Austrian delicacy of boiled beef and shared his unlikely story.
Schrems is a bit of a celebrity in Austria, and if you followed the trans-Atlantic privacy saga, he’s instantly recognizable. “I lost my privacy over a privacy case,” he laughed.
Privacy, including the American notion of it, has always intrigued him. When Schrems was seventeen, he was dropped into the “middle of nowhere” in Florida as a high school exchange student. The tiny town of Sebring was a culture shock for sure, but not for the reasons you might expect. It wasn’t social gatherings centered on the Future Farmers of America or the Southern Baptist church that disoriented Schrems, but rather the school’s methods of tracking students.
“There was a whole pyramid of control,” he said. “We had a police station at the school and there were cameras in every corridor. Everything was tracked: grades, SAT scores, attendance, even the little stickers on our student IDs that allowed us to use the internet.”
Schrems proudly recalled how he helped his American peers circumvent the blocks the school had put on Google searches. “I showed them that there’s Google.it, which works perfectly fine because the school only blocked dot-com,” he said. “The exchange student introduced the school to international top-level domains!”
It was a relief to return to Vienna, he told us, “where we have so much freedom.”
Privacy was still on Schrems’s mind in 2011, when at the age of twenty-four he returned to the United States for a semester at the Santa Clara University School of Law in California. A visiting lecturer, who also happened to be a lawyer at Facebook, spoke to Schrems’s privacy class. When Schrems quizzed him on the company’s obligations under European privacy law, the lawyer replied that the laws weren’t enforced. “He told us that ‘you can do whatever you want to do’ because the penalties in Europe are so trivial that the enforcement is nonexistent,” Schrems said. “Obviously, he didn’t know a European was in the room.”
The exchange inspired Schrems to dig deeper and write his term paper about what he regarded as Facebook’s shortcomings in addressing its European legal obligations.
For most students, the story would have ended there. But Schrems was no ordinary student. Within a year, he took what he learned and filed complaints with the data protection authority in Ireland, where Facebook’s European data center was located. His complaints were straightforward but carried implications that could upend the global economy. He argued that the International Safe Harbor Privacy Principles, put in place to allow European data to be transferred to the United States, needed to be struck down. The reason: the United States lacked the legal safeguards to protect European data properly.
The Safe Harbor principles were a fundamental pillar of the trans-Atlantic economy, yet they were little known outside privacy circles. They were a creature of the EU’s 1995 privacy directive, which permitted Europeans’ personal information to move to other countries only if those countries had adequate privacy protection in place. Given the absence of a national privacy law in the United States, some political creativity was needed to keep data moving across the Atlantic. The solution, adopted in 2000, was a voluntary program that enabled companies to self-certify that they complied with seven privacy principles endorsed by the US Department of Commerce. The principles mirrored EU rules, and they enabled the European Commission to conclude that the United States provided adequate privacy protection as required by the 1995 directive.4 The International Safe Harbor Privacy Principles were born.
Fifteen years later, the trans-Atlantic movement of data had exploded. More than four thousand companies were taking advantage of Safe Harbor to deliver $240 billion of digital services annually.5 This included everything from insurance and financial services to books, music, and movies. But the financial aspects represented just the tip of the information iceberg. American companies had 3.8 million employees in Europe who depended on Safe Harbor data transfers for everything from their paychecks to health benefits and personnel reviews.6 The total European sales of American companies topped a whopping $2.9 trillion, and most of it required the movement of digital data to make sure that goods reached their destination and revenue was accounted for accurately.7 It was a barometer of the world’s extraordinary reliance on data.
While government officials and business leaders saw Safe Harbor as a modern-day necessity, Max Schrems saw something entirely different. Like the youth in Hans Christian Andersen’s fairy tale, he looked at the Safe Harbor principles and asserted, in effect, that “this emperor has no clothes.”
Schrems had been a Facebook user since 2008, and that relationship formed the basis of his complaint with the Irish Data Protection Commissioner. By 2012, he had returned to Vienna. After a “twenty-two-email back-and-forth” with Facebook, Schrems received a CD containing a PDF with twelve hundred pages of his personal data. “It was only half or a third of what they had on me, but three hundred pages of it were things I had deleted—it actually said ‘deleted’ on each post.”
As he saw things, a Safe Harbor agreement that permitted Facebook to collect and use so much data in this way couldn’t possibly provide the protection that European law required.
Schrems publicized his complaints, creating a minor media cycle across Europe as he argued that the Safe Harbor principles should be struck down. Facebook quickly dispatched two of its senior European executives to Vienna to try to persuade him to reconsider his views. They spent six hours in a hotel conference room next to the airport urging Schrems to narrow his complaints. But he would not drop them, insisting that he wanted the Irish commissioner to address his concerns.8
Others in the tech sector and privacy community followed the issue with interest, but most didn’t expect Schrems’s case to go far. After all, he had spent more time drafting his complaints than completing his term paper at Santa Clara, which remained unfinished, though his professor had granted him an extension.9 Soon the matter appeared to reach the end of the road, as the Irish Data Protection Commissioner ruled against Schrems, concluding that it was bound by the European Commission’s finding in 2000 that Safe Harbor was adequate. It seemed time for Schrems to return to writing his law school paper. But he would not back down.
His case finally reached the European Court of Justice. And on October 6, 2015, all hell broke loose.
I was in Florida getting ready for an event with customers from Latin America when the phone rang early in the morning. The court had struck down the International Safe Harbor Privacy Principles.10 It concluded that Europe’s national data protection authorities were empowered to make their own assessments of data transfers under the agreement. In effect, the court gave more authority to independent regulators it knew would be tougher in reviewing privacy practices in the United States.
Immediately people wondered whether this meant a return to the digital dark ages. Would trans-Atlantic data flows now stop? Preparing for precisely this contingency, we had put in place other legal measures to ensure our customers could continue to use our services to move their data internationally. We scrambled to reassure customers. Across the tech sector everyone put on a brave face, but the European court’s decision was disturbing. In the words of one lawyer who helped negotiate Safe Harbor, “We can’t assume that anything is now safe. The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.”11
The decision led to months of feverish negotiations. It was a bit like trying to put Humpty Dumpty back together again. U.S. Commerce Secretary Penny Pritzker and European Commissioner Věra Jourová were on point to develop an approach that was more likely to satisfy the court and Europe’s various privacy regulators. As I arrived at the European Commission in January 2016 to discuss the state of play with Jourová, I was surprised when she greeted me while I was downstairs waiting for an entrance pass. She chuckled as she mentioned that she had just stepped out for a moment. A gentleman she had never met recognized her outside and walked up and said, “We should know each other. My name is Max Schrems.”
While the international negotiations continued, the tech sector prepared for the worst. At Microsoft we explored whether we could take advantage of Seattle’s proximity to Canada and shift key support to our facilities in Vancouver. It would mean shuttling some Redmond employees back and forth, but because the court’s decision didn’t impact data transfers between Canada and Europe, we could ensure more seamless operations.
Ultimately, this proved unnecessary. In early February 2016, Pritzker and Jourová unveiled a new agreement. They replaced the Safe Harbor principles with the Privacy Shield, which contained heightened privacy requirements and an annual bilateral review. Microsoft became the first tech company to pledge that it would comply with the new data-protection demands.12
A data disaster had been averted. But the episode revealed just how much had changed.
For one thing, it showed that there was no such thing as a privacy island—no longer could anyone assume that all their data stayed within the borders of a single nation. This is the case even for a continent as big as Europe or an economy as large as the United States. Personal information moves from country to country for all types of digital transactions, and most of the time people are not aware of it.
This created a new outside political lever with potentially profound privacy implications for the United States. The members of the European Court of Justice effectively empowered the continent’s data-protection regulators, well known for their passionate commitment to privacy, to negotiate for tougher American privacy standards.
If there was any doubt about this goal, it was dispelled when credible firsthand reports circulated quietly within government circles shortly after the court’s 2015 decision. A member of the court who had been at the center of its deliberations met in person with several national privacy regulators to walk them through the details of the decision and recommend how they could best use it to negotiate with the White House and Department of Commerce. It was the type of step that would fly in the face of the separation between courts and the executive branch in the United States. It was unusual in Europe, but not unheard of in many parts of the world.
While American political leaders can give speeches that denounce the overreach of European privacy regulators, there is one thing they cannot change. This is the critical reliance of the US economy on the ability of American firms to move data to and from other countries. In the world today, one can debate whether to construct an immigration wall to stem the flow of people. But no nation can tolerate a barrier that stops the international flow of data. This means trans-Atlantic negotiations that impact the privacy practices of American companies have become an economic fact of life.
Even the ultimate implications for China are weighty. Over time, the European approach can put mounting pressure on China as it confronts an important crossroads: it can move forward without privacy protection for data within its borders, or it can strengthen its economic connections with Europe, with the inevitable data flows this will require. It will become increasingly difficult to do both.
Like the reaction to many near-disasters, however, the immediate response to the negotiation of the Privacy Shield was mostly a sigh of relief. It was another wake-up call, but again people hit the snooze button. Data flows could continue, and companies could continue to do business. Most tech companies and government officials postponed deeper thinking about the longer-term geopolitical implications for another day.
In some key respects, this was more than understandable. The rest of 2016, with the Brexit vote and the American presidential election, commanded people’s attention. And within a few months, everyone was focused on a different privacy development from Europe. It was the EU’s looming implementation date for its General Data Protection Regulation, or GDPR.
The GDPR quickly became a household acronym for people working in the tech sector. While it wasn’t unusual to hear lawyers use acronyms to refer to government regulations, the GDPR started to roll off the tongues of engineers, marketers, and salespeople alike. There was good reason. The regulation required the re-architecting of many of the world’s technology platforms, and this was no small task. While it wasn’t necessarily part of the EU’s plan, it became a second way for Europe to influence privacy standards across the United States and around the world.
The GDPR is different from many government regulations. Most of the time, a regulation tells a company what it cannot do. For example, don’t include misleading statements in your advertisements. Or don’t put asbestos in your buildings. The fundamental philosophy of a free market economy encourages business innovation, with regulation putting certain conduct off-limits but otherwise leaving companies broad freedom to experiment.
One of the biggest features of the GDPR is, in effect, a privacy bill of rights. By giving consumers certain rights, it requires that companies not just avoid certain practices but create new business processes. For example, companies with personal information are required to enable consumers to access it. Customers have a right to know what information a company has about them. They have a right to change the information if it’s inaccurate. They have a right to delete it under a variety of circumstances. And they have a right to move their information to another provider if they prefer.
In important ways, the GDPR is akin to a Magna Carta for data. It represents a critical second wave of European privacy protection. The first wave came in 1995, with a privacy directive that required that websites notify consumers and get their consent before collecting and using their data. But as the internet exploded, people were inundated with privacy notices and had little time to read them. Recognizing this, Europe’s GDPR required that companies give consumers the practical ability to go online to view and control all the data that had been collected from them.
It’s not surprising that its implications for technology are so sweeping. Start with the proposition that any company with millions of customers—or even thousands of customers—needs a defined business process to manage these new customer rights. Otherwise it will be swamped with inefficient and almost certainly incomplete work by employees to track down a customer’s data. But more than that, the process needs to be automated. To comply quickly and inexpensively with the GDPR, companies need to access a customer’s data in a unified way across a variety of data silos. And this requires changes to technology.
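To make this concrete, here is a minimal sketch of that kind of automation, assuming an invented, simplified architecture: a single front door that fans a customer’s access or erasure request out across otherwise independent data silos. Every name in it (DataSilo, DsrOrchestrator, the sample silos) is hypothetical, for illustration only, not a description of Microsoft’s actual systems.

```python
# Hypothetical sketch: one automated entry point for data-subject
# requests, fanned out across independent per-service data stores.
from abc import ABC, abstractmethod


class DataSilo(ABC):
    """Uniform contract that each service's data store implements."""

    name: str

    @abstractmethod
    def export(self, user_id: str) -> dict:
        """Return all personal data held for this user."""

    @abstractmethod
    def delete(self, user_id: str) -> None:
        """Erase this user's personal data from the silo."""


class MailSilo(DataSilo):
    name = "mail"

    def __init__(self):
        self._store = {"u42": {"address": "u42@example.com"}}

    def export(self, user_id):
        return self._store.get(user_id, {})

    def delete(self, user_id):
        self._store.pop(user_id, None)


class GamingSilo(DataSilo):
    name = "gaming"

    def __init__(self):
        self._store = {"u42": {"gamertag": "PlayerOne"}}

    def export(self, user_id):
        return self._store.get(user_id, {})

    def delete(self, user_id):
        self._store.pop(user_id, None)


class DsrOrchestrator:
    """Single front door for access and erasure requests."""

    def __init__(self, silos):
        self.silos = silos

    def access_request(self, user_id):
        # One unified view, instead of a manual hunt through each system.
        return {silo.name: silo.export(user_id) for silo in self.silos}

    def erasure_request(self, user_id):
        for silo in self.silos:
            silo.delete(user_id)


if __name__ == "__main__":
    orchestrator = DsrOrchestrator([MailSilo(), GamingSilo()])
    print(orchestrator.access_request("u42"))   # full unified export
    orchestrator.erasure_request("u42")
    print(orchestrator.access_request("u42"))   # now empty per silo
```

The design point is the uniform contract: each engineering team can keep its own back-end storage so long as it exposes the same small interface, and compliance then has one automated path rather than a manual search through every system.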
For a diversified tech company like Microsoft, the impact of the GDPR could hardly have been more intense. We had more than two hundred products and services, and many of our engineering teams had been empowered to create and manage their own back-end data infrastructures. There were certain similarities, but there were also important differences in the information architecture used in different parts of the company.
Quickly we realized these differences would be problematic under the GDPR. Consumers in the EU would expect a single process that would pull all their information from across all our services so they could see it in one unified view. The only way for this to happen efficiently would be for us to create a single information architecture that spanned all our services from one end to the other: from Office 365 and Outlook to Xbox Live, Bing, Azure, Dynamics, and everything in between.
In early 2016, we assembled a team with some of our best software architects. They had two years before the GDPR would take effect on May 25, 2018, but they had no time to spare.
The architects first turned to the lawyers, who defined what the GDPR required. Together they created a specification that listed all the technology features our services would need to enable. The architects then crafted a new blueprint for the processing and storage of information that would apply to all our services and make these features effective.
By the last week of August, the plan was ready for review at a meeting with Satya and the company’s senior leadership team. Everyone knew that the blueprint called for a massive amount of engineering work. We’d need to move more than three hundred engineers to work full-time on the project for at least eighteen months. And in the final six months before the GDPR implementation date, the number would swell into the thousands. It represented a financial commitment in the hundreds of millions of dollars. It was not a meeting that anyone wanted to miss. Some cut short their vacation to be there.
The engineering and legal teams walked through the blueprint, timelines, and resource allocations. It impressed everyone. And in some ways, it surprised everyone. As the meeting progressed, Satya suddenly exclaimed with a bit of a chuckle, “Isn’t this great?” He continued, “For years it has been next to impossible to get all the engineers across the company to agree on a single privacy architecture. Now the regulators and lawyers have told us what to do. The job of creating a single architecture just got a whole lot easier.”
It was an interesting observation. Engineering is a creative process and engineers are creative people. When two software engineering teams approached the same problem in different ways, it could be enormously difficult to persuade them to reconcile their differences and develop a common approach. Even if the differences didn’t go to the heart of a feature of fundamental importance, people tended to hold on to what they had created.
Given the large, diversified, and empowered engineering structure at Microsoft, this challenge was sometimes greater than at other tech companies. In the past it had sometimes led us to maintain two or more overlapping services for years, an approach that almost never turned out well. Apple, in contrast, had sometimes relied on its narrower product focus and Steve Jobs’s centralized decision making to solve this problem. It was perhaps ironic, but European regulators were doing us something of a favor by defining a singular approach that required engineering compromise all around.
Satya signed off on the plan. Then he turned to everyone and added a new requirement. “As long as we’re going to spend all the time and money to make these changes, I want to do this for more than ourselves,” he said. “I want every new feature that’s available for our use as a first party to be available for our customers to use as a third party.”
In other words, create technology that could be used by every customer to comply with the GDPR. Especially in a data-dominant world, it made complete sense. But it also added more work. All the engineers in the room gulped. They left the meeting knowing they’d need to put even more people on the project.
The enormous technical requirements help explain a second dynamic that quickly emerged, one with important geopolitical implications. Once the engineering work was on track to comply with the GDPR, it was hard to be enthusiastic about creating a different technical architecture for other places. The costs and engineering complexity of maintaining differing systems are just too great.
This led to an interesting conversation with Canadian prime minister Justin Trudeau in early 2018. As Satya and I met with him and some of his top advisers, we touched upon privacy issues, which remained an important topic with the Canadian public. As Trudeau mentioned the potential for changes in Canadian privacy law, Satya encouraged him simply to adopt the provisions in the GDPR. While this suggestion was met with some surprise, Satya explained that unless there was some difference of fundamental importance, the costs of maintaining a different process or architecture for a single nation seemed likely to outweigh the potential benefits.
Our greater enthusiasm for the GDPR put us in a different camp from others in the tech sector, who tended to focus more on the parts of the regulation they found onerous. While there were parts of the GDPR that we found confusing or worse, we believed that one key to long-term success for the tech sector was sustaining public trust on privacy issues. This approach was born in part from our antitrust experience in the 1990s and the high reputational price we had paid. A more balanced posture on potentially contentious regulatory issues might strike some of our tech sector peers or even our own engineers as excessively diplomatic. But I felt that on most days and most issues it was the wiser course.
Others in the tech sector nonetheless often pointed to the American public’s ambivalence toward privacy as a reason to ignore US regulatory pressures. “Privacy is dead,” they’d say. “People just need to get over it.”
I believed the privacy issue would be quiet until the day it was not. A firestorm could break out with little of the political foundation in place for a more thoughtful approach. The public ambivalence toward privacy reminded me of the nuclear power industry’s experience decades earlier.
Throughout the 1970s the nuclear power industry had failed to engage in an effective public discussion about the risks associated with its technological advances, leaving the public and politicians alike unprepared for the meltdown that occurred at the Three Mile Island Nuclear Generating Station in Pennsylvania in 1979. As a result of the calamity, and unlike in other countries, the political fallout from Three Mile Island stopped American nuclear power construction in its tracks. It would take thirty-four years before construction would start on another nuclear power plant in the United States.13
I felt this was a historical lesson to learn from rather than repeat.
In March 2018, the privacy equivalent of Three Mile Island arrived when the Cambridge Analytica controversy exploded. Facebook users learned that their personal data had been harvested by the political consulting firm to build a database targeting US voters with advertisements designed to support Donald Trump’s presidential campaign. While the usage itself violated Facebook’s policies, the company’s compliance systems had failed to detect the problem. It was the type of issue that draws plenty of criticism but leaves a company with no real defense. All it can do is apologize, which is what Mark Zuckerberg did.14
Within weeks the public mood shifted in Washington, DC. Instead of dismissing regulation, politicians and tech leaders finally were talking about it as an inevitability. But they failed to describe what they thought this regulation should do.
That answer would come from the other side of the country, near Silicon Valley itself. And this drama involved a second character who was as unlikely to play a leading role as Max Schrems had been.
It was an American named Alastair Mactaggart. In 2015, the San Francisco Bay Area real estate developer hosted a dinner party at his home in Piedmont, California, a leafy suburb across the bay from the Silicon Valley empires quietly dealing in private information. As Mactaggart quizzed one of his guests about his job at Google, he wasn’t just dissatisfied with the answers; he found them terrifying.
What private data were tech companies collecting? What were they doing with it? And how could he opt out? If people knew what Google knew, the engineer replied, “They’d freak out.”
That conversation over cocktails set the wheels in motion on a two-year, more than $3 million crusade. “It felt very important. I thought, ‘Someone has to do something about this,’” Mactaggart told us almost three years later when we met in San Francisco. “I figured that the someone to do something might as well be me.”
The father of three wasn’t looking to take a swing at the tech industry. He was a successful businessman and a firm believer in free and open markets. After all, he’d made his money from soaring real estate prices in the tech-fueled region. But he was determined to make a difference, hoping to one day tell his kids that he’d helped protect something precious: our personal data.
In the age of what Mactaggart and some others call “commercial surveillance,” our online searches, communications, digital location, purchases, and social media activity tell more about us than we probably want to share.15 He concluded that this bestowed incredible power on a handful of companies. “You must accept their privacy terms, or you can’t use their services,” he said, referring to the free online tools we unwittingly pay for with our information. “But these are services that we rely on to live in the modern world. There really is no opting out.”
This lack of oversight propelled him to recruit like-minded supporters and draft a new privacy law for California. “I live in a highly regulated world,” he said, referring to the accepted regulations and building codes that govern real estate. “It’s healthy. The law needs to catch up with tech or people will just continue to push the boundaries.”
Mactaggart had learned plenty about the workings of government from his real estate experience. He was politically astute, recognizing that Silicon Valley opposition would likely make it as difficult to pass a law in Sacramento, the state’s capital, as it would be to pass a federal law in Washington, DC. But California, like some other western US states, had a political alternative. These states, established in the middle and late 1800s, had constitutionally mandated processes that, with enough signatures, could put an initiative on the ballot for the voters to decide.
California’s initiative process had changed the course of American history in the past. Four decades earlier, in 1978, the state’s voters adopted Proposition 13 to limit taxes. The measure reduced property taxes in the state, but its broader impact was far greater. It helped fuel a public movement across the country that added momentum to Ronald Reagan’s election as president in 1980 and to stronger national pressure to reduce the size of government and cut taxes. It created a watershed political moment, reflecting in part the fact that one in every eight Americans lives in California.
If Cambridge Analytica could become the equivalent of Three Mile Island, could Alastair Mactaggart create the privacy equivalent of Proposition 13?
It quickly seemed likely that the answer was yes. Mactaggart gathered more than double the signatures needed to put the measure on the ballot. His polling said that 80 percent of voters started out supportive of his proposal. He was disappointed that 20 percent of voters were not, until his pollsters explained that they had never seen such high numbers. While well-funded initiative campaigns almost always lead to a closer outcome in the end, it was apparent that if Mactaggart was willing to spend more of his real estate millions on an effective campaign, he would have more than a good chance of success at the ballot box in November.
At Microsoft, we considered Mactaggart’s initiative with mixed feelings. On the one hand, we had long supported privacy legislation in the United States, including at the federal level. Led by Julie Brill, a former FTC commissioner who now leads the company’s privacy and regulatory affairs work, we had decided to take a different approach from the rest of the tech sector when the GDPR took effect in May 2018. Rather than make the regulation’s consumer rights available only to people in the EU, we extended this to all our customers everywhere in the world. It made for some surprising insights. We quickly learned that American consumers were even more interested in putting these rights to work than Europeans, validating our sense that the arc of American history would ultimately bend toward the adoption of privacy rights in the United States.16
But we found the text of Mactaggart’s draft initiative complicated and in some respects confusing. We worried that in some places it would lead to technical requirements that would differ from the GDPR for little good reason. These were the types of problems that could be remedied by a legislature and its detailed drafting process, but not by a take-it-or-leave-it proposition at the ballot box. The question was how to persuade everyone to move the effort from the November ballot to the statehouse without killing it along the way.
Other tech companies embarked on a fund-raising campaign to oppose the initiative. Silicon Valley recognized that success would likely require raising more than $50 million. We donated $150,000. It was enough to stay connected with the rest of the industry but not the type of money that would give the opposition effort too much momentum.
Ultimately, the large amount of funding needed for a California initiative campaign created an incentive for both sides to negotiate. Mactaggart was willing to sit down with key elected officials to help hammer out the legislative details. Some of the other tech companies had a hard time deciding what they wanted to do. But we dispatched two of our privacy experts to Sacramento, where they worked pretty much around the clock, going through the details with the legislative leaders and Mactaggart’s team.
At the last possible minute, the legislature adopted the California Consumer Privacy Act of 2018, and Governor Jerry Brown quickly signed the measure. It was the strongest privacy law in the history of the United States. Like the GDPR, it gives the Golden State’s residents the right to know what data companies are collecting on them, to say no to its sale, and to hold firms accountable if they don’t protect personal data.
The national impact was felt almost immediately. Within a matter of weeks, even opponents who had long resisted comprehensive privacy legislation in Washington, DC, began to discover something akin to new religion. With the California floodgates open, it was apparent that other states would likely follow. Rather than face a patchwork of state rules, business groups started lobbying Congress to adopt a national privacy law that would preempt—or in effect overrule—California’s law and other state measures. While much remained ahead, Mactaggart had successfully changed the country’s consideration of privacy issues. It was a momentous achievement.
When we sat down with Mactaggart in San Francisco, it was impossible not to be impressed. It would have been easy to see him as a threat—an activist looking to rein in an industry that had become too powerful. Instead, we found a likable pragmatist who was thinking broadly about the future.
“This isn’t over,” he said. “We’ll be talking about technology and privacy for the next hundred years. Just like we do with antitrust law more than a century after the Standard Oil case.”
As a company that had survived antitrust turbulence eighty years after the DOJ had broken up Standard Oil, we easily understood the comparison. And ultimately, Mactaggart’s historical comparison provides some of the most important food for thought.
The combination of the efforts of Max Schrems and Alastair Mactaggart reveals several important lessons for the future.
First, it’s hard to believe privacy will ever die the quiet death that some in the tech sector predicted a decade or two ago. People have awakened to the fact that virtually every aspect of their lives leaves behind some type of digital footprint. Privacy needs to be protected, and stronger privacy laws have become indispensable. The day will come when the United States joins the European Union and other countries in applying a law like the GDPR.
We’re also likely to see a third wave of privacy protection emerge over the next few years, especially in Europe. Just as the GDPR responded to the plethora of privacy notices that people didn’t have time to read, we’re already hearing concerns that people lack the time to review all the data that the GDPR is making available online. This is likely to prompt a new wave of governmental rules to regulate how data can be collected and used.
This also means the tech sector will need to apply more of its technical creativity to innovations that protect privacy while also enabling data to be put to good use. Some new technical approaches have already emerged, like the ability to pursue AI advances using data that remains encrypted and hence better protected. But this is just a start.
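To give a flavor of what computing on encrypted data means, here is a toy sketch of the Paillier cryptosystem, whose additive homomorphism lets a party add two numbers it only ever sees in encrypted form. The primes are deliberately tiny and the randomness is fixed, so this illustrates the idea only; any real system would rely on a vetted cryptographic library and far larger parameters.

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the SUM of the plaintexts. Demo parameters only.
from math import gcd

p, q = 293, 433                # demo-sized primes, far too small for real use
n = p * q
n_sq = n * n
g = n + 1                      # standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)           # modular inverse of lambda mod n (Python 3.8+)


def encrypt(m: int, r: int) -> int:
    """c = g^m * r^n mod n^2, with r coprime to n."""
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq


def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n) * mu % n


c1 = encrypt(20, r=17)
c2 = encrypt(22, r=23)
c_sum = (c1 * c2) % n_sq       # homomorphic addition on ciphertexts
assert decrypt(c_sum) == 42    # the computing party never saw 20 or 22
```

Practical privacy-preserving machine learning builds on far heavier machinery, such as fully homomorphic encryption, secure multiparty computation, and federated learning, but the underlying bargain is the same: the data’s owner keeps the key while others compute on ciphertext.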
Finally, the experiences of Schrems and Mactaggart speak to important strengths and opportunities for the world’s democracies. The leaders of an autocratic government might look with alarm at the unpredictable ability of a law student and a real estate developer to upend the rules that govern some of the most powerful technologies of our time. But there’s another perspective—and on balance it seems the better view. Schrems and Mactaggart used established judicial and initiative processes to redress what they regarded as a wrong. Their success speaks to the ability of a democratic society, when it works well, to adapt to a people’s changing needs and move a nation’s law where it needs to go with less rather than more disruption.
The integrated nature of the global economy and the long reach of Europe’s privacy rules will create pressure even on countries like China to adopt strong privacy measures. In other words, Europe is not just the birthplace of democracy and the cradle of privacy protection. It’s quite possibly the world’s best hope for privacy’s future.