If we are to prevent another Cambridge Analytica from attacking our civil institutions, we must seek to address the flawed environment in which it incubated. For too long the congresses and parliaments of the world have fallen for a mistaken view that somehow “the law cannot keep up with technology.” The technology sector loves to parrot this idea, as it tends to make legislators feel too stupid or out of touch to challenge their power. But the law can keep up with technology, just as it has with medicines, civil engineering, food standards, energy, and countless other highly technical fields. Legislators do not need to understand the chemistry of molecular isomers inside a new cancer drug to create effective drug review processes, nor do they need to know about the conductivity of copper in high-voltage wiring to create effective insulation safety standards. We do not expect our legislators to have expert technical knowledge in any other sector because we devolve technical oversight responsibility to regulators. Regulation works because we trust people who know better than we do to investigate industries and innovations as the guardians of public safety. “Regulation” may be one of the least sexy words, evoking an image of faceless jobsworths with their treasured checklists, and we will always argue about the details of their imperfect rules, but nonetheless safety regulation generally works. When you buy food in the grocery store or visit your doctor or step onto an airplane and hurtle thousands of feet in the air, do you feel safe? Most would say yes. Do you ever feel like you need to think about the chemistry or engineering of any of it? Probably not.
Tech companies should not be allowed to move fast and break things. Roads have speed limits for a reason: to slow things down for the safety of people. A pharmaceutical lab or an aerospace company cannot bring new innovations to market without first passing safety and efficacy standards, so why should digital systems be released without any scrutiny? Why should we allow Big Tech to conduct scaled human experiments, only to discover the harms once they have become too big a problem to manage? We have seen radicalization, mass shootings, ethnic cleansing, eating disorders, changes in sleep patterns, and scaled assaults on our democracy, all directly influenced by social media. These may be intangible ecosystems, but the harms are not intangible for victims.
Scale is the elephant in the room. When Silicon Valley executives excuse themselves and say their platform’s scale is so big that it’s really hard to prevent mass shootings from being broadcast or ethnic cleansing from being incited on their platforms, this is not an excuse—they are implicitly acknowledging that what they have created is too big for them to manage on their own. And yet, they also implicitly believe that their right to profit from these systems outweighs the social costs others bear. So when companies like Facebook say, “We have heard feedback that we must do more,” as they did when their platform was used to live-broadcast mass shootings in New Zealand, we should ask them a question: If these problems are too big for you to solve on the fly, why should you be allowed to release untested products before you understand their potential consequences for society?
We need new rules to help create a healthy friction on the Internet, like speed bumps, to ensure safety in new technologies and ecosystems. I am not an expert on regulation, nor do I profess to know all the answers, so do not take these words as gospel. This should be a conversation that the wider community takes part in. But I would like to offer some ideas for consideration—at the very least to provoke thought. Some of these ideas may work, others may not, but we have got to start thinking about this hard problem. Technology is powerful, and it has the potential to lift up humanity in so many ways. But that power needs to be focused on constructive endeavors. With that, here are some ideas to help you consider how to move forward:
The history of building codes stretches back to the year 64 C.E., when Nero restricted housing height, street width, and public water supplies after a devastating fire ravaged Rome for nine days. Though a fire in 1631 prompted Boston to ban wooden chimneys and thatched roofs, the first modern building code emerged out of the devastating carnage of the Great Fire of London, in 1666. As in Boston, London houses had been densely constructed from timber and thatch, which allowed the fire to spread rapidly over four days. It destroyed 13,200 homes, eighty-seven churches, and nearly all of the city’s government buildings. Afterward, King Charles II declared that no one shall “erect any House or Building, great or small, but of Brick, or Stone.” His declaration also widened thoroughfares to stop future fires from spreading from one side of the street to the other. After other historic fires in the nineteenth century, many cities followed suit, and eventually public surveyors were tasked with inspections and ensuring that the construction of private property was safe for the inhabitants and for the public at large. New rules emerged, and eventually the notion of public safety became an overarching principle that could override unsafe or unproven building designs, regardless of the desires of property owners or even the consent of inhabitants. A platform like Facebook has been burning for years with its own disasters—Cambridge Analytica, Russian interference, Myanmar’s ethnic cleansing, New Zealand’s mass shootings—and, as with the reforms after the Great Fire, we must begin to look beyond policy, to the underlying architectural issues that threaten our social harmony and citizens’ well-being.
The Internet contains countless types of architectures that people interact with on a daily and sometimes hourly basis. And as we merge the digital world with the physical world, these digital architectures will impact our lives more and more. Privacy is a fundamental human right and should be valued as such. However, too often privacy is eviscerated through the bare performance of clicking “accept” to an indecipherable set of terms and conditions. This consent-washing has continually allowed large tech platforms to defend their manipulative practices through the disingenuous language of “consumer choice.” It shifts our frame of thinking away from the design—and the designers—of these flawed architectures, and toward an unhelpful focus on the activity of a user who has neither understanding of nor control over the system’s design. We do not let people “opt in” to buildings that have faulty wiring or lack fire exits. That would be unsafe—and no terms and conditions pasted on a door would let any architect get away with building dangerous spaces. Why should the engineers and architects of software and online platforms be any different?
In this light, consent should not be the sole basis of a platform’s ability to operate a feature that engages the fundamental rights of users. Following the Canadian and European approach of treating privacy as an engineering and design issue—a framework called “privacy by design”—we should extend this principle to create an entire engineering code: a building code for the Internet. This code would go beyond privacy to include respect for the agency and integrity of end users. It would establish a new principle—agency by design—requiring that platforms use choice-enhancing design. This principle would also ban dark patterns: design choices that deliberately confuse, deceive, or manipulate users into agreeing to a feature or behaving in a certain way. Agency by design would also require proportionality of effects, meaning that the effect of a technology on the user must be proportional to its purpose and benefit to the user. In other words, there would be a prohibition on undue influence in platform design where the effects are enduring and disproportionate, such as addictive designs or features that cause lasting harm to users’ mental health.
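To make the idea of a dark pattern concrete, here is a minimal illustrative sketch in Python, contrasting a manipulative consent prompt with a choice-enhancing one of the kind an agency-by-design rule might require. The prompts, wording, and function names are invented for illustration and are not drawn from any real platform.

```python
# Hypothetical illustration of a "dark pattern" consent prompt versus a
# choice-enhancing ("agency by design") alternative. All names and copy
# are invented for this sketch.

def dark_pattern_consent() -> bool:
    """Nudges the user toward 'yes': a pre-selected default, confirm-shaming
    copy, and a buried decline option."""
    print("Unlock a better experience by sharing your activity data!")
    print("[1] Yes, personalize everything for me (recommended)")
    print("[2] No, give me a worse experience")
    choice = input("Press Enter to accept, or type 2 to decline: ").strip()
    return choice != "2"  # silence and inertia are treated as consent


def agency_by_design_consent() -> bool:
    """Symmetric, neutral choice: no default, equal prominence, and the
    consequences of each option stated plainly."""
    print("Do you want to share your activity data for personalization?")
    print("[y] Yes - your activity is used to rank content")
    print("[n] No  - content is ranked without your activity data")
    choice = ""
    while choice not in ("y", "n"):
        choice = input("Type y or n: ").strip().lower()
    return choice == "y"  # only an explicit, informed 'y' counts as consent


if __name__ == "__main__":
    # The dark-pattern variant is shown only for contrast; the demo runs the
    # choice-enhancing version.
    print("Consented:", agency_by_design_consent())
```

The difference is not cosmetic: in the first prompt, silence and inertia are counted as consent, while in the second, only an explicit and informed choice is.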
As with traditional building codes, the harm avoidance principle would be a central feature in such a digital building code. This would require platforms and applications to conduct abusability audits and safety testing prior to releasing or scaling a product or feature. The burden would rest with tech companies to prove that their products are safe for scaled use by the public. As such, using the public in live scaled experiments with untested new features would be prohibited, and citizens could no longer be used as guinea pigs. This would help prevent cases like Myanmar, where Facebook gave no prior consideration to how its features could ignite violence in regions of ethnic conflict.
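As a rough sketch of how such a pre-release requirement might look in practice, the following Python example models a hypothetical release gate that refuses to clear a feature for scaled rollout until basic safety artifacts exist. The checklist items, field names, and thresholds are invented for illustration; an actual digital building code would define them in statute and technical standards.

```python
# Hypothetical sketch of a pre-release safety gate. The checks and fields
# are invented placeholders, not a real regulatory standard.
from dataclasses import dataclass, field


@dataclass
class FeatureSubmission:
    name: str
    abusability_audit_done: bool = False       # adversarial "how could this be abused?" review
    safety_test_report: str | None = None      # documented testing before public release
    vulnerable_group_assessment: bool = False  # impact on protected or at-risk groups
    rollout_cap: int = 0                       # maximum users before mandatory re-review
    issues: list[str] = field(default_factory=list)


def release_gate(f: FeatureSubmission, max_initial_rollout: int = 100_000) -> bool:
    """Return True only if the burden of proof has been met before scaling."""
    if not f.abusability_audit_done:
        f.issues.append("missing abusability audit")
    if f.safety_test_report is None:
        f.issues.append("missing safety test report")
    if not f.vulnerable_group_assessment:
        f.issues.append("missing vulnerable-group impact assessment")
    if f.rollout_cap == 0 or f.rollout_cap > max_initial_rollout:
        f.issues.append("rollout cap absent or too large for an unproven feature")
    return not f.issues


if __name__ == "__main__":
    feature = FeatureSubmission(name="live_broadcast", rollout_cap=50_000)
    print("Cleared for release:", release_gate(feature))
    print("Blocking issues:", feature.issues)
```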
If your child were lost and needed help, whom would you want them to turn to? Perhaps a doctor? Or maybe a teacher? What about a cryptocurrency trader or gaming app developer? Our society esteems certain professions with a trustworthy status—doctors, lawyers, nurses, teachers, architects, and the like—in large part because their work requires them to follow codes of ethics and laws that govern safety. The special place these professions have in our society means that we demand a higher standard of professional conduct and duties of care. As a result, statutory bodies in many countries regulate and enforce the ethical conduct of these professions. For society to function, we must be able to trust that our doctors and lawyers will always act in our interests, and that the bridges and buildings we use every day have been constructed to code and with competence. In these regulated professions, unethical behavior can bring dire consequences for those who transgress boundaries set by the profession—ranging from fines and public shaming to temporary suspensions or even permanent bans for more egregious offenders.
Software, AI, and digital ecosystems now permeate our lives, and yet those who make the devices and programs we use every single day are not obligated by any federal statute or enforceable code to give due consideration to the ethical impacts on users or society at large. As a profession, software engineering has a serious ethics problem that needs to be addressed. Tech companies do not magically create problematic or dangerous platforms out of thin air—there are people inside these companies who build these technologies. But there is an obvious problem: Software engineers and data scientists do not have any skin in the game. If an engineer’s employer instructs her or him to create systems that are manipulative, ethically dubious, or recklessly implemented, without consideration for user safety, there is no requirement to refuse. Worse, an engineer who does refuse such an instruction currently risks repercussions or termination. Even if the unethical design is later found to run afoul of a regulation, the company can absorb liability and pay fines, and there are no professional consequences for the engineers who built the technology, as there would be in the case of a doctor or lawyer who commits a serious breach of professional ethics. This is a perverse incentive that does not exist in other professions. If an employer asked a lawyer or nurse to act unethically, they would be obligated to refuse or face losing their professional license. In other words, they have skin in the game to challenge their employer.
If we as software engineers and data scientists are to call ourselves professionals worthy of the esteem and high salaries we command, there must be a corresponding duty for us to act ethically. Regulations on tech companies will not be nearly as effective as they could be if we do not start by putting skin in the game for people inside these companies. We need to put the onus on engineers to start giving a damn about what they build. An afternoon employee workshop or a semester course on ethics is a wholly insufficient solution for addressing the problems we now face from emerging technologies. We cannot continue down the path we are on, in which technological paternalism and the insulated bro-topias of Silicon Valley create a breed of dangerous masters who do not consider the harm that their work has the potential to inflict.
We need a professional code that is backed by a statutory body, as is the case for civil engineers and architects in many jurisdictions, so that there are actual consequences for software engineers or data scientists who use their talents and know-how to construct dangerous, manipulative, or otherwise unethical technologies. This code should not have loose aspirational language; rather, what is acceptable and unacceptable should be articulated in a clear, specific, and definitive way. There should be a requirement to respect the autonomy of users, to identify and document risks, and to subject code to scrutiny and review. Such a code should also require engineers to consider the impact of their work on vulnerable populations, including any disproportionate impact on users of different races, genders, abilities, sexualities, or other protected groups. And if, upon due consideration, an employer’s request to build a feature is deemed to be unethical by the engineer, there should be a duty to refuse and a duty to report, where failure to do so would result in serious professional consequences. Those who refuse and report must also be protected by law from retaliation by their employer.
Out of all the possible types of regulation, a statutory code for software engineers is probably what would prevent the most harm, as it would force the builders themselves to consider their work before anything is released to the public and not shirk moral responsibility by simply following orders. Technology often embodies our values, so instilling a culture of ethics is vital if we as a society are to increasingly depend on the creations of software engineers. If held properly accountable, software engineers could become our best line of defense against the future abuses of technology. And, as software engineers, we should all aspire to earn the public’s trust in our work as we build the new architectures of our societies.
Utilities are traditionally physical networks said to be “affected with the public interest.” Their existence is unique in the marketplace in that their infrastructure is so elemental to the functioning of commerce and society that we allow them to operate differently from typical companies. Utilities are often by necessity a form of natural monopoly. In a marketplace, balanced competition typically results in innovation, better quality, and reduced prices for consumers. But in certain sectors, such as energy, water, or roads, it makes no sense to build competing power lines, pipelines, or subways to the same places, as it would result in massive redundancy and increased costs borne by consumers. With the increased efficiencies of a single supplier of a service comes the risk of undue influence and power—consumers unable to switch to new power lines, pipelines, or subways could be held hostage by unscrupulous firms.
On the Internet, there are clearly extremely dominant market players. Google accounts for more than 90 percent of all search traffic, and almost 70 percent of adults active on social media use Facebook. But this does not make them universal infrastructure per se. When tech platforms suffer an outage, we can survive and cope for longer (albeit not indefinitely) than we would if the same thing happened with electricity. On the infrequent occasions when Google’s search engine has failed, users coped by moving to other, lesser-known search engines until Google patched its problem. There are also cycles of popularity for large Internet players that are not found in physical infrastructure. MySpace was once the preeminent social media platform, before Facebook crushed it, whereas we rarely if ever encounter market cycles with water or electric companies.
That said, the Internet’s dominant players do share things in common with physical utilities. Like physical utilities, these architectures often serve as a de facto backbone of commerce and society, where their existence has become a given of day-to-day life. Businesses have come to passively rely upon the existence of Google’s search engine for their workforces, for example. And this is not a bad thing. Search engines and social media benefit from network effects, where the more people use the service, the more useful the service becomes. As with physical utilities, scale can create a huge benefit to the consumer, and we do not want to impede this public benefit. However, as with other natural monopolies, the same kinds of risks threaten consumers. And it is these potential harms that we must account for in a new set of rules.
So, in full recognition that there are essential differences between the Internet and physical infrastructure, I will use “Internet utilities” as a term of convenience to mean something similar to but different from a traditional utility: An “Internet utility” is a service, application, or platform whose presence has become so dominant on the Internet that it becomes affected with the public interest by the very nature of its own scale. The regulation of Internet utilities should recognize the special place they hold in society and commerce and impose a higher standard of care toward users. These regulations should take the form of statutory duties, with penalties benchmarked to annual profits as a way to stop the current situation, in which regulatory infractions are negotiated and accounted for as a cost of doing business.
In the same way we do not penalize the scale of electric companies, the scale of Internet utilities should not be penalized where there are network effects of genuine social benefit. In other words, this is not about breaking up large technology companies; this is about making them accountable. However, in exchange for maintaining their scale, Internet utilities should be required to act proactively as responsible stewards of what eventually evolves into our digital commons. They must be made to understand that scale gives rise to innate public interests that in some cases will, by necessity, supersede their private interest in making a profit. As with other utilities, this must include compliance with higher user safety standards specific to software applications and a new code of digital consumer rights. These new digital consumer rights should serve as the basis of universal terms and conditions, so that the interests of Internet users are properly accounted for in areas where technology companies have continuously failed.
The unrestricted power of these Internet utilities to impact our public discourse, social cohesion, and mental health, whether intentionally or through incompetence and neglect, must also be subject to public accountability. A new digital regulatory agency, armed with statutory sanctioning powers, should be established to enforce this framework. In particular, this agency should employ technically competent ombudsmen empowered to conduct proactive technical audits of platforms on behalf of the public. We should also use market-based reinforcement mechanisms, such as requiring Internet utilities to hold insurance against harms incurred from data misuse. By linking the required coverage for data breaches to the market-rate value of the data held, we could create corrective financial pressure to do better.
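To illustrate the arithmetic behind that insurance idea, here is a small hypothetical sketch in Python: if required coverage were pegged to the number of user records held multiplied by a market-rate value per record, and premiums to the expected loss, then hoarding more data would directly raise a platform’s carrying costs. Every figure and function name below is an invented placeholder, not an actuarial estimate.

```python
# Hypothetical sketch: sizing mandatory data-misuse insurance by pegging
# required coverage to the market-rate value of the records a platform holds.
# All numbers are invented placeholders for illustration only.

def required_coverage(records_held: int, market_value_per_record: float) -> float:
    """Coverage pegged to what the held data is worth on the open market."""
    return records_held * market_value_per_record


def annual_premium(coverage: float, breach_probability: float, loading: float = 1.2) -> float:
    """Expected loss multiplied by an insurer's loading factor for costs and margin."""
    return coverage * breach_probability * loading


if __name__ == "__main__":
    coverage = required_coverage(records_held=50_000_000, market_value_per_record=2.00)
    premium = annual_premium(coverage, breach_probability=0.01)
    print(f"Required coverage: ${coverage:,.0f}")  # $100,000,000
    print(f"Annual premium:    ${premium:,.0f}")   # $1,200,000
```

The point of the sketch is the incentive it creates: collecting and retaining less data, or securing it better, becomes the cheaper engineering choice.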
We have seen the value of personal data create entirely new business models and huge profits for social media companies. Platforms such as Facebook have vigorously argued that they are a “free” service, and that if consumers do not have to pay for the service, the platform cannot be engaged in anticompetitive practices. However, this argument requires one to accept that the exchange of personal data for use of a platform is not an exchange of value, when it plainly is. There are entire marketplaces that valuate, sell, and license personal data. The flaw in current antitrust approaches to large tech companies is that the value of the consumer’s data has not been properly accounted for by regulators.
If we actually considered the rising value of the personal data that consumers provide to platforms, we would conclude that consumers have continuously been ripped off by these companies, which are not proportionately increasing their platforms’ value to consumers. In this light, consumers are giving ever more value to these dominant platforms via their data without receiving reciprocal benefits. There may even be an argument under America’s current antitrust laws that, on this basis, the data exchange has been costing consumers more. But even if that were the case, it would be an overly narrow test of what is fair and right for consumers. Instead, were we to create a new classification of Internet utilities, we could use a broader public interest test for the operation, growth, and mergers-and-acquisitions activity of these firms.
However, unlike physical utilities, social media and search engines are not so essential as to be irreplaceable, so regulations should also account for the healthy benefits of industry evolution. We want to avoid regulations that entrench the position of currently dominant Internet utilities at the expense of newer and better offerings. But we also need to reject the notion that any regulation of scaled giants would somehow impede new challengers. To follow this logic would mean that safety and environmental regulation of the oil sector would inhibit the emergence of renewable energy in the future, which makes no sense. And if we are concerned about the inhibition of market evolution, then we could require Internet utilities to share their dominant infrastructure with smaller, competing challengers, to improve consumer choice in the same way dominant telecom companies share communications infrastructure with smaller players. Safety and conduct standards for existing large players are not incompatible with continued technological evolution. In this light, principle-based rather than technology-based regulation should be created so that we are careful not to embed old technologies or outdated business models into regulatory codes.
Thanks. And good luck.