10

Liability

You break it, you buy it. That’s true everywhere you go—except in Silicon Valley.

Remember, for example, from an earlier chapter, that Section 230 of the Communications Decency Act, known as the free pass to social media, largely exempted platforms like Meta and Twitter from responsibility for what their users post. (History note, for those not old enough to remember: when Section 230 was written, the noble goal was to protect internet service providers [ISPs] that simply passed information along—not to absolve social media platforms, which had not yet been invented. ISPs kept shuttling data across the network, in keeping with the original intention. But social media companies came to do something different: to actively determine news feeds with algorithms that polarized society, in order to maximize profits and engagement. The technology changed, and the laws didn’t keep up; the tech companies took advantage, and that’s how we got where we are now.)

This has to stop; if social media platforms aggressively circulate lies—particularly lies they easily could have fact-checked—they should be held responsible. Newspapers can be sued for publishing lies; why should social media be exempt?

Section 230 needs to be repealed (or rewritten) to assign responsibility for anything that circulates widely. We can’t just leave our media landscape to the whims of tech leaders making unilateral decisions—people with huge economic vested interests, who may or may not care much about the societal consequences of what they circulate.

And we have to make absolutely, positively certain that the makers of AI (in many cases the same companies that run social media) are held responsible for the harms they are likely to cause, as they automate everything and excise humans from the loop.


Tom Wheeler, the former chairman of the Federal Communications Commission (FCC), in his excellent book Techlash, talks about the common law principle called duty of care, which says that “the provider of a good or service has the obligation to anticipate and mitigate potential harms that may result.” By way of example, he talks about nineteenth-century railroads:

As nineteenth-century trains raced across farmers’ lands, the steam engines threw off hot cinders that would set fire to the barns, hayricks, and homes as they passed. The Duty of Care, in the form of the tort claim of negligence, was enforced against the offending railroads. The result was that the railroads installed screens across the smokestacks of the steam engines to catch the cinders. The digital economy needs digital smokestack screens to catch the dangerous effects thrown off by platform companies.1

I couldn’t agree more. Dangerous hot cinders are, of course, just one negative externality among many, like the health costs of secondhand smoke or the climate costs of pollution. As Wheeler notes, social media has had its share: “The decision of digital platforms to curate their content for maximum engagement results in negative externalities ranging from bullying to lies, hate, and disinformation campaigns by foreign governments.”

For the most part, tech companies haven’t been held responsible for any of this; the worst they have faced is occasionally troubling optics. So they haven’t much cared, and when they have cared, it’s usually only briefly. Stricter liability laws, to hold companies responsible for their negative externalities—including new problems created or accelerated by AI—are vital.

Facebook would presumably care a lot more about election interference, for example, if there were truly immense (rather than merely very large) cash penalties for amplifying large quantities of demonstrable misinformation; they would care even more if they lost access to certain markets altogether for failing to police themselves. As long as the only penalty is bad optics, or fines they can easily afford, they are unlikely to invest heavily in solving the problem. Microsoft’s Designer software appears to have been used to generate the nonconsensual deepfake pornography of Taylor Swift, but it is doubtful that the company will be held responsible in any way, no matter how bad the situation gets, so there is little incentive to fully solve the problem.2 Insisting on duty of care as a condition for access to customers would be a start.


In December 2023, the EU reached an informal agreement, called the Product Liability Directive.3 The directive aims to

provide people who have suffered material damage from a defective product with the legal basis to sue the relevant economic operators and seek compensation. . . . Product manufacturers will be liable for defectiveness resulting from a component under its control, which might be tangible, intangible, or a related service, like the traffic data of a navigation system. . . . A product is deemed defective when it does not provide the safety a person is entitled to expect based on the reasonable foreseeable use, legal requirements, and the specific needs of the group of users for whom the product is intended.

An important part of this directive, which ties in with the earlier discussion in the chapter on transparency, is that the defendant will be required to disclose relevant evidence. (A further goal of the directive is to bring a disparate set of laws across individual EU countries into harmony.4)

Another goal was to “simplify the burden of proof” for people seeking compensation, to protect consumers who might otherwise face “excessive difficulties in particular due to technical or scientific complexity.”5 All of this is to the good.


Some existing US laws, especially Section 5 of the act that established the FTC, which “prohibit[s] unfair or deceptive practices,”6 give at least some coverage, and underlie some aspects of proposed legislation like the Foundation Model Transparency Act.7 Families of people who perished in crashes involving cars with driver-assist systems are leaning on existing liability laws.8

But there is nothing comprehensive in the United States, and existing laws do not clearly and fully address AI.

Worse, the infamous Section 230, which by default protects media platforms from liability for the content they share, was also written pre-AI. No court has yet ruled clearly on whether it applies to AI-generated content, which means that in a country like the United States, whose legal system revolves around judicial precedent, things are up for grabs.

In the meantime, AI companies—like social media companies before them—might well try to use Section 230 to shield themselves from liability.

Seeking stronger, clearer, more explicit protections, Senator Richard Blumenthal (D-CT) and Senator Josh Hawley (R-MO) have proposed, in their Bipartisan AI Framework, to protect consumers along lines somewhat similar to what Europe has informally agreed on.9 (So far it is, sadly, merely a proposal, not something that the majority leader has chosen to bring before the full Senate.) In their words, which I fully endorse:

Congress should ensure that A.I. companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms. Where existing laws are insufficient to address new harms created by A.I., Congress should ensure that enforcers and victims can take companies and perpetrators to court, including clarifying that Section 230 does not apply to A.I.10

For those unfamiliar with the term, a private right of action is, basically, legal grounds for a lawsuit.


At the widely covered January 2024 Senate Judiciary Committee hearing at which Mark Zuckerberg was the prime focus, the costs of Section 230 were front and center.11 Senator Dick Durbin (D-IL) went after it hard, right from the beginning:

Only one other industry in America has an immunity from civil liability. For the past 30 years, Section 230 has remained largely unchanged, allowing big tech to grow into the most profitable industry in the history of capitalism, without fear of liability for unsafe practices. That has to change.

Senator Lindsey Graham (R-SC) followed suit, questioning Jason Citron, the CEO of the platform Discord, and then going even harder after Zuckerberg:

Sen. Graham: Do you support removing Section 230 liability protections for social media companies?

Citron: I believe that Section 230 needs to be updated. It’s a very old law.

Sen. Graham: Do you support repealing it so people can sue if they believe they’re harmed?

Citron: I think that Section 230 as written, while it has many downsides, has enabled innovation on the internet . . .

Sen. Graham: So here you are. If you’re waiting on these guys to solve the problem, we’re going to die waiting. [Turning his attention to Zuckerberg] Mr. Zuckerberg. Try to be respectful here. The representative from South Carolina, Mr. Guffey’s son got caught up in a sex extortion ring in Nigeria using Instagram. He was shaken down, paid money that wasn’t enough and he killed himself using Instagram. What would you like to say to him?

Zuckerberg: It’s terrible. I mean no one should have to go through something like that.

Sen. Graham: You think he should be allowed to sue you?

Zuckerberg: I think that they can sue us.

Sen. Graham: Well, I think he should, but [because of Section 230] he can’t.

Later, Senator Amy Klobuchar (D-MN) concurred:

I agree with Senator Graham that nothing is going to change unless we open up the courtroom doors. I think the time for all of this immunity is done because I think money talks even stronger than we talk up here.

Every one of the senators seemed ready to repeal (or amend) Section 230. Godspeed to them. American citizens should have the same rights to sue tech companies that European citizens soon will.


All that said, although the tech companies absolutely should not continue to have the kind of blanket shielding from liability that they currently have, liability is tricky. You don’t want to hold car manufacturers responsible for every bank robber who uses their cars. But you might want to hold gun manufacturers or cigarette companies accountable to some degree.

A recent MIT working paper introduces a thought-provoking metaphor for deciding when a user should be responsible and when a manufacturer should be: what the authors call a “fork in the toaster” situation, asking “when a user . . . is responsible for a problem because the AI system was used in a way that was clearly not responsible or intended.” By analogy, they write:

one can’t be held personally responsible for putting a fork in a toaster, if neither the nature of toasters nor the dangers of electricity are widely known. . . . The AI system provider should in most cases be held responsible for a problem unless it is able to show that a user should have known that a use was irresponsible and could not have been foreseen or prevented.12

Current fine-print taglines like “Bing is powered by AI, so surprises and mistakes are possible” hardly seem to me like enough to prepare lay users for all the chaos that can ensue from hallucinations, bias, and other problems. Meanwhile, Microsoft’s Designer is being used to create deepfake porn. We could well ask whether the companies that make such tools have done enough to keep them from being used in that fashion.

In my view, current AI practices fall far short of protecting society from the potential ills that Generative AI has been implicated in.

According to a September 2023 poll, 73 percent of US voters “believe AI companies should be held liable for harms from technology they create.”13 It’s time to make that happen.