Time after time, purely voluntary self-regulation has proven to be a failure and has largely been abandoned in Europe.
—Mark MacCarthy, Regulating Digital Industries
At that historic May 2023 Senate subcommittee hearing on AI oversight, there was much to admire. Serious public servants temporarily set aside politics, expressed humility, and seemed genuine in their search for what was best for the country. One moment, though, was absolutely cringe-inducing:
Sam Altman [saying much the same as I said a few minutes earlier]: I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused . . . And then third I would require independent audits . . .
Sen. John Kennedy (R-LA): Would you be qualified to, to if we promulgated those rules, to administer those rules?
Altman [declining]: I love my current job. [Crowd laughs]
Sen. Kennedy: Cool. Are there people out there that would be qualified?
Altman: We’d be happy to send you recommendations for people out there.
If we are to have any hope at all of a just society, the fox can’t guard the henhouse.
Oversight has to be independent—not determined by a list of people hand-picked by the companies we aim to oversee.
Unfortunately, as I met with senators and representatives and their staff in the days and months that followed, I realized that, just about everywhere I went, Sam Altman had been there first. Congresspeople are people. Like everyone else, they get a thrill out of meeting celebrities, and Altman is a celebrity. Take, for example, this December 2023 Washington Post report:
“I’ve never met anyone as smart as Sam,” said Sen. Kyrsten Sinema (I-Ariz.), who spent extensive time with Altman in Sun Valley, Idaho, last summer. “He’s an introvert and shy and humble, and all of those are things that are not normal for people on the Hill. But he’s very good at forming relationships with people on the Hill and he can help folks in government understand AI.”1
It’s all well and good for those in Congress to admire Altman, but that’s not what we are paying them for. We’re paying them to keep us safe.
But they can only do that if they can look at companies with enough distance to be neutral. The US Supreme Court’s 2010 decision in Citizens United, which more or less gave corporations carte blanche to influence elections, is not helping.2 As the late Justice John Paul Stevens wrote in his dissent, “A democracy cannot function effectively when its constituent members believe laws are being bought and sold.”3
Not long ago, former Google CEO Eric Schmidt told Meet the Press, “When this technology becomes more broadly available, which it will, and very quickly, the problem is going to be much worse.” This is a very reasonable worry. But then he added, “I would much rather have the current companies define reasonable boundaries,” because, he said, “there’s no way a non-industry person can understand what is possible.”4
That’s utter nonsense (which is what I said to Schmidt later that day in an email, albeit slightly more politely). Lots of scientists, not all on big tech’s payroll, are perfectly competent to understand what is possible—to the extent that anyone at all, in industry or otherwise, can understand these black boxes.
We have many precedents in other industries for involving independent experts in important decisions, in fields such as medicine, aviation, and nuclear energy. The idea that only people from industry can make such decisions is a myth.
And forget about self-regulation. It rarely, if ever, works. For example, as Georgetown fellow Mark MacCarthy discusses in his recent book Regulating Digital Industries,
In 2016, tech companies agreed to a European Union code of conduct on online terror and hate speech. The companies pledged to remove from their systems any material that incited hatred or acts of terror. They promised to review precise and substantial complaints about terrorist and hate content within twenty-four hours of receiving them and cut off access to the content, if required.5
Needless to say, terror and hate speech didn’t suddenly, magically disappear.
We don’t make sure our pharmaceuticals and food supply are safe simply by hoping for the best. And we don’t ensure their safety by leaving it strictly to the companies making medicine and food. We have independent regulators, like the Food and Drug Administration (FDA), the Federal Aviation Administration (FAA), and the Federal Trade Commission (FTC), to keep companies’ feet to the fire—with good reason. This need not be massively expensive, either; as Roger McNamee said to me in an email, “We have learned in areas far more complex than AI (e.g., pharmaceuticals, banking, food) that only a few regulators need to be graduate level experts in a given field. . . . If you regulate for desired outcomes, you change incentives. Eventually the industry does most of the work.”
The independence in independent oversight should not and need not be infinite; as MacCarthy has written:
[a digital regulator] should have sufficient regulatory authority to advance and balance sometimes conflicting policy goals and to adapt to changes. However, it must still be accountable to Congress, the courts, and the public, and to prevent partisan abuse, its authority to regulate content should be restricted. In addition, the rules surrounding its operation should be structured carefully to minimize the risks of capture by the industries it seeks to regulate.6
What we need most of all are independent scientists at the table, not funded by the big tech companies: people smart enough and trained enough to call bullshit on the companies when necessary. A great start would be re-funding and reopening the US Office of Technology Assessment, which “provided legislators with nonpartisan researchers on new developments and recommendations for dealing with digital problems.”7
A good example of why we can’t leave the regulation of AI strictly to governments—and of why governments need to listen more to scientists—is the fiasco of driverless cars. In August 2023, the California Public Utilities Commission (CPUC) gave Waymo and Cruise permission to greatly increase their operations.8 Within a week, Cruise was involved in multiple incidents. With egg on its face, California quickly backtracked, and Cruise sharply cut back its operations.9 The freedom that the utilities commission had briefly granted to Cruise was clearly premature.
That part was pretty widely reported. But the CPUC made another serious error as well: it had been asking manufacturers for only a tiny subset of the data it should have been collecting. One of the biggest omissions was that the state was not requiring enough data about “tele-operation,” which is to say about how much remote operators were involved in the actual moment-by-moment driving of the cars. A tiny bit might be expected; it was an open secret in the industry that so-called driverless cars, even ones with literally no safety driver aboard, would sometimes call into remote centers when they got stuck. One might even argue that it is better to have humans somewhere in the loop. But there is a world of difference between a car that needs human help once a day and one that needs constant help. The former might be seen as a project near completion, potentially of considerable benefit to society, possibly enough to outweigh the risks. A car that needed constant help might be so far from completion that it shouldn’t yet be on public roads at all. All residents of San Francisco, whether they like it or not, coexist with driverless cars, so they all have a right to know.
A friend of mine, who is a scientist, told the state of California that it should ask for such data. But the state appears not to have listened.
And then, in early November 2023, The New York Times broke a major story: the number of people in Cruise’s teleoperation center exceeded the number of driverless cars the company had on the road.10 Vehicles that were being billed as “autonomous” were really more like semi-autonomous, heavily reliant on off-stage humans. Public safety was being compromised for a system that appears to be what insiders call “Wizard of Oz,” in honor of the film’s famous line, “Pay no attention to the man behind the curtain.” A good, independent oversight body, with sufficiently empowered scientists on board, would never have allowed such a thing to happen.
AI may be too big to fail, but we can’t leave its oversight purely to government employees, either. Nor to regulatory regimes hand-picked by the companies. Independent scientists absolutely have to be in the loop.