We may create a great US agency, and I hope that we do, that may have jurisdiction over US corporations and US activity, but doesn’t have a thing to do with what’s going to bombard us from outside the United States. How do you give this international authority, the authority to regulate in a fair way for all entities involved in AI?
—Senator Dick Durbin (D-IL), asking me a tough question
Earlier in the Senate hearing, as well as in my TED Talk and in an essay in The Economist a month before, I had made the case not only for a national agency but for global AI governance.1
Why should we want such a thing? Why should we have any hope that we might get it? I see many fundamental reasons why virtually all nations should want some degree of international collaboration around AI governance.
First, no nation should want to give up its sovereignty to the big tech companies. But that is exactly the track we are on: one in which the big tech companies control essentially all the data and a large part of the economy, and thus set many of the rules. Of course, if a single country, or even a small group of countries, tries in any way to inhibit the rapaciousness of big tech, big tech is likely to threaten to leave that country or group of countries behind, as Altman did in May 2023, when he threatened to withdraw ChatGPT from the EU if it “overregulated.”2 Safety in numbers—countries working together, in unity—might in some instances be the only way for governments—rather than unelected tech leaders—to set the rules.
Second, no nation should wish to surrender to rings of cybercriminals who might use new technologies to manipulate markets and citizens to an unprecedented degree. As AI improves, cybercriminals may be able to escalate what they do in ways never before imaginable. Countries need to share information and work together to prevent that from happening. (There is already some degree of transnational collaboration around cybercrime, but AI increases the risks, and dealing with that will likely call for new techniques and new agreements.)
Third, no nation should want climate change to accelerate even faster than it already has, yet, as discussed earlier, the environmental costs of ever-growing large language models are considerable. Just as companies should want shared rules in order to avoid costly customized retraining for every nation (which is what they would face if each nation had its own rules), countries should want shared rules in order to minimize the ecological costs of such redundant retraining.
Fourth, no nation should wish to surrender to some superhuman intelligence that is grossly unaligned with human values. If and when conflict with AI comes, the world needs to be prepared. From climate change to the pandemic, international responses to major, even planet-threatening issues have been slow and disjointed, often too little, too late. AI poses a special challenge because of the speed at which it moves. It is conceivable that a single piece of superintelligent malware could, once written, spread around the globe in an instant. Nations need to be prepared; we need to have international treaties by which information is shared and procedures are in place in order to act—before a bad situation develops.
Fifth, no nation should wish for the equivalent of “forum shopping” or tax havens, in which rogue AI companies set up shop in countries with laxer laws, doing dodgy things that potentially put everyone at elevated risk. International cooperation is essential to preventing that.
Finally, there may be economies of scale to AI governance, so pooling resources at the global level may be needed. Experts in AI are expensive and scarce; rather than forcing every country to compete for that limited talent on its own, countries should work together. The same is true for research. As the time-tested African proverb says, “If you want to go quickly, go alone. If you want to go far, go together.”
The idea of an international agency is undeniably picking up steam, but it also occasionally gets some pushback. For example, Henry Kissinger himself saw fit to respond to my advocacy for international AI governance in his very last article, co-written with Harvard’s Graham Allison in October 2023:
In current proposals about ways to contain AI, one can hear many echoes of this past. The billionaire Elon Musk’s demand for a six-month pause on AI development, the AI researcher Eliezer Yudkowsky’s proposal to abolish AI, and the psychologist Gary Marcus’s demand that AI be controlled by a global governmental body essentially repeat proposals from the nuclear era that failed. The reason is that each would require leading states to subordinate their own sovereignty.3
As honored as I am to have been mentioned in Kissinger’s final work, the argument there seems to me to be something of a straw man—as if the only option were an absolute subordination of authority. In reality, less absolute systems, such as the International Atomic Energy Agency and the International Civil Aviation Organization, have historically been at least somewhat effective, and accepted. If we develop international governance, as I think we should, it will be because national governments are willing to give up a tiny bit of sovereignty in exchange for security. That is how it worked with nuclear weapons and aviation—and that is how it would work with AI. All international treaties require some degree of subordination of sovereignty; there is nothing special about AI in that regard.
I am not optimistic about this happening in the short term, but I am not altogether pessimistic either. In fact, I don’t think I have ever seen the world get behind an idea faster. Even Kissinger seemed to come around, concluding at the end of his essay that “in the longer run, a global AI order will be required.”
Frankly, when I called for international AI governance in early 2023, I was not hopeful. AI ethicist Rumman Chowdhury had just spoken up as well, in an op-ed in WIRED that also came out in April, but relatively few other people seemed to hold out much hope.4 Some even counseled me to downplay international AI governance in my upcoming Senate testimony. At that point, my bet would have been against any kind of international governance for AI happening at all.
Instead, to my amazement, widespread enthusiasm for international AI governance was expressed in the months that followed, and not just from civil society.
Altman was one of the first prominent tech leaders to lend it public support; he directly backed me up in the Senate hearing, in response to a question from Senator Durbin about international AI governance.
I want to echo support for what Mr. Marcus said. I think the US should lead here and do things first, but to be effective, we do need something global. . . . There is precedent. I know it sounds naive to call for something like this, and it sounds really hard. There is precedent. We’ve done it before with the IAEA [International Atomic Energy Agency]. We’ve talked about doing it for other technologies. . . . I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of that are actually workable, even though it sounds on its face, like an impractical idea. And I think it would be great for the world. Thank you, Mr. Chairman.
I was frankly overjoyed. Within a few weeks after that, a number of world leaders also started speaking up for global AI governance, including the UK prime minister, Rishi Sunak, and the UN Secretary-General, António Guterres; toward the end of 2023, the UN rolled out a formal draft proposal.5 Demis Hassabis of DeepMind (now Google DeepMind) and others pledged support in a meeting with Sunak.6 By the end of 2023, even the Pope chimed in, calling for a legally binding AI treaty and urging “the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence.”7
Amen.
Still, Rome wasn’t built in a day, and no international treaty ever has been either. Getting there will be an uphill battle.
And, crucially, as Stanford professor and former European Parliament member Marietje Schaake has argued, we shouldn’t expect any existing governance model to suffice. One proposal, for example, has been to model global AI governance on the Intergovernmental Panel on Climate Change (IPCC), which writes regular, expert-level reports on climate change. An AI parallel could certainly be built, but what’s been proposed would have no real authority. In Schaake’s words:
Even before the United Kingdom held its inaugural AI Safety Summit, plans for the new “IPCC for AI” stressed that the body’s function would not be to issue policy recommendations. Instead, it would periodically distill AI research, highlight shared concerns, and outline policy options, without directly offering counsel. This limited agenda contains no prospect of a binding treaty that can offer real protections and check corporate power.8
That is way too weak. Schaake’s sharpest point, though, perhaps directed particularly toward the United States, is that we can’t really expect international AI governance to work until we get national AI governance to work first: “Establishing institutions that will ‘set norms and standards’ and ‘monitor compliance’ without pushing for national and international rules at the same time is naive at best and deliberately self-serving at worst.”
Or, as Neil Turkewitz put it on X, “Without legal accountability for AI harms, all the architecture and ‘self-regulation’ is merely compliance theatre.”9
When Prime Minister Sunak advocated for global AI governance at the beginning of November, and his Minister for AI and Intellectual Property called, just two weeks later, for leaving everything to industry “in the short term,” it was hard to take Sunak’s calls for global governance seriously.10
We need both—national and global AI governance—and the two need to work together.